This post is specific to WebSphere Commerce. In my experience, when clients upgrade from one WebSphere Commerce version to another (with 6 to 7 being the current focus), they usually also upgrade or change DB2, the OS, the hardware – basically everything. To that end, my upgrade experience assumes moving at least from one server to another. I’m also not providing a recipe here, but sharing a specific experience – yours may be different.
In this scenario, we were moving one store out of a multi-store Commerce implementation. V6 was using DB2 8.2 on AIX; V7 is using DB2 9.7 on Linux. This, of course, means that database restores are right out the window. Luckily, we’re talking about less than 15 GB of data. We were also only moving users – catalog data was loaded from scratch for other reasons, and the site in question does not store orders longer than it takes to pass them on to another system.
We picked our QA server to do the testing on. For this kind of data move, make sure you have time for at least two iterations of testing, because the question is not whether you’ll run into issues, but which issues you’ll run into. Each cycle should include testing as thorough as possible.
The High Level Plan
The high level plan looked like this:
- Backup v6 prod database
- Use db2move to copy v6 prod database to transitional v7 database
- Misc corrective SQL
- Run Commerce utility to migrate database from v6 to v7
- Misc corrective SQL
- Export data from Transitional database
- Backup v7 prod database
- Import data into v7 prod database
- Final Backups
Theoretically, I could have dug out only the SQL related to the user tables and run just that, instead of running /opt/IBM/WebSphere/CommerceServer70/bin/wcim_ant.sh -tier db -action migrate on a full copy of the database. The problem is that it’s not just the version-migration SQL to worry about, but also the FixPack SQL. And if I apply the migration to the whole database, the other tables are handy and ready to grab if it is later discovered that something was missed.
I initially attempted to run the Commerce migration tool from a Commerce 7 server against the database on the old AIX server, but I quickly discovered that this did not work and was not supported across OS and DB2 versions, so I brought the whole database over with db2look/db2move. This was feasible because the database was a reasonable size and didn’t include orders.
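The db2look/db2move copy can be sketched as follows. The database names (WCSV6 for the source, WCTRANS for the transitional v7 database) are illustrative, not the ones from this migration, and the commands require a DB2 instance to run against:

```shell
# On the old AIX server: extract DDL and export all table data.
db2look -d WCSV6 -e -l -x -o wcsv6_ddl.sql   # DDL plus tablespaces and grants
db2move WCSV6 export                          # IXF files + db2move.lst manifest

# Copy wcsv6_ddl.sql and the export directory to the new Linux server, then
# (after editing the DDL for new database/tablespace names as needed):
db2 connect to WCTRANS
db2 -tvf wcsv6_ddl.sql                        # recreate the objects
db2move WCTRANS load                          # load the exported data
```

The exported IXF files are portable across platforms and DB2 versions, which is what makes this work where a plain backup/restore between AIX/DB2 8.2 and Linux/DB2 9.7 cannot.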
Make sure that your dbclean is current, and that you’re only pulling the minimal set of database users over. For example, there was absolutely no need to move guest users, as in our setup there was no way for a user to return to their guest identity across the site migration anyway.
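Purging guest users is a standard dbclean job. A minimal sketch, with an illustrative path and retention period – the exact flags vary by release, so verify against the dbclean documentation for your version:

```shell
# Illustrative invocation: purge guest users older than 30 days
# before exporting, so they never enter the data move.
cd /opt/IBM/WebSphere/CommerceServer60/bin
./dbclean.sh -object user -type guest -days 30
```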
Data transformation – you’re pretty likely to change the store numbers (whether on purpose or not), so be aware of this when moving mbrattrval and any other tables that reference the store number. It was relatively easy to populate the right store number on export; it would have been harder to do during or after import.
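Rewriting the store number at export time can be done inline in the export SELECT. A sketch against mbrattrval, with hypothetical store IDs (10001 on v6, 20001 on v7) and a deliberately abbreviated column list – use your table’s full column list in practice:

```sql
-- Hypothetical IDs: 10001 = old v6 store, 20001 = new v7 store.
-- Substituting the new store number here keeps the import side simple.
EXPORT TO mbrattrval.ixf OF IXF
  SELECT mbrattrval_id, member_id, mbrattr_id,
         20001,              -- replaces storeent_id for the moved store
         stringvalue
  FROM mbrattrval
  WHERE storeent_id = 10001;
```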
Delta moves of data can be more difficult and time consuming than the original load, so if possible, start with a clean target database. This may involve clearing out some test users before the final data move.
If you want to keep users’ passwords the same, you either have to match the merchant keys of the old and new systems, or include appropriate merchant key changes during the data movement. We initially planned on the second method, but it added 2 hours to the roughly 10 hours of work we already had to fit into a 12-hour window, so we decided to match the merchant keys and then change them shortly after go-live. This may vary from implementation to implementation.
This list will change depending on which features you’re using, and ours covered only users, not orders. Here are the tables we moved – all had a WHERE clause excluding at least some data:
I also worked with the developers on over 2 hours’ worth of “misc SQL”. Some of that was fixing data issues (dealing with ‘ characters in the data, which our initial SQL did not handle, and removing a few ^M’s), and some was more logical manipulation discovered in testing.
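Two of those mechanical fixes lend themselves to one-liners on the delimited export files. The file name and sample data below are made up for illustration:

```shell
# Sample delimited export containing both problems: embedded single
# quotes and DOS carriage returns (^M).
printf "O'Brien\r\nD'Arcy\r\n" > users.del

sed -i "s/'/''/g" users.del                # double up ' so SQL literals survive
tr -d '\r' < users.del > users.clean.del   # strip the ^M characters
```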
If you’re going through a similar process, don’t forget runstats at the end – you’ll want to run it “with distribution and detailed indexes all” for all non-volatile tables.
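One statement per table does it; the wcs schema and users table below are placeholders for your own schema and the tables you actually touched:

```sql
-- Repeat for each non-volatile table affected by the migration.
RUNSTATS ON TABLE wcs.users
  WITH DISTRIBUTION AND DETAILED INDEXES ALL;
```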
With our detailed testing, this particular migration went pretty well, but it was 10 solid hours of me running over 48 individual steps. Never, ever attempt this without thorough testing.