Data Movement Options for Commerce Databases – Synchronizing Data Between Commerce Environments

So one of the most common questions I get is about moving data between homogeneous Commerce databases. Our standard setup includes four environments – Dev, QA, Stage, and Prod. Dev/QA are a stagingprop pair, Stage/Prod are a stagingprop pair, and Prod usually has HADR. With four environments, things can easily get out of sync. For the most part, business users make changes on staging, and stagingprop moves those changes to prod. The catalog is therefore kept in sync between staging and prod, and we know right away if stagingprop fails and it gets out of sync. Some data we never expect in an environment other than prod – and in fact do not want to end up in the lower environments, where many developers have unrestricted access and security is not as tight. That data includes orders, users, payment information, etc.

I hope to present a series of postings to address a number of scenarios and approaches for this kind of data movement. I’m not talking in this series about data movement as in loading catalog or user data into the database for the first time, or keeping things in sync with a separate system of record.

So the most common goals are:

  1. Keep private data in production only
  2. Synchronize catalog and other data across environments
  3. Staging is the master for catalog and data that is needed in all environments

Data Movement Scenarios:

Synchronizing Data Between Commerce Environments

Ok, so the idea in this post is that you've got your Commerce environments up and running – with a synchronized, stagingprop'ed stage/prod pair and one or more development/test/QA environments as well. I frequently get an email or ticket saying something like "Copy production to devint by 4pm today". That is rarely a complete request, and rarely doable as stated.

In all of the methods below, your target environment will need to be down (or will not work right) while the copy is in progress.

DB2 Database level backup/restore

One of the first concerns is private data. Maybe you're storing card data and it's PCI compliance you're worried about, or maybe it's just plain protection of customer data such as email addresses and other information that must be kept confidential. Our security setup means extremely restricted permissions on who can query and change data in production. For that reason, after go-live and/or data load, I don't recommend doing a full database backup and restore from production to any other environment. Most of the time, what is really needed is just the data that is part of stagingprop – what I refer to as 'catalog' data – not the database catalog, but the "product" catalog of the site. Because your staging environment is kept in sync with production on this data, frequently a full database backup and restore from staging can be done.
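Mechanically, it's just a standard DB2 backup and restore. A minimal sketch, assuming illustrative database names (STAGDB for staging, DEVDB for the target), paths, and timestamp – adjust everything to your environment:

    # Offline backup of the staging database (an online backup works too,
    # if archive logging is on); connections must be terminated first
    db2 force application all
    db2 deactivate db STAGDB
    db2 "backup db STAGDB to /dbbackups compress"

    # On the target server: restore the image over the existing database.
    # REPLACE EXISTING overwrites DEVDB, so take your fall-back backup first.
    db2 "restore db STAGDB from /dbbackups taken at 20120101120000 into DEVDB replace existing"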

There are some considerations with a backup of the database for one environment and the restore into another:

  1. The Commerce version and FixPack and FeaturePack and Feature Enablement must be exactly identical. If they are not, you will completely mess up your environments and frequently end up with an environment with a feature or FixPack only partly applied and no way to fully apply it or to back it out. I have ended up rebuilding environments from scratch when this happens – it’s not fun.
  2. The Merchant Key must be the same. Whether you keep the Merchant Key in sync between two or more environments or you temporarily change it to be identical, if you restore with different Merchant Keys, any encrypted data will be completely unusable, and you may not realize it right away.
  3. If you’re restoring a staging database to production (or to an environment that is a target of a stagingprop pair), you’ll want to drop the staging triggers on the target after the restore (a sketch for generating the drop statements follows this list). If you don’t, you’ll be unable to delete members, and performance may suffer.
  4. If you’re restoring from production or an environment that is the source of a stagingprop pair into a staging database or a database that is the target of a stagingprop pair, you’ll want to create the staging triggers (and possibly sequences on Commerce 7) after the restore.
  5. If you’re doing key-splitting or anything fancy there, you’ll want to get the KEYS table to the proper state – possibly by exporting it before the restore and importing it back in afterward.
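For item 3, the drop statements can be generated from the catalog. A hedged sketch – it assumes the staging triggers follow the usual STAG* naming convention, so verify that against your own schema before running the output:

    # Generate DROP TRIGGER statements for the staging triggers
    # (assumes the STAG* naming convention -- check your schema first)
    db2 connect to DEVDB
    db2 -x "SELECT 'DROP TRIGGER ' || RTRIM(TRIGSCHEMA) || '.' || RTRIM(TRIGNAME) || ';' FROM SYSCAT.TRIGGERS WHERE TRIGNAME LIKE 'STAG%'" > drop_staging_triggers.ddl
    db2 -tvf drop_staging_triggers.ddl

For item 4, going the other direction, the trigger DDL ships with Commerce (on my installs, under the schema directory), so I won't reproduce it here.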

Other than those considerations, backup/restore is a quick and easy way to move data, and is my preferred option when the items above can be dealt with. I do still back up the target database before doing the restore so I have a solid fall-back.

Non-standard use of stagingcopy

sshhh, don't tell IBM. But you can use stagingcopy other than as intended. IBM intends that you use stagingcopy exactly once – when you first create your staging server. But what happens if I specify any databases I choose when executing it? Well, it still copies that critical data. As I mentioned previously, stagingcopy has drawbacks. It is slow – running deletes and imports on dozens of tables as a single unit of work. So you'll have to increase the size of the log files on your target, bumping LOGSECOND up to 255-LOGPRIMARY and possibly increasing LOGFILSIZ as well. Also, stagingcopy deletes users and some other things you might have expected to keep, so be sure to test the results to make sure they're what you need. Stagingcopy will not work for especially large databases – you just won't be able to make the transaction log files large enough.
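The log changes are ordinary db cfg updates. Example values only – size LOGFILSIZ to what your disk can take:

    # LOGFILSIZ is in 4 KB pages and takes effect on the next database
    # activation; LOGSECOND can be changed online.
    # With LOGPRIMARY at 20, LOGSECOND 235 gives the 255 total mentioned above.
    db2 update db cfg for DEVDB using LOGSECOND 235
    db2 update db cfg for DEVDB using LOGFILSIZ 16384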

Stagingcopy does create the staging triggers (and maybe sequences in Commerce 7?) on the target database when it is done, so you would want to drop them if the target database is not a staging environment.

Stagingcopy can also be used to generate a script (with the -script_file option). If you need to or want to make changes to what stagingcopy is doing, that script is an excellent way to do it. That is actually my preferred method for developers moving data to their personal development environments – the scripts and export files are usually something that can be moved over to Windows implementations of Commerce from the Linux or Unix systems.
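As an illustration, generating the script looks something like this – the flag names are from my memory of the Commerce 7 utility, so double-check the usage output on your version:

    # Generate the script instead of executing the copy; flags from memory --
    # verify against your version's stagingcopy usage output
    cd WC_installdir/bin
    ./stagingcopy.sh -scope _all_ \
        -sourcedb STAGDB -sourcedb_user wcs -sourcedb_passwd '********' \
        -destdb DEVDB -destdb_user wcs -destdb_passwd '********' \
        -script_file /tmp/stagingcopy.script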

A short list of tips…

  1. Remember to take a backup of the target database before starting the stagingcopy – it is very likely stagingcopy will fail and you’ll have to start again. I can’t tell you how many times I’ve used that backup, even with my extensive experience.
  2. Run your process in the background (I like starting the command with ‘nohup’ and ending it with ‘&’), so that if you lose your connection, you won’t have to start over (see the sketch after this list).
  3. Make your logs as large as you possibly can (given disk limits), increasing the LOGSECOND and/or LOGFILSIZ db cfg parameters
  4. Have a person actually watching/checking in every few minutes to catch failures as soon as possible
  5. Set expectations that the process will take a while
  6. Take a backup of the target database when you’re done so that you have a restore point moving forward
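For tip 2, the pattern is the standard nohup-and-background one; the output file name is just an example:

    # Keep the copy running even if the SSH session drops (tip 2);
    # arguments are as in the earlier sketch
    nohup ./stagingcopy.sh -scope _all_ ... > stagingcopy.out 2>&1 &
    tail -f stagingcopy.out    # an actual human watching for failures (tip 4)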

Traditional database methods: db2move, load, etc

Somewhere out there, a DBA is reading this and thinking “well, I’d just use db2move”. Ok, so I’ve done it that way – using db2move or a custom script of load statements. But you’d better have a script ready that will go through the tables in check pending, take them out of it, and report any violations directly to you, because a simple load on the MEMBER table, for example, will put approximately 300 tables into check pending. And in my humble opinion, that’s just too many to run set integrity statements on manually. Usually when I end up at this method, I spend a solid 16 hours working closely with a developer to work through all the exceptions. And believe me, even if you have the mentioned script, you’ll likely have exceptions.
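That script is mostly catalog queries. A minimal sketch – STATUS = 'C' in SYSCAT.TABLES marks check pending, and a child can't be checked while its parent is still pending, so you may need to run the generated file in several passes until it comes back empty:

    # Find tables in check pending and generate SET INTEGRITY statements;
    # rerun until the generated file is empty (parents must clear before children)
    db2 -x "SELECT 'SET INTEGRITY FOR ' || RTRIM(TABSCHEMA) || '.' || RTRIM(TABNAME) || ' IMMEDIATE CHECKED;' FROM SYSCAT.TABLES WHERE STATUS = 'C'" > set_integrity.sql
    db2 -tvf set_integrity.sql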

The RI (Referential Integrity) in a Commerce database is simply insane (why, oh why does each item on an order have to have a member id associated with it in addition to the member id associated with the order?). I bet I could find three tables that would put 90% of the database into check pending – yes, that’s over 600 tables. Until you’ve done it, you just won’t understand the scale. So go ahead and come up with a list of tables in the right order if you’re not using LOAD, plus the script for set integrity, and then make sure someone tests the heck out of it. And don’t lose those scripts, because if they’re needed once, they’ll likely be needed again.

This is not my favorite method, though I have used it and probably will again – there are some cases where only it will do, because it lets you finely control which tables are moved, provided you can avoid causing exceptions to RI constraints. If you have to go this route, starting with a script file from stagingcopy (and altering it to meet your needs) is a good way to go, because it gets you the correct table order and makes you think the way stagingcopy runs.

A short list of tips…

  1. Take a backup of the target database before starting – having something to fall back to can be a lifesaver.
  2. If possible, generate and start from the script file generated when using the -script_file option on stagingcopy. If you’re not using LOAD, you’ll need this to get tables done in the correct order
  3. If you’re using LOAD, be prepared with a script to help you set integrity and dump conflicts to an exception table (see the sketch after this list). Set integrity statements will take longer than the load itself in all likelihood.
  4. Have someone test thoroughly when you’re done – nothing worse than discovering weeks later that there was an issue and trying to get everything fixed then.
  5. Take a backup of the target database when you’re done so that you have a restore point moving forward
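For tip 3, the exception-table form of SET INTEGRITY captures the violating rows instead of failing outright. A sketch using the MEMBER table; the schema and exception table name here are hypothetical:

    # The exception table must match the base table's structure (name is hypothetical)
    db2 "CREATE TABLE WCS.MEMBER_EXC LIKE WCS.MEMBER"
    # Move RI violators into MEMBER_EXC instead of failing the statement
    db2 "SET INTEGRITY FOR WCS.MEMBER IMMEDIATE CHECKED FOR EXCEPTION IN WCS.MEMBER USE WCS.MEMBER_EXC"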
Comments

    • Using stagingprop between environments that are not set up for it usually does not work. Using stagingcopy with the caveats in this post works with many environments, as long as it’s ok to lose all the users, and you either have gigantic log files or the very most recent WCS FixPack and run stagingcopy in place without using the script method – as it actually does interim commits! I recently spent a whole week on a stagingcopy in a running environment, so it can be excessively time consuming.

      I’m finding the restore method harder and harder. First, search configuration started to be stored in the environments, and now, if the content versioning feature is enabled in one environment and not the other, I haven’t yet figured out how to make the restore method work.

  1. Hello,
    I have a requirement to move the data related to marketing content from PROD to DEV/QA. We don’t have a staging server in place.
    Do you have any idea how it could be done?
