Designing a Backout Strategy for DB2 Database Changes

When I was a kid, whenever my family went to a mall or an amusement park, I was always the one to say, “If we get separated, we’ll meet here.” I’ve carried that early caution into my adult life. Often, I have two or three different levels of backout strategy – even for simple database changes.

While it is true that one must tailor the backout strategy to the specific change, there are several categories of changes we can talk about. And just a bit of advance planning will save you hours of work if a backout is required.

Note: I’m not talking about FixPack backouts in this post, though I was the queen of those for a while when working on a vended database that was very sensitive to such changes. I may cover them in a future post.

Types of Database Changes

Some database changes are going to be complex combinations of the types below, and others may be a single statement.

Inserting Data

Some changes may consist simply of inserting new data into one or more tables. Obviously your e-commerce application (WebSphere Commerce or other) is going to be changing data constantly. But there are other tables that, for whatever reason, you’ll have to manually insert data into. The most obvious antidote to inserting data is simply deleting it.
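As a sketch, the change script and its backout can be written as a pair up front (table, columns, and values here are hypothetical):

insert into schema.store_config (config_id, name, value) values (9001, 'feature.flag', 'on');

-- backout: delete exactly the row the change inserted (hypothetical names)
delete from schema.store_config where config_id = 9001;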

Updating Data

Many changes involve updating rows in the database. Again, our e-commerce application (WebSphere Commerce or other) is going to be changing data constantly. If you’re updating data manually, the important thing is to have some way of recording what data was changed and how it looked before the change.

Deleting Data

Removing data from the database is another data-related change. Again, our e-commerce application (WebSphere Commerce or other) is going to be changing data constantly. If you’re deleting data manually, then you need both a before-image of the data and an awareness of any cascading deletes. Cascading deletes can be a real issue in WebSphere Commerce databases.
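One way to spot cascading deletes before you run anything is to ask the catalog which foreign keys point at your table with a CASCADE delete rule. A minimal sketch, assuming the parent table is SCHEMA.CATGROUP:

db2 "select constname, tabschema, tabname from syscat.references where reftabschema = 'SCHEMA' and reftabname = 'CATGROUP' and deleterule = 'C'"

Any rows returned name child tables that will silently lose rows when you delete from the parent, so those need before-images too.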

Adding Objects

While more often true for custom-designed databases than vended databases like WebSphere Commerce, the chances are that at some point, you’ll end up adding tables, views, indexes, triggers, or other objects. The main question to ask yourself here is IF you even need to remove objects if the change is backed out. In some cases with added objects, you can just let them remain in the database until the change is attempted again. I frequently add objects ahead of the actual application change that will make use of them.
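For instance, an added index (hypothetical names below) costs almost nothing to leave in place, and the backout is a one-liner if you do need it:

create index schema.ix_orders_status on schema.orders (status);

-- backout, only if removing the object is actually required:
drop index schema.ix_orders_status;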

Altering Objects

Altering existing objects happens in WebSphere Commerce databases too, though it is more likely in custom-designed databases. Often “altering” may be accomplished by dropping and recreating rather than with an “ALTER” statement. Either way, it is important to understand how to back out the alteration – and any data-retention steps that may be needed if the object is a table.
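As a sketch of what that can mean, assume a hypothetical added column. On DB2 LUW, dropping a column leaves the table in a reorg-pending state, so the backout is more than just the reverse ALTER:

alter table schema.orders add column promo_code varchar(10);

-- backout: drop the column, then reorg to clear the reorg-pending state
alter table schema.orders drop column promo_code;
reorg table schema.orders;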

Dropping Objects

“Drop” still makes me stop for a minute to double-check my syntax and confirm that I’m on the right database/server. It’s a bit final. Having the syntax to re-create a dropped object is usually not difficult.

Database/Database Manager/Registry Configuration

Depending on your organization, you may be allowed to change parameters as needed on the fly or you may have to go through hours of meetings just to justify a bufferpool tweak. In either case, it’s important to note WHEN such changes are made and the previous values so you can easily back them out.
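A minimal sketch of noting the old value first (parameter, database name, and values are hypothetical):

# record the current value before the change
db2 get db cfg for MYDB | grep -i LOGBUFSZ
# make the change
db2 update db cfg for MYDB using LOGBUFSZ 2048
# backout: set the parameter back to the recorded value
db2 update db cfg for MYDB using LOGBUFSZ 1024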

Change Failure Scenarios

Failure is in the eye of the beholder; it can mean different things to different people. Typically, my developers have their own development VMs where they work on things, then they pull me in to make database changes in Dev (and of course for Stage, Prod, etc.). So frequently the first I hear of a change is a script or a series of SQL statements. I review these carefully; I frequently ask questions and sometimes even tell them they can’t do something the way they’re doing it. When it comes to actually executing the change, there’s always someone to test – both regression testing and testing new functionality – usually as an end user.

Actual Statement Failures

Perhaps the rarest failure: you go to run something and it simply doesn’t work. Most often, this is a simple syntax error that you can resolve – they didn’t specify the schema, or forgot a semicolon, or whatever. Your next line of defense is to go back to whoever wrote it and get them to fix it. I’ve had production changes I had to abort because of this kind of failure, with the change requester unavailable to clarify what they actually meant.

You Broke It!

The change looked smooth – no SQL error codes, everything ran – but immediately after the change, your testers report that not only is it not working, but the site or app doesn’t even work as well as it used to. Depending on what kind of changes were done, there can be hours of investigation ahead.

It Doesn’t Work the Way We Expected It To

Sure, they tested it for weeks in dev. But go to run it in prod and someone notices that they forgot something important. Backout happens more often for “business” reasons than for things actually being technically broken.

It Works, but Performance is Bad

If I had a nickel for every time I’ve heard “It worked on my VM”. Some code just doesn’t stand up to load, and one of the frustrating aspects of this kind of failure is that it may not appear right away.

I Know it Has Been 3 Weeks, but That Change Caused a Problem

Sometimes problems don’t get noticed for days or weeks. Some changes require a detailed strategy as to what will be done if a backout is decided on after transactions have already run against the new structures. This can be the most difficult type of backout for a DBA – can you sort out the data and shove it back into its old format?

Backup and Restore – Always Have It in Your Back Pocket

Restoring is usually my last resort in backout plans. Depending on complexity, it may be your only option. If you ran scripts that touched hundreds of rows in dozens of tables or did a lot of altering of existing objects, restore is a good choice. But restore is a sledgehammer. My WebSphere Commerce databases have over 800 tables in them; even a complex change is rarely going to touch more than 50 of them. Restoring all 800 tables is more than you need to do.

The other problem with restore is that if you’re making changes while applications are accessing the database, then you will also lose any transactions or data those other applications added or changed in the meantime.

Restore also becomes less of an option the larger your database is. I don’t currently have a database where the actual restore would take longer than an hour to run, with minimal rollforward.

But the good thing about restore is that it always works to get you back to where you were.

If this is one of your tiers of backout, then you most likely want to take a backup (online often works) immediately before making the change. While I still struggle to get developers to understand that I can restore the database to nearly any second in the last two weeks, it’s still easier and faster to restore that backup and roll forward to the end of it than to mess with figuring out which backup image(s) and log files to use and rolling forward through more. In the real-world disaster-restore scenarios I’ve seen, the rollforward takes far longer than the restore.
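A minimal sketch of that tier (database name, path, and timestamp are placeholders):

db2 backup db MYDB online to /db2backups include logs
# ... make the change; then, only if a backout is called for:
db2 restore db MYDB from /db2backups taken at 20250101120000
db2 rollforward db MYDB to end of backup and stop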

Export/Import – Only the Data You Need

If it’s a data change, and not too complicated, you can export only the data you would need to import for a backout, tailoring your import statements to the changes being made. For example, if I were doing the following updates:

update catgrpdesc set keyword = 'category_foobar:FooBar,parentCategory:S' where catgroup_id = 10000074;
update catgrpdesc set keyword = 'category_Doohickies:DooHickies,parentCategory:S' where catgroup_id = 10000917;
update catgrpdesc set keyword = 'category_Widgets:Widgets,parentCategory:S' where catgroup_id = 20001;
update catgrpdesc set keyword = 'category_ThingAMaBobs:Thing A Ma Bobs,parentCategory:AP' where catgroup_id = 10000910;

Then I would use an export statement like this to export the specific rows:

db2 "export to catgrpdesc.mychange.del of del select * from schema.catgrpdesc where catgroup_id in (10000074,10000917,20001,10000910) with ur" 

Being me, I’d probably review it three times to make sure I had the right catgroup_ids, too.

Then if an actual backout was called for, I would insert_update the old rows back into the table, using:

db2 "import from catgrpdesc.mychange.del of del insert_update into schema.catgrpdesc"

Note that this method doesn’t work so well for backing out inserts; you would need delete statements to back those out.

This is usually my first choice on backout if the level of complexity allows it.

Export/Import – Whole Tables

If it’s at a level of complexity where I’m not sure I can get it all with the above method, I might export the whole tables. I also export whole tables in addition to the above method, just in case I missed something. Exporting entire tables is a good method when you’re dealing with about 20 or fewer tables AND you know that the referential integrity (RI) will actually allow you to re-import them.

The reason I add that caveat is that the ideal import method for such exports is REPLACE, and REPLACE cannot be used for tables with foreign keys referencing them – which covers the vast majority of tables in a WebSphere Commerce database. You can still use this method and choose insert_update instead, but that will not handle inserted rows, and you would again need specific delete statements to deal with those.
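For a table that nothing references, a whole-table backout might look like this sketch (assuming, for illustration, that no foreign keys reference schema.catgrpdesc):

db2 "export to catgrpdesc.full.del of del select * from schema.catgrpdesc with ur"
db2 "import from catgrpdesc.full.del of del replace into schema.catgrpdesc"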

db2look – Your Best Friend When Recreating Objects

Whenever I’m dealing with changing objects and not just data, I take a full db2look of the database. At least in previous versions, the syntax allowing you to restrict db2look to specific objects or subsets of objects was particularly buggy, so I never count on it. If you have a full database db2look, you can always pull out the syntax you need. My favorite db2look syntax is:

db2look -d dbname -e -a -x -o db2look.ddl

This tells db2look to extract the DDL for all objects for all users, including grants. There are a few objects it doesn’t grab with this syntax, like tablespaces, so if you’re altering tablespaces or aren’t sure, make sure the DDL for your objects is in the output file.
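If you do need tablespace or bufferpool DDL, the -l option is meant to generate it; I’d run that as a separate extract:

db2look -d dbname -l -o tablespaces.ddl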

Ember Crooks

Ember is always curious and thrives on change. She has built internationally recognized expertise in IBM Db2, spent a year working with high-volume MySQL, and is now learning Snowflake. Ember shares both posts about her core skill sets and her journey learning Snowflake.

Ember lives in Denver and works from home.


4 Comments

  1. Ember,

    I listened to a recording of your DB2 Night Show on this topic recently and enjoyed it very much.

    Firstly, I have a question:-

    When preserving the current data before an insert, update or delete statement, why do you export the data rather than copying it to a backup copy of the table within the database?

    Secondly, I’d like to share my thoughts on exporting data for the purpose of backing out a DML change:-

    I have experienced problems in the past when importing delimited data back into a table. Issues such as the delimiter character and newline characters appearing in the data can cause problems. For that reason I always prefer to export in IXF format.

    When using one statement to export the data and another to delete or update it, there is a small risk that the WHERE clause on the two statements may not be identical. One technique I particularly like is combining the export, select and delete/update statements as shown in the example below.

    export to backup.ixf of ixf
    select *
    from old table
    (delete from <table-name>
     where …);

    This provides a fool-proof way of exporting exactly the rows that have been deleted or updated.

    Regards,

    Mike

    • First, so sorry for the late response. My spam filter and I seem to be having some relationship issues that I’ve just become aware of.

      I generally prefer .del because it is easily human readable, and some backouts may require me to actually go through the data to pull out a subset of it. Yes, I could also create a table to do that, I suppose, but I have had developers ask for a csv of the data before and it’s nice to just have it. I have never had issues with the data in del format, but I admit to exporting in both formats for some changes, just because I like options to be available on backout.

      The main reason I don’t create “shadow” tables is that some of my clients object to tables being created or require change management around creating a table – so I avoid it pretty strongly. But it’s certainly a valid approach if it works for your environment.

      The idea of selecting from the delete is intriguing to me. If I can get past my desire to have my backout data in place before actually performing the change, that might work nicely. I’ll have to play with it a bit and see how I like it.

      Thanks for the comment – interesting stuff here.

  2. Mike/Ember, thanks both for sharing techniques. Quite useful and interesting to know what people out there do.

    I prefer Mike’s idea of exporting to ixf – and, if needed, importing the data into a table (in the same database or a test database, depending on the need). But in the context of Commerce, I believe csv is good too.

    ixf gives platform portability and is much easier for a quick import with the REPLACE_CREATE option, IMHO.

    Mike, I have used your technique of exporting from OLD TABLE occasionally, but I am not very comfortable with it yet. My concern is: what happens if the export fails, say because the filesystem fills up in the middle of the export? Does the delete carry on nevertheless, or does it roll back, given that EXPORT cannot be called within the context of a UOW (it does internal commits, as far as I understand)? Correct me if I am wrong. If the export fails during initialization (e.g. you are writing to a read-only filesystem), the delete statement does not happen.

    Going back to Mike’s point of copying data into another table within the same database, you cannot use INSERT INTO BACKUP_TABLE SELECT * FROM OLD TABLE (DELETE FROM …). A workaround is to load from a cursor:

    DECLARE C1 CURSOR FOR SELECT * FROM OLD TABLE (DELETE FROM …)
    LOAD FROM C1 OF CURSOR INSERT INTO BACKUP_TABLE

    Again, the problem is what happens if this fails.

    Note to Ember: Feel free to moderate if my response is not relevant to your main topic. Apologies.

    • I only moderate comments because I get a lot of spam comments. I approve every comment that’s legitimate except for the few where the poster just wants to get in contact with me rather than actually comment on the topic. I love every real comment!
