There are a number of situations in which a DBA may need to determine when data in a table is being inserted or changed. The most obvious tool for tracking this is the db2 audit facility. If you haven't looked at audit in a while, it has been significantly improved in recent versions, and you can now limit the scope of auditing to a single table.
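For example, a table-scoped audit policy can be created and assigned roughly like this – a sketch, with the policy name and category choices being illustrative rather than a recommendation:

```sql
-- Create an audit policy that captures executed statements,
-- both successful and failed (policy name is illustrative)
CREATE AUDIT POLICY TBLCHG_POLICY
  CATEGORIES EXECUTE STATUS BOTH
  ERROR TYPE AUDIT;

-- Scope auditing to just the one table of interest
AUDIT TABLE WSCOMUSR.TI_DELTA_CATENTRY USING POLICY TBLCHG_POLICY;
```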
There’s another quick and dirty method too – using triggers on every operation against the table to write data to a separate table.
What to Track
There are two different directions to take in tracking data, and the choice between them depends on the goals of tracking the information and on the structure of the particular table.
The first direction is to create a very simple table that has the primary key of the table being tracked, plus two extra columns to record when a change was made and what kind of change it was. This helps if you simply don't know when changes are happening and need to find out.
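Assuming, hypothetically, that the tracked table's primary key is a single BIGINT column, the minimal version might look something like this (table and column names are illustrative):

```sql
-- Minimal change-tracking table: the tracked table's primary key
-- plus when the change happened and what kind of change it was
CREATE TABLE DBA.X_CHG_MINIMAL (
  TRACKED_PK       BIGINT NOT NULL,
  ACTION_TIMESTAMP TIMESTAMP NOT NULL WITH DEFAULT CURRENT TIMESTAMP,
  ACTION_TYPE      CHAR(1)  -- 'I', 'U', or 'D'
) IN USERSPACE1;
```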
The second direction is to track every column, with before and after images for updates, so you can understand exactly what is happening and when. This works better than the first method if you need more detail, or if there are many changes and you want to understand their sequence. This direction requires the tracking table to have every column that the table being tracked has, plus two or three additional columns. That can be cumbersome for a particularly wide table.
Creating Tracking Tables
There are a couple of shortcuts to creating the tables you need, assuming you are taking the second direction described above. The harder way is to grab db2look information for the table, and then take the create table syntax and alter it for table name and to include the additional columns needed. In a recent example, I pulled table information from db2look that looked like this:
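If you go the db2look route, something like the following should pull the DDL for a single table – the database name here is a placeholder:

```
# Extract DDL (-e) for one table (-t) in one schema (-z);
# MYDB is a placeholder for the actual database name
db2look -d MYDB -z WSCOMUSR -t TI_DELTA_CATENTRY -e
```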
CREATE TABLE "WSCOMUSR"."TI_DELTA_CATENTRY" (
  "MASTERCATALOG_ID" BIGINT NOT NULL,
  "CATENTRY_ID" BIGINT NOT NULL,
  "ACTION" CHAR(1 OCTETS),
  "LASTUPDATE" TIMESTAMP NOT NULL WITH DEFAULT CURRENT TIMESTAMP
) IN "USERSPACE1"
ORGANIZE BY ROW;
Note that it’s unlikely that you want to replicate the indexes that are on the table being tracked. You may need indexes if performance matters or if you’re worried about flooding your bufferpools with table scans. Those decisions depend on how long you will be using the table and how much data you expect. In the example I’m using here, IBM WebSphere Commerce support requested that we track this data in detail. I don’t expect a lot of data, and I only expect to have the tracking in place for a couple of days or a week to try to catch a specific scenario.
I'm not even adding a primary key to the table, which actually drives me nuts – there's a little voice inside me screaming "Every table must have a primary key!" But the goal is to have as little impact as possible when the triggers fire, so I'm leaving off all indexes and constraints for the moment.
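If the tracking table will stay in place longer or collect a lot of data, an index on the tracked table's key columns may be worth the extra insert overhead. A sketch, with an illustrative index name:

```sql
-- Only worthwhile if you will query the tracking table heavily;
-- every index adds a small cost to the triggers' inserts
CREATE INDEX DBA.X_CHG_TI_DELTA_CATENTRY_IX1
  ON DBA.X_CHG_TI_DELTA_CATENTRY (CATENTRY_ID, ACTION_TIMESTAMP);
```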
Using the above syntax as a starting point, I came up with the following syntax for the table I’ll be using to track changes:
CREATE TABLE DBA.X_CHG_TI_DELTA_CATENTRY (
  action_timestamp TIMESTAMP NOT NULL WITH DEFAULT CURRENT TIMESTAMP,
  action_type CHAR(1),
  before_or_after CHAR(1),
  "MASTERCATALOG_ID" BIGINT NOT NULL,
  "CATENTRY_ID" BIGINT NOT NULL,
  "ACTION" CHAR(1 OCTETS),
  "LASTUPDATE" TIMESTAMP NOT NULL WITH DEFAULT CURRENT TIMESTAMP
) IN DBA32K
ORGANIZE BY ROW;
There are two major changes from the original DDL. First, the table name must be different. Second, I've added three columns. The first notes when the trigger fired; the second records whether the change was an insert ('I'), update ('U'), or delete ('D'); the third tracks whether the row in the change-tracking table represents the before ('B') or after ('A') image – which I want for updates, so I capture both the old and the new values, not just one or the other. There are obviously multiple ways of handling this, and I'm not claiming this one is better than others.
A faster way to create the tracking table would be to create a table like an existing table:
CREATE TABLE DBA.X_CHG_TI_DELTA_CATENTRY LIKE WSCOMUSR.TI_DELTA_CATENTRY IN DBA32K;
Then just alter the new table to add the additional columns:
ALTER TABLE DBA.X_CHG_TI_DELTA_CATENTRY
  ADD COLUMN action_timestamp TIMESTAMP NOT NULL WITH DEFAULT CURRENT TIMESTAMP
  ADD COLUMN action_type CHAR(1)
  ADD COLUMN before_or_after CHAR(1);
Creating Tracking Triggers
Once we have the change tracking tables, the triggers are the next thing needed. We need three triggers – one for update, one for insert, and one for delete. If this were code that would have to be maintained over time, we might consider the use of a multi-action trigger instead.
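On DB2 10.1 and later, the three separate triggers could in principle be collapsed into a single multi-action trigger using the INSERTING/UPDATING/DELETING predicates. A sketch, not tested against this table, with an illustrative trigger name:

```sql
-- Sketch of a DB2 10.1+ multi-action trigger; the three separate
-- triggers below do the same work on older versions
CREATE TRIGGER T_CHG_TI_DELTA_CATENTRY_ALL
  AFTER INSERT OR UPDATE OR DELETE ON WSCOMUSR.TI_DELTA_CATENTRY
  REFERENCING NEW AS N OLD AS O
  FOR EACH ROW
BEGIN
  IF INSERTING THEN
    INSERT INTO DBA.X_CHG_TI_DELTA_CATENTRY
      (action_type, before_or_after, MASTERCATALOG_ID, CATENTRY_ID, ACTION, LASTUPDATE)
      VALUES ('I', 'A', N.MASTERCATALOG_ID, N.CATENTRY_ID, N.ACTION, N.LASTUPDATE);
  ELSEIF UPDATING THEN
    INSERT INTO DBA.X_CHG_TI_DELTA_CATENTRY
      (action_type, before_or_after, MASTERCATALOG_ID, CATENTRY_ID, ACTION, LASTUPDATE)
      VALUES ('U', 'B', O.MASTERCATALOG_ID, O.CATENTRY_ID, O.ACTION, O.LASTUPDATE),
             ('U', 'A', N.MASTERCATALOG_ID, N.CATENTRY_ID, N.ACTION, N.LASTUPDATE);
  ELSEIF DELETING THEN
    INSERT INTO DBA.X_CHG_TI_DELTA_CATENTRY
      (action_type, before_or_after, MASTERCATALOG_ID, CATENTRY_ID, ACTION, LASTUPDATE)
      VALUES ('D', 'B', O.MASTERCATALOG_ID, O.CATENTRY_ID, O.ACTION, O.LASTUPDATE);
  END IF;
END@
```

Note that action_timestamp is omitted from the column lists and filled in by its default of CURRENT TIMESTAMP.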
Here are the triggers as I defined them for my little support-requested scenario:
CREATE TRIGGER T_CHG_TI_DELTA_CATENTRY_I01
  AFTER INSERT ON WSCOMUSR.TI_DELTA_CATENTRY
  REFERENCING NEW AS N
  FOR EACH ROW
BEGIN ATOMIC
  INSERT INTO DBA.X_CHG_TI_DELTA_CATENTRY
    (action_timestamp, action_type, before_or_after,
     MASTERCATALOG_ID, CATENTRY_ID, ACTION, LASTUPDATE)
    VALUES (current timestamp, 'I', 'A',
     N.MASTERCATALOG_ID, N.CATENTRY_ID, N.ACTION, N.LASTUPDATE);
END@

CREATE TRIGGER T_CHG_TI_DELTA_CATENTRY_U01
  AFTER UPDATE ON WSCOMUSR.TI_DELTA_CATENTRY
  REFERENCING NEW AS N OLD AS O
  FOR EACH ROW
BEGIN ATOMIC
  INSERT INTO DBA.X_CHG_TI_DELTA_CATENTRY
    (action_timestamp, action_type, before_or_after,
     MASTERCATALOG_ID, CATENTRY_ID, ACTION, LASTUPDATE)
    VALUES (current timestamp, 'U', 'B',
     O.MASTERCATALOG_ID, O.CATENTRY_ID, O.ACTION, O.LASTUPDATE);
  INSERT INTO DBA.X_CHG_TI_DELTA_CATENTRY
    (action_timestamp, action_type, before_or_after,
     MASTERCATALOG_ID, CATENTRY_ID, ACTION, LASTUPDATE)
    VALUES (current timestamp, 'U', 'A',
     N.MASTERCATALOG_ID, N.CATENTRY_ID, N.ACTION, N.LASTUPDATE);
END@

CREATE TRIGGER T_CHG_TI_DELTA_CATENTRY_D01
  AFTER DELETE ON WSCOMUSR.TI_DELTA_CATENTRY
  REFERENCING OLD AS O
  FOR EACH ROW
BEGIN ATOMIC
  INSERT INTO DBA.X_CHG_TI_DELTA_CATENTRY
    (action_timestamp, action_type, before_or_after,
     MASTERCATALOG_ID, CATENTRY_ID, ACTION, LASTUPDATE)
    VALUES (current timestamp, 'D', 'B',
     O.MASTERCATALOG_ID, O.CATENTRY_ID, O.ACTION, O.LASTUPDATE);
END@
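Once the triggers are in place, reconstructing the sequence of changes is just a matter of ordering by the capture timestamp – something along these lines:

```sql
-- Review captured changes in the order they occurred;
-- the DESC on before_or_after puts 'B' rows ahead of 'A' rows
-- within the same timestamp
SELECT action_timestamp, action_type, before_or_after,
       MASTERCATALOG_ID, CATENTRY_ID, ACTION, LASTUPDATE
  FROM DBA.X_CHG_TI_DELTA_CATENTRY
  ORDER BY action_timestamp, before_or_after DESC;
```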
If you're looking at a more long-term solution, you might also want to consider temporal tables. The reason I didn't consider them here is that they require adding columns to the table being tracked, and that was not an option for me. I only do this a couple of times a year, and I'm sure I have readers who do it all the time. Please share any suggestions and advice in the comments box at the end of this article.
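For reference, enabling system-period versioning looks roughly like this – a sketch only, and note that it ALTERs the tracked table itself, which is exactly why it wasn't an option for my scenario:

```sql
-- Sketch: system-period temporal versioning (DB2 10.1+).
-- Unlike the trigger approach, this changes the base table.
ALTER TABLE WSCOMUSR.TI_DELTA_CATENTRY
  ADD COLUMN sys_start TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN
  ADD COLUMN sys_end   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END
  ADD COLUMN trans_id  TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID
  ADD PERIOD SYSTEM_TIME (sys_start, sys_end);

-- History table name is illustrative
CREATE TABLE WSCOMUSR.TI_DELTA_CATENTRY_HIST
  LIKE WSCOMUSR.TI_DELTA_CATENTRY;

ALTER TABLE WSCOMUSR.TI_DELTA_CATENTRY
  ADD VERSIONING USE HISTORY TABLE WSCOMUSR.TI_DELTA_CATENTRY_HIST;
```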
My favorite syntax guide is Graeme Birchall’s: http://mysite.verizon.net/Graeme_Birchall/id1.html, though he does not intend to update it past 9.7. Triggers are on page 333.
IBM DB2 Knowledge Center Page on CREATE TRIGGER statement
Article on Temporal Tables: http://www.ibm.com/developerworks/data/library/techarticle/dm-1210temporaltablesdb2/index.html?ca=dat