Ok, so everyone will tell you that it doesn’t matter if you do runstats on volatile tables. The volatile flag is supposed to tell DB2 that the table’s statistics cannot be trusted, and to heavily bias the optimizer toward index access over table scans. Commerce has 8 tables in Commerce 6 (9 in Commerce 7) that are marked as volatile. At least two of them, I would argue, do not meet the definition of volatile, but that’s another post. We have seen clients with severe performance issues after they did runstats on volatile tables, so our scripts skip runstats on volatile tables, and we advise all of our customers to do the same.
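For reference, the volatile flag is set per table, and you can see which tables carry it in the catalog. A minimal sketch — the schema and table names here are made up for illustration:

```sql
-- Mark a table as volatile so the optimizer should prefer index access
-- regardless of what the statistics say
ALTER TABLE myschema.mytable VOLATILE CARDINALITY;

-- List the tables currently flagged volatile
SELECT tabschema, tabname
  FROM syscat.tables
 WHERE volatile = 'C';
```

In SYSCAT.TABLES, a value of 'C' in the VOLATILE column indicates the table’s cardinality is flagged as volatile.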
One particular client has some non-Commerce databases that we provide some tier-3 support for. One of those databases had a crippling performance issue today that was causing thousands of deadlocks per hour. Ultimately the issue turned out to be that when runstats ran at 6 AM this morning, the table all of the deadlocks were on had 0 rows in it. By the time the issue was occurring, it held about 4,000 rows.
Clearly they don’t follow our advice to skip runstats on volatile tables altogether, but when they re-ran runstats on the table mid-morning, the problem resolved. So clearly DB2 was not ignoring the statistics at all. When I explained the problem query while the issue was occurring, the only index being used was the primary key — yet the where clause included three columns, none of which was the primary key, and there was an index on precisely those three columns that was not being used. After runstats, the query immediately started using the index designed for it, and the database went back to normal with no deadlocking.
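The mid-morning fix amounted to something like the following, run from the DB2 command line (the table name is illustrative, not the client’s actual table):

```sql
-- Re-collect statistics, including index statistics, so the optimizer
-- sees the table's real cardinality instead of 0 rows
RUNSTATS ON TABLE myschema.mytable WITH DISTRIBUTION AND DETAILED INDEXES ALL;

-- Sanity-check what the optimizer now believes the row count to be
SELECT card
  FROM syscat.tables
 WHERE tabschema = 'MYSCHEMA' AND tabname = 'MYTABLE';
```

With CARD at 0, an access plan that locks its way through the primary key can look cheaper than the three-column index; once CARD reflects thousands of rows, the dedicated index wins.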
My conclusion is that DB2 (ok, so this is 8, FP 18) does not properly respect the volatile flag, and that runstats on volatile tables should either never be done (my preference), or should be done once with realistic values on the higher end of normal for the table, and then not done again.
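For the second option, one way to pin realistic high-end statistics without relying on a runstats snapshot is to update the SYSSTAT catalog views directly. A hedged sketch — the names and numbers are invented, and you would substitute a cardinality near the table’s normal peak:

```sql
-- Manually set a realistic high-water cardinality so the optimizer
-- never plans as if the table were empty (values are illustrative)
UPDATE sysstat.tables
   SET card = 5000,
       npages = 100
 WHERE tabschema = 'MYSCHEMA' AND tabname = 'MYTABLE';
```

If you go this route, the index statistics in SYSSTAT.INDEXES (e.g. FIRSTKEYCARD and FULLKEYCARD) should be set to consistent values as well, since a mismatch between table and index stats can mislead the optimizer just as badly.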