The Least You Can Do for DB2 Performance

I suspect there are some sites out there where a “set-it-and-forget-it” approach is taken for DB2-related performance. Many clients we go into don’t even realize they need a DBA. Depending on site size, that may work for a while. I take an active, proactive role in performance for any client I can. DB2 9.7 (well, 9.5 too), which is what comes with Commerce 7, has some nifty features that make certain kinds of tuning require less work (assuming you configured correctly for them), but there are still things to do and work on. I don’t believe you can ever be complacent about DB2 performance. So here is what I consider the very least I can do for DB2 performance (for all DB2 databases, not just Commerce ones).

Runstats

I don’t trust the DB2 automation on this. I want to know that every table is getting runstats at least once a week (more often during times of growth, like immediately post go-live). I also want to control the syntax used. My favorite syntax is:

db2 runstats on table schema.table with distribution and detailed indexes all

I also prefer not to do runstats on volatile tables. We’ve seen severe performance issues that were caused by runstats on volatile tables. This is thoroughly undocumented, and I’m probably in disagreement with IBM on this part of it.
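As a sketch of how that runstats syntax might be driven across a whole database, here is a hypothetical script that generates the command for each user table while skipping volatile ones. The schema filter and input format are my assumptions, not from the post; adjust for your environment.

```shell
#!/bin/sh
# Sketch: emit my preferred RUNSTATS command for each user table,
# skipping volatile tables (SYSCAT.TABLES.VOLATILE = 'C').
# Input lines are "SCHEMA TABNAME VOLATILE_FLAG", e.g. produced by:
#   db2 -x "select rtrim(tabschema), rtrim(tabname), coalesce(volatile,'N')
#           from syscat.tables where type = 'T' and tabschema not like 'SYS%'"

gen_runstats() {
    while read -r schema table volatile; do
        # Skip volatile tables -- runstats on them has caused us problems
        [ "$volatile" = "C" ] && continue
        echo "runstats on table ${schema}.${table} with distribution and detailed indexes all"
    done
}

# Usage sketch (the table list would normally come from the db2 query above):
# gen_runstats < table_list.txt | while read -r cmd; do db2 "$cmd"; done
```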

Turn on Monitor Switches

The monitor switches control what data DB2 collects. Yes, they can be turned on at the command line for a particular session, but if you set the default settings for them in the DBM CFG, then DB2 will be collecting the data you’ll need if you run into a performance problem. To check them:

$ db2 get dbm cfg |grep DFT_MON
Buffer pool                         (DFT_MON_BUFPOOL) = ON
Lock                                   (DFT_MON_LOCK) = ON
Sort                                   (DFT_MON_SORT) = ON
Statement                              (DFT_MON_STMT) = ON
Table                                 (DFT_MON_TABLE) = ON
Timestamp                         (DFT_MON_TIMESTAMP) = ON
Unit of work                            (DFT_MON_UOW) = ON

To set them, use:

db2 update dbm cfg using DFT_MON_BUFPOOL ON DFT_MON_LOCK ON DFT_MON_SORT ON DFT_MON_STMT ON DFT_MON_TABLE ON DFT_MON_UOW ON HEALTH_MON OFF

I included the setting for turning the Health Monitor off. I don’t use it. If you don’t use it, turn it off – it can cause performance problems.
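A quick, hypothetical way to verify the switches stay the way you set them is to parse the `db2 get dbm cfg` output shown above and flag anything that isn’t ON:

```shell
#!/bin/sh
# Sketch: flag any default monitor switch that is not ON.
# Reads `db2 get dbm cfg` output on stdin, e.g.:
#   db2 get dbm cfg | check_mon_switches

check_mon_switches() {
    # Match the "(DFT_MON_XXX) = value" lines; the last field is ON or OFF.
    awk '/\(DFT_MON_/ && $NF != "ON" { print "Not on:", $0; bad = 1 }
         END { exit bad }'
}
```

The function exits nonzero when a switch is off, so it drops easily into a monitoring cron job.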

Collect Performance Data

Regularly write out performance data somewhere. This can be as simple as resetting the monitor switches and writing out snapshots to text files. I haven’t had a lot of time with 9.7 yet, but I plan to use this methodology going forward (though I do need to add in something for dynamic SQL):

http://www.ibm.com/developerworks/data/library/techarticle/dm-1009db2monitoring1/index.html?ca=drs-

The main reason to do this is that you never get “We’re going to have a performance problem in 15 minutes” – it’s “We had a performance problem 15 minutes ago. What happened?” If you have all monitor switches on and historical performance data, then you’re much more likely to be able to answer that question. A secondary reason for this data collection is trend analysis – to tell whether a particular spike in load is really that unusual.
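A minimal version of that collect-and-reset loop might look like the sketch below. The database name, output directory, and interval are assumptions; run something like this from cron every 15–30 minutes.

```shell
#!/bin/sh
# Sketch: write a timestamped snapshot to a text file, then reset the
# monitor counters so the next snapshot covers only the next interval.
# DBNAME and SNAPDIR are hypothetical -- adjust for your environment.

snapshot_file() {
    # Build the output file name: DIR/db2snap.DB.TIMESTAMP.txt
    echo "$2/db2snap.$1.$3.txt"
}

# Cron body (commented out so the helper above stands alone):
# DBNAME=MYDB
# SNAPDIR=/db2/perfdata
# ts=$(date +%Y%m%d_%H%M)
# db2 connect to "$DBNAME" >/dev/null
# db2 get snapshot for all on "$DBNAME" > "$(snapshot_file "$DBNAME" "$SNAPDIR" "$ts")"
# db2 reset monitor all
# db2 terminate >/dev/null
```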

What Else?

So the above is really the minimum. A full DB2 performance management strategy includes:

  1. Weekly or more frequent RUNSTATS
  2. Weekly or monthly REORGs based on the output of REORGCHK
  3. Data Pruning (which is almost crucial enough to make the minimum)
  4. All monitor switches on at all times
  5. Periodic data collection of performance data
  6. Periodic review of the data collected to look for trends and problems in the making (monthly or quarterly), including SQL analysis to find problem queries/tables and missing indexes
  7. Load testing to prepare for peak periods
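For item 2, here is a rough way to pick REORG candidates out of REORGCHK output. The field positions assume the classic fixed table-section layout, so verify them against your version’s output before relying on this.

```shell
#!/bin/sh
# Sketch: read the table section of
#   db2 reorgchk current statistics on table all
# and print SCHEMA.TABLE for any row whose REORG column (the last field,
# e.g. "*--") contains an asterisk. Assumed column layout:
#   CREATOR NAME CARD OV NP FP ACTBLK TSIZE F1 F2 F3 REORG

reorg_candidates() {
    awk 'NF >= 12 && $NF ~ /\*/ { print $1 "." $2 }'
}

# Usage sketch:
# db2 reorgchk current statistics on table all | reorg_candidates
```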
Ember Crooks

Ember is always curious and thrives on change. She has built internationally recognized expertise in IBM Db2, spent a year working with high-volume MySQL, and is now learning Snowflake. Ember shares both posts about her core skill sets and her journey learning Snowflake.

Ember lives in Denver and works from home.


6 Comments

  1. Hi Ember,
    I’m using the SYSIBMADM.TBSP_UTILIZATION admin view to monitor SMS and DMS tablespaces. Since I’ve turned on the monitor switch DFT_MON_BUFPOOL, it shows all the SMS as TBSP_UTILIZATION_PERCENT = 100
    http://www-01.ibm.com/support/docview.wss?uid=swg21673872
    Is that correct? It looks confusing for SMS tablespaces as they are allowed to consume all of the space in the file system and there is plenty of space in the FS.
    Thanks.

    • SMS tablespaces will generally show as 100% utilized from the DB perspective. These must be monitored at the filesystem level and not the database level.

  2. Hi Ember,
    I am having a performance problem on the application side: whenever they select any tab, it takes a long time to load and submit.
    I ran runstats and reorgs on the tables and it was OK, but after some time they face the same issue again.
    They scheduled a job which runs every ten minutes.

    • Performance is a complicated topic that cannot be easily covered or resolved in a few messages. If you’re running runstats every 10 minutes, that is too frequent. Daily is as often as I generally go for runstats across the database, unless the data is unusually mercurial. Other jobs can certainly run that frequently or more frequently. I’d be analyzing the SQL to see if an index would help, and also taking a look at key performance indicators to see if I could identify the area the problem is in.

      • I found the cause of the slow performance: when archive logs are committing, my db performance is slow, so I increased the LOGFILSIZ parameter and also the SOFTMAX parameter.
        LOGFILSIZ: 1024 to 250000
        SOFTMAX: 720 to 1.
        After changing these two it was working fine, but when archive logs are committing, db performance is still slow for a short time. Can you help me with this?

        • Have you looked at the size of the log buffer through the values of log pages read and log pages written? Have you looked at monreport.dbsummary to see where the wait time or processing time is being used?
