Identifying Problem SQL

Many DBAs have their own methods and approaches in this area, but there are some basic sources and methods that I use. I'll cover at least the basics of SQL analysis in one or more future posts.

Get SQL from Developers

So the optimum situation is that your developers give you the SQL they're planning to use in custom code (or better yet, they have a DBA write it in the first place). More likely, developers will identify a problem area where "the database is slow" and can give you the SQL from that part of their code. I also ask, whenever a new table is created, that the developers provide the SQL they intend to run against it, so I can index for it ahead of time. I've seen significant site slowdowns caused by a custom table with no indexes.

Dynamic SQL snapshot

I use my own Perl scripts to parse out a SQL snapshot, and then I import it into Excel to play with it and order it in different ways. Once I've got it in Excel, I rank the statements based on their rows read, execution time, CPU time, sort time, and sometimes on the number of executions or the average execution time. I then look at what's in the top 10 in each category and see if there are queries that stand out. Most frequently I find 1-5 bad queries that I can easily help with indexes and another 1-5 that I don't like but can't index for.
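The ranking step doesn't have to happen in Excel; it can be sketched in a few lines of script. Here's a minimal Python illustration, assuming you've already parsed each snapshot statement into a dict of metrics (the field names here are my own, not anything DB2 produces):

```python
# Rank parsed dynamic SQL snapshot entries by several metrics.
# The statement dicts and their keys are illustrative -- your snapshot
# parser's field names will differ.

def top_offenders(statements, metrics, n=10):
    """Return the top-n statements for each metric, highest value first."""
    rankings = {}
    for metric in metrics:
        rankings[metric] = sorted(
            statements, key=lambda s: s.get(metric, 0), reverse=True
        )[:n]
    return rankings

stmts = [
    {"text": "select * from custom_tab", "rows_read": 900000,
     "cpu_time": 120, "num_executions": 3000},
    {"text": "select id from orders where id = ?", "rows_read": 5000,
     "cpu_time": 2, "num_executions": 100000},
]

rankings = top_offenders(stmts, ["rows_read", "cpu_time", "num_executions"])
for metric, top in rankings.items():
    print(metric, "->", top[0]["text"])
```

A statement that shows up in the top 10 of more than one category is usually the first one worth a closer look.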

Querying the admin view or snapshot table function

So this is a bit easier than using the dynamic SQL snapshot, though I think the SQL below only works on 9.7 because it uses the SYSIBMADM views. The SQL I use looks something like this:

db2 "select stmt_text, rows_read, rows_written, stmt_sorts, sort_overflows, num_executions, rows_read/nullif(num_executions,0) as avg_rows_per_exec from sysibmadm.snapdyn_sql order by rows_read desc fetch first 5 rows only with ur"

I order by different columns for CPU time and the other metrics as well. Then, assuming it's not a one-time execution, I analyze the queries to see what I can do for them.
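If you're running this repeatedly for different metrics, a tiny script can build the query for you rather than editing it by hand each time. A sketch, with the caveat that the CPU column expression (`total_usr_cpu_time + total_sys_cpu_time`) is my assumption about SYSIBMADM.SNAPDYN_SQL; verify the column names against your DB2 version:

```python
# Build the snapshot query ordered by a chosen metric.
# The "cpu" column expression is an assumption about the columns in
# SYSIBMADM.SNAPDYN_SQL -- check your version's documentation.

METRICS = {
    "rows_read": "rows_read",
    "cpu": "total_usr_cpu_time + total_sys_cpu_time",
    "sorts": "stmt_sorts",
    "executions": "num_executions",
}

def snapshot_query(metric, top_n=5):
    order_by = METRICS[metric]
    return (
        "select stmt_text, rows_read, rows_written, stmt_sorts, "
        "sort_overflows, num_executions, "
        "rows_read/nullif(num_executions,0) as avg_rows_per_exec "
        "from sysibmadm.snapdyn_sql "
        f"order by {order_by} desc fetch first {top_n} rows only with ur"
    )

print(snapshot_query("cpu"))
```

The generated string can then be passed to the db2 command line just like the hand-written version above.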

Deadlock event monitor

If you have deadlocks where SELECT statements are involved, it is important to analyze the SQL for them. The general approach to making this work is:

  • Make sure you have a detailed deadlock event monitor on at all times. DB2 has one on by default (db2detaildeadlock), but it can quickly run out of space. I usually re-create it with more room using something like this:
db2 "create event monitor emb_detaildeadlock for deadlocks with details write to file 'emb_detaildeadlock' maxfiles 2000 maxfilesize 10000 blocked append autostart"
db2 "set event monitor emb_detaildeadlock state=1"
db2 "set event monitor db2detaildeadlock state=0"
db2 drop event monitor db2detaildeadlock
  • Flush the event monitor

db2 flush event monitor emb_detaildeadlock

  • Convert the event monitor output to human-readable format, replacing the path below with the appropriate one for your database
db2evmon -path /db_home/NODE0000/db_seq_num/db2event/emb_detaildeadlock
  • Look through or write a script to analyze the output
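For the last step, a short script can pull the statement text out of the formatted output so you don't have to scan it by eye. A minimal Python sketch; note that the `Text :` line layout is my assumption about db2evmon's formatted output, which varies by version, so adjust the pattern to match what yours produces:

```python
import re

# Scan formatted event monitor output (from db2evmon) and collect the
# statement text lines that appear after a deadlock record begins.
# The "Text :" layout is an assumption about the formatted output --
# adjust the regex for the format your db2evmon actually emits.

STMT_LINE = re.compile(r"^\s*Text\s*:\s*(.*\S)\s*$")

def deadlock_statements(evmon_text):
    stmts = []
    in_deadlock = False
    for line in evmon_text.splitlines():
        if "Deadlock" in line:
            in_deadlock = True
        m = STMT_LINE.match(line)
        if in_deadlock and m:
            stmts.append(m.group(1))
    return stmts

sample = """\
3) Deadlock Event ...
  Text     : select balance from accounts where id = ?
  Text     : update accounts set balance = 0 where id = ?
"""
print(deadlock_statements(sample))
```

Once you have the participating statements collected, you can rank them the same way as the snapshot SQL and decide which ones need index or isolation-level work.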

SQL from stored procedures

So I'll actually cover this in a post on how to analyze SQL. You usually cannot just copy the SQL out of a stored procedure to explain it; that requires different techniques. You will, however, see the execution of stored procedures show up in the dynamic SQL snapshot, so you'll be able to rank them against other queries to know whether you need to analyze them.


Ember Crooks

Ember is always curious and thrives on change. She has built internationally recognized expertise in IBM Db2, spent a year working with high-volume MySQL, and is now learning Snowflake. Ember shares both posts about her core skill sets and her journey learning Snowflake.

Ember lives in Denver and works from home.



  1. […] sure isn’t too bad, or you’ll want to get problem SQL – possibly as defined in my post on that. You’ll also want to make sure runstats are current before running any of this, as it is […]

  2. […] So this was emphasized to me over Cyber Monday 2011. I have a client with their own DBA, and we worked together to identify just 4 indexes that they needed to add in October. Cyber Monday rolls around, and the indexes hadn’t made it into production yet. The site was so slow as to be unusable, the over-sized database server was high on CPU and had over 200 active connections at all times. The other DBA added these 4 indexes, and immediately the active connections were between 5 and 15 (more normal for this database), and the over-sized db server was again bored stiff. That’s just 4 indexes, not even clustering ones. So you should be constantly reviewing SQL and looking for indexes to add. You never want to drop base commerce indexes, but adding indexes is certainly reasonable. See this blog entry on analyzing SQL and this one on identifying problem SQL. […]

  3. Any idea how to filter dynamic SQL results for a specific time window? For example, I need the dynamic SQL executed only during business hours (8 AM to 5 PM) and don’t want statements executed after business hours.

    • That’s one of the reasons why it is critical to save historical performance data (even if only for a few days), either in tables or in plain old snapshot files. Even if you don’t have space for every statement, the top statements in each category, written to some kind of history table hourly, are great to have. There are plenty of pay-for-use tools like Performance Expert or DBI Software’s tools that do this for you, or you can write your own scripts/stored procedures for it. If you’re writing your own scripts that write to a table, a good starting point is:
