This article is based on the “Boost Your Speed – Index Read Efficiency” presentation for the DB2 Night Show (Episode #129) – DB2’s Got Talent competition held on Friday, March 14th at 11am EST. Progression within the competition is based solely on audience voting. If you found this useful, want to see more like this, and don’t want to see a grown man cry – please vote for Michael Krafick (and Mariana Sanchez) based on the article or on the DB2 Night Show Replay. You will have one week from the original air date to vote.
Progression depends heavily on a contestant’s network and influence in the community. To alleviate the “vote for my friend” mentality, viewers are required to vote for two contestants. So watch the replay to judge for yourself, or vote for Mariana. I come on at 1H 6M; Mariana is just before me.
To vote, select episode 129, scroll to the bottom of the page, and fill out the survey.
* * *
We all have that one metric we focus on to improve database performance. Countless best practices tell you which snapshot to take, which system table to scour, and which general tuning steps to follow.
I learned my favorite metric in an IDUG Technical Conference session taught by IBM Gold Consultant Scott Hayes. In fact, if you google “Index Read Efficiency” (IREF), the first two or three links are Scott Hayes’ performance tuning blog and db2commerce.
Index Read Efficiency is not a captured metric, but an implied one. This means there is no system table with an IREF field. The math is simple: IREF is the ratio of rows read to rows returned.
IREF = ROWS READ / ROWS RETURNED
The result tells you how efficient your queries are when seeking a result. For example, does a query have to scan three rows before it finds the row it needs? Or does it have to scan a hundred rows before it finds the row it needs? The first is more efficient than the second.
The amount of time spent searching implies effectiveness of indexing for that table. Hence, Index Read Efficiency.
The SQL to generate IREF is fairly simple and can be pulled from various tables or functions that return data based on ROWS READ and ROWS RETURNED.
Here are some tables or functions that contain these fields:
- MON_GET_PKG_CACHE_STMT (ROWS_READ, ROWS_RETURNED)
- MON_CURRENT_UOW (ROWS_READ, ROWS_RETURNED)
- SYSIBMADM.SNAPSTMT (ROWS_READ, FETCH_COUNT)
For Example:
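The original post showed screenshots of query output at this point. As a rough sketch of the kind of SQL involved, here is one way to compute IREF from the MON_GET_PKG_CACHE_STMT table function (the WHERE filter and result limit are my own additions for readability, not from the presentation):

```sql
-- Rank cached statements by Index Read Efficiency (rows read per row returned).
-- Higher IREF = more rows scanned per row delivered = less efficient access.
SELECT SUBSTR(STMT_TEXT, 1, 80)  AS STATEMENT,
       ROWS_READ,
       ROWS_RETURNED,
       DECIMAL(ROWS_READ, 31, 2) / ROWS_RETURNED AS IREF
FROM TABLE(MON_GET_PKG_CACHE_STMT(NULL, NULL, NULL, -2)) AS T
WHERE ROWS_RETURNED > 0          -- avoid divide-by-zero on statements with no fetches
ORDER BY IREF DESC
FETCH FIRST 20 ROWS ONLY;
```

The `WHERE ROWS_RETURNED > 0` clause keeps DDL and other non-fetching statements from blowing up the division; sorting descending floats the worst offenders to the top.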
Notice the polar opposite results of my queries. In one case, DB2 had to read 348,691 rows to return the 48 rows I needed. In the other, DB2 read 3 rows to get the one I needed. I obviously have SQL that I need to concentrate on.
If you are looking for another example of IREF SQL, Ember has a more robust version in her “DB2 Table Scans” entry.
What thresholds are we looking for? That varies by database type. According to Scott Hayes, you would watch for the following thresholds on a transaction processing database.
Transaction Processing
- IREF < 10 : Doing Well
- IREF >= 10 : Cause for Investigation
- IREF > 100 : Heading Into a Ditch
Warehousing databases will naturally run higher, but I would still be cautious above 100.
Once you start to focus on problem areas, how do you resolve them? There are really three approaches to rectifying the situation. The easiest is to use db2advis to suggest a new index, provided the new index doesn’t have a negative effect elsewhere, such as slowing down overnight batch processing. Other times, the fix is outside a DBA’s control, and a developer may need to get involved for a total rewrite. In a worst-case scenario, you may be talking to a vendor about their data model and what can be done to improve it.
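For the first approach, a minimal db2advis invocation might look like the following (the database name and input file are placeholders; the flags shown are standard Design Advisor options):

```shell
# Ask the DB2 Design Advisor for index recommendations:
#   -d  database to connect to
#   -i  file containing the problem SQL statement(s)
#   -t  maximum advise time in minutes
db2advis -d SAMPLE -i problem_query.sql -t 5
```

Remember that the advisor only sees the workload you feed it – a recommended index that speeds up one query can still slow down inserts and batch loads, so weigh its suggestions against the full workload before creating anything.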
As a DBA, I’ve always focused on the no-brainers: How are my bufferpools? Am I sorting a lot? As I’ve matured as an administrator, I’ve started looking at deeper levels of tuning. Index Read Efficiency is becoming a staple in my tuning efforts.
Michael Krafick is an occasional contributor to db2commerce.com. He has been a production support DBA for over 12 years in data warehousing and highly transactional OLTP environments. He was acknowledged as a top ten session speaker for “10 Minute Triage” at the 2012 IDUG Technical Conference. Michael also has extensive experience in setting up monitoring configurations for DB2 databases as well as preparing for high availability failover, backup, and recovery. He can be reached at “Michael.Krafick (at) icloud (dot) com”. LinkedIn profile: http://www.linkedin.com/in/michaelkrafick. Twitter: mkrafick
Mike’s blog posts include:
10 Minute Triage: Assessing Problems Quickly (Part I)
10 Minute Triage: Assessing Problems Quickly (Part II)
Now, now you two play nice … DB2 and HACMP failover
Technical Conference – It’s a skill builder, not a trip to Vegas.
Why won’t you just die?! (Cleaning DB2 Process in Memory)
Attack of the Blob: Blobs in a Transaction Processing Environment
Automatic Storage Tablespaces (AST): Compare and Contrast to DMS
DB2 v10.1 Column Masking