This series started with Explain Part 1 – Explain and the DB2 Optimizer. This is the second post in the series. I want to cover my favorite method for generating Explain Plans, and in the process, discuss some of the elements. I am a fully admitted command line junkie. I do 99% of everything with DB2 databases from a command line. It does not seem odd to me to browse through data this way or write stored procedures. Explains, I nearly always do from the command line, though there have been a few times I have used Visual Explain. I’ll cover Visual Explain in a future entry in this series.
I’m not going to talk in depth in this post about what to Explain. Possible sources for SQL that you may be examining include:
- Your package cache – see my developerWorks article: Mining your package cache for problem SQL in DB2 for Linux, UNIX, and Windows
- Code reviews or SQL that developers ask you to review
- The deadlock event monitor or locking event monitor
- Stored procedures or triggers
- Statement event monitors
Dealing with Parameter Markers
Many of the queries I explain have parameter markers. This is common in an e-commerce database with well-defined SQL. Some queries use hard-coded values rather than parameter markers. For those unfamiliar with them, parameter markers are represented in SQL (including what you get from the package cache) as question marks – ‘?’. Parameter markers represent values that are supplied at execution time – the DB2 Optimizer does not know the actual values when generating the access plan. Most methods of generating Explain Plans handle parameter markers just fine.
One thing to understand, though, is that filling in literal values in place of parameter markers is not the same statement as the original – and neither is artificially substituting question marks where literal values were used. The Explain Plan for:
select col1, col2 from table_a where col3=100
may be very significantly different than the Explain Plan for:
select col1, col2 from table_a where col3=?
This is especially true if you use distribution statistics and have an uneven distribution of data. Just be aware of this and make sure the SQL you are using to generate Explain Plans does the same things with parameter markers as the actual SQL running against your database.
Generating the Explain Plan
I like to work from the Linux (or UNIX) command line, usually logged in as the instance owner or as an ID with SYSADM.
Generating the Explain Data and Writing it to the Explain Tables
The first step in generating an Explain Plan in this environment is to put the SQL statement in a file. I also prefer to fully qualify (with schema names) the tables in my SQL while I’m doing this. Fully qualified schema names are not required – I get them ready because I’m frequently doing a db2advis on the query immediately after generating the Explain Plan. db2advis is easier to run with fully qualified table names. For information on how to run db2advis, see SQL Analysis Overview.
I’m simply calling my file query2.sql, and for this example, it contains this query:
select * from wscomusr.schstatus where scsstate = 'R' and scsjobnbr in (select sccjobrefnum from wscomusr.schconfig where sccpathinfo = 'ExperimentCheckExpiry');
Once the SQL is in a file, I connect to my database and set my explain mode like this:
$ db2 connect to SAMPLE

   Database Connection Information

 Database server        = DB2/LINUXX8664 9.7.4
 SQL authorization ID   = DB2INST1
 Local database alias   = SAMPLE

$ db2 set current explain mode explain
DB20000I  The SQL command completed successfully.
After the CURRENT EXPLAIN MODE is set to EXPLAIN, statements issued in that session are not actually executed; instead, their explain information is inserted into the explain tables. Executing a query in explain mode looks like this:
$ db2 -tvf query2.sql
select * from wscomusr.schstatus where scsstate = 'R' and scsjobnbr in (select sccjobrefnum from wscomusr.schconfig where sccpathinfo = 'ExperimentCheckExpiry')
SQL0217W  The statement was not executed as only Explain information requests are being processed.  SQLSTATE=01604
The SET CURRENT EXPLAIN MODE special register has some interesting settings. I’m only showing the use of the ‘EXPLAIN’ setting here, but there is a setting that will let you both execute and explain SQL, a setting that will let you populate the index advisor tables, even a setting that will let you evaluate recommended indexes for use as if they actually existed. Details are in the DB2 Info Center entry on CURRENT EXPLAIN MODE.
Note that this method does not generate an Explain Snapshot.
After you have captured the explain information, set your CURRENT EXPLAIN MODE back using this statement:
$ db2 set current explain mode no
DB20000I  The SQL command completed successfully.
It doesn’t really hurt if you forget that last part – usually I do, and when trying to do other work in the session, get the message that my statement has been explained and remember to set it back again. Remember that this is a setting for the current session only, so the moment you close that session, it is gone, and it affects no other sessions than the one you issue the statement in.
If you have never run an explain in this database before, or haven’t run one as this user before, you may get this error message:
SQL0219N The required Explain table "DB2INST1.EXPLAIN_INSTANCE" does not exist. SQLSTATE=42704
Don’t panic – this one is easy to deal with. To create or re-create the explain tables, all you have to do is change to the INSTHOME/sqllib/misc directory and execute the right script, like this:
$ cd $INSTHOME/sqllib/misc
$ db2 -tf EXPLAIN.DDL

*******   IMPORTANT   **********
USAGE: db2 -tf EXPLAIN.DDL
*******   IMPORTANT   **********

DB20000I  The UPDATE COMMAND OPTIONS command completed successfully.
DB20000I  The SQL command completed successfully.
DB20000I  The SQL command completed successfully.
...
DB20000I  The SQL command completed successfully.
$
There may also be other errors about Explain tables after an upgrade or FixPack, and the above usually fixes those as well. Some DB2 version upgrades have special instructions for migrating explain data if you keep and use data in the explain tables over time.
Generating Formatted Output From the Data in the Explain Tables
Now that the data is in the explain tables, it still has to be formatted into a nice human readable format. This is where db2exfmt comes in. The easiest way to execute db2exfmt is right after you’ve executed the query as above, simply formatting the last thing put into the Explain tables. That looks like this:
$ db2exfmt -d WC039D01 -1 -o query2.sql.exfmt

DB2 Universal Database Version 9.7, 5622-044 (c) Copyright IBM Corp. 1991, 2009
Licensed Material - Program Property of IBM
IBM DATABASE 2 Explain Table Format Tool

Connecting to the Database.
Connect to Database Successful.
Output is in query2.sql.exfmt.
Executing Connect Reset -- Connect Reset was Successful.
DB2 added this -1 option in DB2 9.5, and I love it. It basically says ‘take all the defaults and don’t bother prompting me’. It is functionally equivalent to “-e % -n % -s % -v % -w -1 -# 0”, which frankly used to be a pain to type or to hit Enter through at all the prompts.
The Explain Plan is now in the file we specified – query2.sql.exfmt.
General Information in the Explain Plan
If you look at the generated file, and you’ve never looked at an explain plan before, the size might surprise you. 20 KB for this one little two table join. The entire explain plan is available here: https://datageek.blog/wp-content/uploads/2013/06/query2.sql_.exfmt_.txt. Relevant portions are quoted below.
The file starts out with some general information:
DB2 Universal Database Version 9.7, 5622-044 (c) Copyright IBM Corp. 1991, 2009
Licensed Material - Program Property of IBM
IBM DATABASE 2 Explain Table Format Tool

******************** EXPLAIN INSTANCE ********************

DB2_VERSION:       09.07.4
SOURCE_NAME:       SQLC2H21
SOURCE_SCHEMA:     NULLID
SOURCE_VERSION:
EXPLAIN_TIME:      2013-06-05-22.214.171.124999
EXPLAIN_REQUESTER: DB2INST1

Database Context:
----------------
        Parallelism:          None
        CPU Speed:            3.306410e-07
        Comm Speed:           100
        Buffer Pool size:     138280
        Sort Heap size:       2500
        Database Heap size:   2390
        Lock List size:       29472
        Maximum Lock List:    98
        Average Applications: 1
        Locks Available:      924241

Package Context:
---------------
        SQL Type:           Dynamic
        Optimization Level: 5
        Blocking:           Block All Cursors
        Isolation Level:    Cursor Stability

---------------- STATEMENT 1  SECTION 201 ----------------
        QUERYNO:        4
        QUERYTAG:       CLP
        Statement Type: Select
        Updatable:      No
        Deletable:      No
        Query Degree:   1
This general information includes things like the optimization level that we discussed in the first article in this series, and settings for relevant system parameters. These items might be very relevant if you’re comparing performance on different settings, but many times, I skip past them.
The Re-Written Query
This section is interesting, though not often of practical value. It shows us the query the explain plan is for and the re-written, internal representation of the query:
Original Statement:
------------------
select * from wscomusr.schstatus
where scsstate = 'R'
  and scsjobnbr in
    (select sccjobrefnum
     from wscomusr.schconfig
     where sccpathinfo = 'ExperimentCheckExpiry')

Optimized Statement:
-------------------
SELECT Q2.SCSINSTREFNUM AS "SCSINSTREFNUM", Q2.SCSJOBNBR AS "SCSJOBNBR",
       'R ' AS "SCSSTATE", Q2.SCSPREFSTART AS "SCSPREFSTART",
       Q2.SCSACTLSTART AS "SCSACTLSTART", Q2.SCSRESULT AS "SCSRESULT",
       Q2.SCSQUEUE AS "SCSQUEUE", Q2.SCSEND AS "SCSEND",
       Q2.SCSATTLEFT AS "SCSATTLEFT", Q2.SCSSEQUENCE AS "SCSSEQUENCE",
       Q2.SCSINSTRECOV AS "SCSINSTRECOV", Q2.OPTCOUNTER AS "OPTCOUNTER"
FROM WSCOMUSR.SCHCONFIG AS Q1, WSCOMUSR.SCHSTATUS AS Q2
WHERE (Q1.SCCPATHINFO = 'ExperimentCheckExpiry') AND
      (Q2.SCSJOBNBR = Q1.SCCJOBREFNUM) AND (Q2.SCSSTATE = 'R ')
The “Optimized Statement” is not directly executable, but it can be interesting to see how DB2 re-writes things. When I’m comparing two different queries that return the same exact data, I often look at this to see if DB2 really is doing anything different internally to execute the two. Sometimes two very different looking queries come to the same or very similar optimized statements.
The next section is frequently the one I look at first and the one I actually spend the most time with. The graph is a visual representation of the objects and operators that are used in retrieving the data for the query. It is a pretty compact representation, but is excellent at helping you find problems like table scans, or at narrowing down where the bulk of the time or I/O is coming from.
Access Plan:
-----------
        Total Cost:             71.3646
        Query Degree:           1

                         Rows
                        RETURN
                        (   1)
                         Cost
                          I/O
                          |
                      6.64478e-05
                        NLJOIN
                        (   2)
                        71.3646
                        42.3992
               /----------+----------\
          1.13333               5.86304e-05
           FETCH                   FETCH
          (   3)                  (   5)
          15.1468                 47.1541
             2                    33.8817
         /---+----\             /---+----\
     1.13333    17056       21.1132    360107
     IXSCAN  TABLE: WSCOMUSR RIDSCN  TABLE: WSCOMUSR
     (   4)    SCHCONFIG     (   6)    SCHSTATUS
     7.58191      Q1         15.1594      Q2
        1                       2
        |                       |
      17056                  21.1132
  INDEX: WSCOMUSR             SORT
    I0000321                 (   7)
       Q1                    15.159
                                2
                                |
                             21.1132
                             IXSCAN
                             (   8)
                             15.1559
                                2
                                |
                             360107
                         INDEX: WSCOMUSR
                           I0000762
                              Q2
The first thing I look at is the “Total Cost”. This is the cost of the query in timerons (defined in the first post in this series). While we cannot convert timerons to a real measure of time, most DBAs have general ranges in their heads of what is quick for their systems and what is long. I think of 15 timerons as pretty quick, and over 30,000 timerons as potentially problematic. Over 100,000 timerons is “Oh, my”. But that’s for my e-commerce systems, and there are certainly times when I would be worried about a query that took only 5 timerons and just fine with one that took 100,000. Timerons alone are not an indication of a problem, just a piece of information.
The key to deciphering the numbers in the Access Plan Graph is always right there as the last/top object in the query graph – the final RETURN. It looks like this:
        Rows
       RETURN
       (   1)
        Cost
         I/O
That is the pattern for all of the objects and operators below. Each one starts with the cardinality estimate – the number of rows it expects to be working with. ‘RETURN’ represents the type/name of the operator or object – things like ‘SORT’, ‘FETCH’, ‘INDEX: NAME’, or ‘TABLE: NAME’. The number in parentheses is the identifier for this operator, and can be directly mapped to the details we’ll talk about in the next section. Cost is the value in timerons at each step. I/O is the number of I/Os expected for that step.
The query graph is read from the bottom to the top and the right to the left. The operators with the largest numbers are done first, so in this case, you can see operator 8 (an IXSCAN) on the lower right is the first thing done.
Also, the values for timerons are cumulative. So the 47.1541 timerons shown for operator 5 include the values from operators 8, 7, and 6.
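The roll-up of cumulative costs can be sketched with a toy model. The operator names below mirror the inner leg of this plan (operators 8, 7, and 6), but the incremental costs are made up for illustration – this is not how DB2 computes timerons internally, just the shape of the accumulation:

```python
# Simplified model of how cumulative costs roll up an access plan tree.
# Operator names echo this plan's inner leg; the costs are illustrative only.

def cumulative_cost(op):
    """Cumulative timerons = this operator's own cost plus all of its inputs'."""
    return op["own_cost"] + sum(cumulative_cost(c) for c in op["inputs"])

ixscan = {"name": "IXSCAN", "own_cost": 10.0, "inputs": []}
sort_  = {"name": "SORT",   "own_cost": 2.0,  "inputs": [ixscan]}
ridscn = {"name": "RIDSCN", "own_cost": 1.0,  "inputs": [sort_]}

print(cumulative_cost(ixscan))  # 10.0 -- a leaf operator carries only its own cost
print(cumulative_cost(ridscn))  # 13.0 -- includes the SORT and IXSCAN below it
```

This is why the topmost RETURN always carries the plan's Total Cost: everything below it has already been folded in.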
Notice that the objects – the tables and the indexes that the data originally comes from – have a cardinality estimate from RUNSTATS data, but do not have a timeron value – the timerons are in the operators only, and just because the number of rows in the table or index is listed does not mean DB2 is reading all of those rows. The cardinality estimate of the operator accessing those objects will tell you how many rows DB2 actually expects to read.
Plan Details – Operators
Below the “Access Plan” or query graph, there is a section called “Plan Details”. While the query graph is excellent for finding where a problem might be and getting an overall impression of the flow of data, the Plan Details section gives an amazing level of detail on each step in the plan. The identifier (the number in parentheses from the query graph) maps to a section of the Plan Details. Once we have identified an area where we want to dig deeper, we can use this identifier to see exactly what is happening in that step.
The data available depends on which type of operator the details are about. In this example, if we’re looking at item #2 – the Nested Loop Join, we can see the following:
        2) NLJOIN: (Nested Loop Join)
                Cumulative Total Cost:          71.3646
                Cumulative CPU Cost:            496967
                Cumulative I/O Cost:            42.3992
                Cumulative Re-Total Cost:       0.0821479
                Cumulative Re-CPU Cost:         248450
                Cumulative Re-I/O Cost:         0
                Cumulative First Row Cost:      71.3646
                Estimated Bufferpool Buffers:   40.4496

                Arguments:
                ---------
                EARLYOUT: (Early Out flag)
                        NONE
                FETCHMAX: (Override for FETCH MAXPAGES)
                        IGNORE
                ISCANMAX: (Override for ISCAN MAXPAGES)
                        IGNORE

                Predicates:
                ----------
                3) Predicate used in Join,
                        Comparison Operator:            Equal (=)
                        Subquery Input Required:        No
                        Filter Factor:                  5.86304e-05

                        Predicate Text:
                        --------------
                        (Q2.SCSJOBNBR = Q1.SCCJOBREFNUM)

                Input Streams:
                -------------
                        4) From Operator #3
                                Estimated number of rows:       1.13333
                                Number of columns:              1
                                Subquery predicate ID:          Not Applicable

                                Column Names:
                                ------------
                                +Q1.SCCJOBREFNUM

                        10) From Operator #5
                                Estimated number of rows:       5.86304e-05
                                Number of columns:              11
                                Subquery predicate ID:          Not Applicable

                                Column Names:
                                ------------
                                +Q2.OPTCOUNTER+Q2.SCSINSTRECOV+Q2.SCSSEQUENCE
                                +Q2.SCSATTLEFT+Q2.SCSEND+Q2.SCSQUEUE
                                +Q2.SCSRESULT+Q2.SCSACTLSTART+Q2.SCSPREFSTART
                                +Q2.SCSINSTREFNUM+Q2.SCSJOBNBR

                Output Streams:
                --------------
                        11) To Operator #1
                                Estimated number of rows:       6.64478e-05
                                Number of columns:              12
                                Subquery predicate ID:          Not Applicable

                                Column Names:
                                ------------
                                +Q3.OPTCOUNTER+Q3.SCSINSTRECOV+Q3.SCSSEQUENCE
                                +Q3.SCSATTLEFT+Q3.SCSEND+Q3.SCSQUEUE
                                +Q3.SCSRESULT+Q3.SCSACTLSTART+Q3.SCSPREFSTART
                                +Q3.SCSSTATE+Q3.SCSJOBNBR+Q3.SCSINSTREFNUM
The “cumulative” costs at the top are what is displayed in the query graph, along with a few others to give us an idea of costs to this point in satisfying the query.
What kind of information is available in the “Arguments” section varies significantly depending on the type of operator, but there can be valuable information there. In this case, there are only three items represented there.
The Predicates section lists what predicates are being applied at this step – in this case, the join condition for joining the two input streams. It also tells us the “filter factor” – the fraction of rows DB2 expects this particular predicate to qualify. In this case, DB2 expects that 0.00586% of rows will match.
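As a back-of-the-envelope check (this is just arithmetic on the numbers shown in this plan, not anything DB2 exposes programmatically), you can multiply the filter factor by an input cardinality and arrive at the estimates you see in the graph:

```python
# Sanity-checking the optimizer's cardinality arithmetic with the
# filter factor and cardinalities shown in this explain plan.

table_card = 360107        # rows in WSCOMUSR.SCHSTATUS (from the graph)
join_ff    = 5.86304e-05   # filter factor of Q2.SCSJOBNBR = Q1.SCCJOBREFNUM

# Rows expected to qualify on the inner leg: filter factor x table cardinality.
inner_rows = join_ff * table_card
print(round(inner_rows, 4))     # 21.1132 -- matches the estimate at operator 8

# Expressed as a percentage, the filter factor is the 0.00586% quoted above.
print(round(join_ff * 100, 5))  # 0.00586

# NLJOIN output estimate: outer cardinality x inner per-outer-row cardinality.
outer_rows = 1.13333            # estimated rows from the outer FETCH (operator 3)
print(outer_rows * join_ff)     # ~6.6448e-05, close to the plan's 6.64478e-05
```

The tiny difference on the last line comes from DB2 carrying more internal precision than the rounded values printed in the plan.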
The Input Streams section tells us where this operator is getting data from – it can be directly from objects, or it can also be from the results of other processing for this query, as it is in this case. In describing the input streams, DB2 tells us how many rows it expects to be handling, along with which columns those rows are composed of.
Finally, the Output Streams section defines what is passed on to the next operator in the query plan – both the number of rows expected and the columns that comprise those rows.
There is an incredible amount of data here. Don’t expect to be able to understand every single bit of it. There are details here that some DBAs may never need to know. But it is still an area I return to when I have some time to try to better understand something that DB2 is trying to tell me. I plan on an additional post in this series just to cover join types and some of their details.
Details on Objects
At the very bottom of the explain output, you’ll see a section on the objects accessed by the query. In part, it looks like this for our example:
Objects Used in Access Plan:
---------------------------

        Schema:   WSCOMUSR
        Name:     I0000321
        Type:     Index
                        Time of creation:               2012-12-04-126.96.36.199190
                        Last statistics update:         2013-06-01-00.01.14.990000
                        Number of columns:              1
                        Number of rows:                 17056
                        Width of rows:                  -1
                        Number of buffer pool pages:    617
                        Distinct row values:            No
                        Tablespace name:                USERSPACE1
                        Tablespace overhead:            7.500000
                        Tablespace transfer rate:       0.060000
                        Source for statistics:          Single Node
                        Prefetch page count:            192
                        Container extent page count:    32
                        Index clustering statistic:     0.989902
                        Index leaf pages:               43
                        Index tree levels:              2
                        Index full key cardinality:     40
                        Index first key cardinality:    40
                        Index first 2 keys cardinality: -1
                        Index first 3 keys cardinality: -1
                        Index first 4 keys cardinality: -1
                        Index sequential pages:         36
                        Index page density:             65
                        Index avg sequential pages:     36
                        Index avg gap between sequences:0
                        Index avg random pages:         6
                        Fetch avg sequential pages:     -1
                        Fetch avg gap between sequences:-1
                        Fetch avg random pages:         -1
                        Index RID count:                17075
                        Index deleted RID count:        19
                        Index empty leaf pages:         0
                        Base Table Schema:              WSCOMUSR
                        Base Table Name:                SCHCONFIG
                        Columns in index:
                                SCCPATHINFO(A)
        Schema:   WSCOMUSR
        Name:     SCHCONFIG
        Type:     Table
                        Time of creation:               2012-12-04-188.8.131.52578
                        Last statistics update:         2013-06-01-00.01.14.990000
                        Number of columns:              16
                        Number of rows:                 17056
                        Width of rows:                  65
                        Number of buffer pool pages:    617
                        Number of data partitions:      1
                        Distinct row values:            No
                        Tablespace name:                USERSPACE1
                        Tablespace overhead:            7.500000
                        Tablespace transfer rate:       0.060000
                        Source for statistics:          Single Node
                        Prefetch page count:            192
                        Container extent page count:    32
                        Table overflow record count:    0
                        Table Active Blocks:            -1
                        Average Row Compression Ratio:  0
                        Percentage Rows Compressed:     0
                        Average Compressed Row Size:    0
The above is for one of the indexes and one of the tables in this explain plan. Again, there is a lot of data here. The data in this section is independent of the query we’re running – it would be the same data shown for completely different queries. It shows us things like statistical data about the tables and indexes – number of rows, width of rows, how much space the entire object would take up in the buffer pool, etc. If our runstats were not current, the data here might not be fully accurate.
Operators Which Appear in Access Plans
The list of operators which can appear in access plans is actually really long. Below are the ones that DBAs supporting single-partition databases without much need to deal with XML should be at least generally familiar with. The complete list is available in the DB2 Information Center.
- DELETE – represents the deletion of rows from a table.
- FETCH – represents the fetching of columns/rows from a table.
- FILTER – represents how data is filtered.
- GENROW – used by the optimizer to generate rows of data. See my post on Fun with IN-lists and GENROW
- HSJOIN – represents hash joins, for which the qualified rows from tables are hashed.
- INSERT – represents the insertion of rows into a table.
- IXAND – represents the ANDing of the results of multiple index scans.
- IXSCAN – represents just about any kind of access to an index.
- MSJOIN – represents a merge join.
- NLJOIN – represents a nested loop join.
- RETURN – represents the return of data from a query.
- RIDSCN – represents the scan of a list of row identifiers (RIDs).
- SORT – represents the sorting of rows in a table.
- TBSCAN – represents table scans.
- TEMP – represents the storage of data in a temporary table.
- UPDATE – represents the updating of data in the rows of a table.
One quick note on operators – every access to an Index is reported as an IXSCAN, whether it is truly a scan or not. This can be a bit frustrating when trying to optimize index access, but is good to know when you’re digging down to that level.
db2exfmt is not hard to use, and really provides more data than we could actually use for most queries. Understanding how to do an explain and the basics of reading an explain plan is a core skill for DB2 DBAs, but is one of those areas that takes only a moment to learn, but a lifetime to master.
SlideShare from a brilliant source
Advanced DBA Certification Guide and Reference for DB2 Universal Database v8 for Linux, UNIX, and Windows by Dwaine Snow and Thomas Xuan Phan – chapter 6
IBM DB2 Information Center page on db2exfmt
For parameter markers and temporary table scenarios (e.g. DGTTs and CGTTs), you can consider using the new explain from section functionality introduced in DB2 9.7… this also introduces the ability to get section actuals in the explain information.
I hope you get a chance to cover this in a future post! 🙂
Thanks for the good reading ,
That does look interesting. http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.admin.perf.doc/doc/c0056389.html
Nice example Ember.
When I read your query, terms like “NLJOIN” and “I’d code a join” flashed through my head. Later on you see that the optimizer had the same thoughts and re-wrote it to be a join.
Good reading, keep it up!
Ember, I love you! I am a relatively new DB2 LUW DBA and you are making me a better DBA every week. Your blog is fantastic and more helpful than I can explain. I have gained a tremendous amount of value from so many of your posts. You should take great pride and satisfaction knowing how your hard work is improving the DB2 community. Keep up the good work and thank you for making me better and more satisfied with my job.
Well, thanks! I always love to hear that the stuff I’m writing is helping people!
Ember, this is one of the most awesome blogs on DB2 – your experience reflects in the way you explain things. Thanks for helping around the world.
Thank you so much. Was so addicted to control center. But this worked just fine.
Hi..Loved ur article..really helpful.
I tried generating an explain plan and got the error SQL0035N: the file “db2exfmt.msg” cannot be opened.
Can you please help me this issue.
Check out this technote- it might help you: http://www-01.ibm.com/support/docview.wss?uid=swg21614629
I read ur article on binding as well and it solved my issue completely.
It’s really useful to me :). Thanks so much!
Hi, very nice. But my requirement is to explain many select statements in a single execution. Can anyone help with this?
Hi, Ember 🙂
I’m sort of an accidental DB2 DBA, and I’ve learned so much from what you’re doing, here. Much obliged.
Quick question: It seems like the parallelism value under the Database Context is always set to “none” in db2exfmt. Do I have to capture an actual plan to get the parallelism? Or am I misinterpreting something, perhaps?
Thanks again. 🙂
Do you have parallelism enabled via intra_parallel and possibly setting the dft degree? Are they queries with larger result sets that would benefit from parallelism?
Research settings and drawbacks before enabling, if it’s not enabled.
Yeah – It’s a huge query. Runs for hours.
dft_degree is set to ANY.
intra_parallel is set to “NO”, which I suppose should be changed. However, we really don’t have a partitioned database, so I’m not sure how important that is?
You will not see any parallelism if INTRA_PARALLEL is set to NO. See this blog entry for some of the gotchas on that: Parameter Wednesday: DBM CFG – INTRA_PARALLEL. Parallelism is not good for all queries.
Thanks so much for your time in this!
It’s helped me a lot.
I’ll check out the link.
Thanks for the blog.
I have one query.
Do we have optimized way to perform self join?
“It Depends”. I like to do self-joins as common table expressions. I’m not sure as to the performance comparisons of other methods.
We are executing an SQL statement with db2advis but it is returning the error shown below.
Explain tables not set up properly for schema
The insert into the ADVISE_INSTANCE table has failed.
0 solutions were evaluated by the advisor
exiting with error code [-219]
I had rerun the explain.ddl but i got same error. Please suggest me on this.
I would suggest dropping and recreating the explain tables using SYSINSTALLOBJECTS. See Keeping EXPLAIN Under Control.
Thanks for the support. I am a fresher DB2 DBA. After reading an explain, how can I analyze the problem and come up with a solution?
Please help me
I could literally write a book on that.
As Ember says, that’s a big topic. You can, however, come at learning that topic in a methodical and problem-focused way. In the first node of the explain plan, you’ll find a sort of key (like a map-key).
It tells you that:
– the top # is rows returned.
– the next thing will be text that tells you the operation performed.
– the next # is just the identifier assigned by db2 so you can explore further in the txt portion of the output.
– The next # is the overall COST
– The next # is the I/O
I like to go through this and look for the biggest overall cost, and then see what I can see from there.
You can research the operation to understand more about what it is doing and how to tune it.
This reference might help: https://www.ibm.com/developerworks/community/wikis/form/anonymous/api/wiki/d0965fc1-c587-474c-a1db-a3b6c10246a5/page/45963997-244c-4555-a9f1-30de986867e6/attachment/1530225e-e643-4c39-a25e-29b3caa5b8f5/media/explains_V1.2.pdf
But, for tuning a query, there MIGHT be a shortcut. Often, making sure any arguments in your where clause are indexed will have a dramatic impact. From there, you get more into using judgement, but sometimes you can also create a covering index (all columns in the query are in an index) etc.
I’m looking for your help with steps to optimize a customer query in the best way. Could you share links if possible?
I, and many DBAs, could write a book on this. You’ll find a number of helpful articles in my SQL category.
Thank you, Ember. Great article.