Explain Part 1 – Explain and the DB2 Optimizer

Generating an Explain Plan is easy…
Properly reading an Explain Plan is harder…
Taking appropriate action is harder still.

When I started my career as a physical DBA with IBM Global Services, generating Explain Plans or even general analysis of SQL was not really a part of my job description. As I learned more about performance tuning and the limitations of what can be done without addressing SQL in any way, I MADE it part of my job description, offering suggestions on indexing and other details to developers and development DBAs. Looking at SQL is something that can actually span different DBA job roles.

Overall, Explain is a long and complicated topic. To start, I thought I’d address some of the basics, and I will lengthen the series as needed. I plan two more posts in this series: one addressing Visual Explain, and one addressing command-line explains using db2exfmt.

The DB2 Optimizer

The DB2 Optimizer’s job is to determine the best access path to the data in a database for a given Data Manipulation Language (DML) SQL statement. The DB2 Optimizer is a cost-based optimizer, which means it calculates the cost of multiple access paths (in a unit called Timerons) and then chooses the cheapest one. One of the advantages of the SQL language is that we don’t have to know where a particular piece of data lives on disk or in memory. The DB2 Optimizer handles all the gory details.

The calculated cost relies on factors such as:

  • Configuration parameters on a system which control how resources are used
  • Database design – tablespaces, indexes, MQTs, etc.
  • Statistical data about the data
  • DFT_QUERYOPT database configuration parameter or CURRENT QUERY OPTIMIZATION special register (a quick way to check both is shown just below)
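
For example, checking both values on a database named SAMPLE (a placeholder name) looks something like this:

    # Database-wide default optimization class
    db2 get db cfg for SAMPLE | grep -i QUERYOPT

    # Optimization class in effect for the current session
    db2 "VALUES CURRENT QUERY OPTIMIZATION"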

What is a Timeron?

A timeron is a unit of measure for time that CANNOT be converted to a real unit of time. A timeron’s actual duration depends on the underlying hardware. Supposedly, on some system at some point in history, it was roughly equivalent to a millisecond, but that’s just a rumor, and there is no real correlation on today’s hardware. If you want real times, you can use a tool like db2batch, but remember to account for performance factors like the difference between reading from disk and reading from the bufferpool.
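
As a minimal sketch of the db2batch approach, assuming a database named SAMPLE and a file query.sql containing the statement to time (check db2batch’s options on your version, as I’m writing these from memory):

    # Run the statement(s) in query.sql and report elapsed-time details;
    # "-o r 0" suppresses the fetched rows in the output
    db2batch -d SAMPLE -f query.sql -i complete -o r 0

Running the same query twice and comparing the two runs is an easy way to see the difference a warm bufferpool makes.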

What is a timeron good for if we cannot use it for actual time calculation? It is used for comparing the relative performance of the same query (or queries) as changes are made. When we look at Explain Plans in future entries in this series, you’ll see where to find this value.

Statistics

You may have heard that current statistics are critical to DB2 performance. The reason is that the DB2 Optimizer relies on statistics when deciding which access path will be the least costly. This is also why I strongly recommend distribution statistics if you have dynamic SQL that does not use parameter markers. If you do not collect distribution statistics, there are a number of situations where DB2 will assume a uniform distribution of values. In an e-commerce database, there are a large number of things that are not evenly distributed. Does an electronics retailer really sell as many tablet PCs on Black Friday as on any other day of the year? Does a clothing retailer really sell as many snow boots in July as in November?
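
Collecting distribution statistics is just a matter of how you run RUNSTATS. A minimal sketch, using a hypothetical WSCOMUSR.ORDERS table:

    # Collect table, distribution, and detailed index statistics in one pass
    db2 "RUNSTATS ON TABLE WSCOMUSR.ORDERS WITH DISTRIBUTION AND DETAILED INDEXES ALL"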

If statistics are not correct, then DB2’s cardinality estimates for data expected at various points in processing will be off. There can be other reasons that cardinality estimates are off too. You can identify issues with cardinality estimates at all points of your access plan using db2caem. I have a good post on that – db2caem – Get Actual Row Counts in Your Explain Plans. If you’re having problems in this area, you may be able to use statistical views or even optimizer profiles to help the situation.
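
If you want to see exactly where the estimates diverge from reality, db2caem captures actual row counts alongside the estimates for a single statement. A minimal sketch, with placeholder database and table names (see the db2caem post above for real detail):

    # Capture an explain plan with actual cardinalities for one statement
    db2caem -d SAMPLE -st "SELECT * FROM WSCOMUSR.ORDERS WHERE STATUS = 'P'"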

Optimization Levels

The DFT_QUERYOPT database configuration parameter is something that I have rarely seen changed. Yet in understanding it, we can think about some of the things the DB2 Optimizer does. And there are scenarios where you would change this or the CURRENT QUERY OPTIMIZATION special register. A lower number means less optimization (and therefore less optimization time, and likely more query execution time), while a higher number means more optimization (and therefore more optimization time and possibly less query execution time). Theoretically in a pure OLTP database, the queries should be small and simple and therefore require less optimization to get a decent access plan. Theoretically in a pure Data Warehousing database, the queries are much more complex and take longer, and the total amount of execution time that optimization represents is much smaller, so more optimization time is worth it. Unless your infrastructure and IT systems are truly huge, you’re unlikely to have databases that really only serve one purpose. Most databases that I support within e-commerce fall in the middle – skewed towards OLTP, but with a reporting workload as well, off hours if nothing else.

The default level is 5, and it works well for many databases. To paraphrase the Info Center descriptions of each level:

  • 0 – Minimal optimization, suitable for simple dynamic SQL access to well-indexed tables.
  • 1 – Optimization roughly equal to that of DB2 Version 1.
  • 2 – Higher than 1, but still inadequate for complex queries.
  • 3 – A moderate amount of query optimization.
  • 5 – Significant optimization, including the consideration of MQTs; heuristic rules are used to limit optimization time.
  • 7 – Significant optimization, like 5, but without the heuristic rules.
  • 9 – Maximal optimization for very complex queries on very large tables; may not be appropriate as a default level for everyday use.
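
Changing the level can be done for a single session or as the database-wide default. A minimal sketch, again using a placeholder SAMPLE database:

    # Session-level: affects dynamic SQL on this connection only
    db2 "SET CURRENT QUERY OPTIMIZATION = 3"

    # Database-wide default, used when nothing overrides it
    db2 update db cfg for SAMPLE using DFT_QUERYOPT 3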

DB2_REDUCED_OPTIMIZATION

I commonly see the DB2 registry parameter DB2_REDUCED_OPTIMIZATION set. In fact, the special workload setting for WebSphere Commerce includes this:

[i] DB2_REDUCED_OPTIMIZATION=INDEX,UNIQUEINDEX,JOIN,NO_SORT_MGJOIN,JULIE [DB2_WORKLOAD]

Now some of those settings are not documented in the Info Center page that covers DB2_REDUCED_OPTIMIZATION, so my theory is that IBM uses this as a place to customize optimization for specific vendors or database types. This may even be one of those tricky Registry parameters where IBM tries out new functionality before incorporating it as the default. What the heck is “JULIE”, anyway?

In any case, if you’re working through optimizer issues, you may want to consider setting this before moving to an entirely different optimization level.
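
Like other DB2 registry variables, it is viewed and set with db2set, and the change generally does not take effect until the instance is recycled. YES is one documented value; vendor-specific strings like the WebSphere Commerce one above are another:

    # Show the current value, if set
    db2set DB2_REDUCED_OPTIMIZATION

    # Enable basic reduced optimization (db2stop/db2start for it to take effect)
    db2set DB2_REDUCED_OPTIMIZATION=YES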

Version and Fix Pack Changes

I fully believe that the optimizer team is one of the most active in DB2 development. There are frequent changes in this area. I’ve heard of more than one case where a critical query was running particularly fast (or slow) before an upgrade or migration, and after the upgrade or migration it ran slower (or faster). When Explain Plans were analyzed, a different access path was being used. So, this is something to look at if you are having performance problems after an upgrade or migration.

Query Compilation Phases

The IBM DB2 Info Center tells us the phases that the query compiler goes through.

  1. Parse Query – The compiler first verifies the syntax of the query. If it finds an error, it returns that error to the user or application and stops processing. If parsing succeeds, it stores an internal representation of the query in the query graph model.
  2. Check Semantics – The compiler checks the query for consistency, making sure things like data types match up. It also adds ‘behavioral semantics’ to the query graph model, which includes the effects of such things as:
    • Referential constraints
    • Check constraints
    • Triggers
    • Views
  3. Rewrite Query – The compiler transforms the query into a form that can be optimized more easily. It stores this form in the query graph model, and we can see it using Explain. For example, the compiler might perform general predicate pushdown, moving the level at which a predicate is applied to improve query performance.
  4. Pushdown Analysis – For federated databases only, the compiler analyzes whether an operation can be ‘pushed down’ to occur at a data source.
  5. Optimize Access Plan – This is the portion of the process that I have been referring to as the DB2 Optimizer. In this phase, the optimizer portion of the compiler uses all of the factors discussed earlier in this article to generate multiple possible access plans and choose the one that is estimated to be the least expensive. Out of this step comes an access plan that we can capture with Explain.
  6. Remote SQL Generation – For federated databases only, the compiler generates efficient SQL to execute at each data source, based on the specific SQL dialect of each source.
  7. Generate Executable Code – In this final step, the compiler uses the query graph model and the access plan to create an executable section for the query. In this phase, the compiler also avoids repeating expressions that need to be computed only once, such as host variables or code page conversions.

What is Explain?

I have mentioned Explain, the Explain Facility, or Explain Plans several times already. Most simply put, an Explain Plan lets us see how DB2 plans to get the results that our DML SQL asked for. It shows, in excruciating detail, every bit of data, every object, every join, and every sort involved. It contains everything that is needed to retrieve and return the data.
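
As a small preview of what those posts will cover in depth, capturing and formatting a plan from the command line looks roughly like this. SAMPLE and the query are placeholders, the EXPLAIN.DDL path assumes you are the instance owner, and the explain tables only need to be created once:

    # One-time setup: create the explain tables in the current schema
    db2 connect to SAMPLE
    db2 -tf ~/sqllib/misc/EXPLAIN.DDL

    # Populate the explain tables without actually executing the query
    db2 "EXPLAIN PLAN FOR SELECT * FROM WSCOMUSR.ORDERS WHERE STATUS = 'P'"

    # Format the most recently captured plan into a readable text file
    db2exfmt -d SAMPLE -1 -o explain_output.txt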

In the next two posts in this series, I’ll show you the two primary methods of generating an Explain Plan and some details on how to read Explain Plans.

