Differences Between Db2 and MySQL

With a year of fairly extensive MySQL experience under my belt, and uncertainty as to which exciting direction my career will take me next, I wanted to take a moment to write about some of the differences between Db2 and MySQL at a fairly high level.

Some of this is actual technical differences and some of it is just differences in the best practices or what is usually done. I’ll also toss in a few PostgreSQL gems when I know them, but my knowledge there is more limited.


Terminology

I wanted to share this section early on. Terminology can be confusing, and there are a couple of terms that I found difficult to navigate at first.

What MySQL calls a database, Db2 calls a schema. In Db2, a database is a combination of files, memory areas, and processes that contains multiple schemas. In MySQL, once you have a mysql command line open, you use USE <schema> to switch. In Db2, we would call this the default schema. Working with Db2, we’re much more likely to fully qualify object names with the schema. To switch schemas in Db2, we use SET CURRENT SCHEMA <schema>, and to change databases, we have to issue a CONNECT statement for a different database.
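As a quick illustration of the mapping (the schema and database names here are hypothetical):

```sql
-- MySQL: switch the default database (what Db2 would call a schema)
USE sales;

-- Db2: switch the default schema within the current database
SET CURRENT SCHEMA sales;

-- Db2: changing databases requires a new connection
CONNECT TO proddb;
```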

What Db2 calls a latch is called a mutex in MySQL.

MySQL seems to eliminate the space between table and space and use tablespace, where Db2 mostly uses table space except in actual commands. Probably no one notices this unless they are putting down a couple of thousand words on the differences and switching back and forth between the platforms.

Db2 can generally be thought of as two products. I cover Db2 on Linux, UNIX, and Windows (LUW). This was formerly called Mid-range or UDB. Db2 on z/OS is a product for the mainframe that is significantly different, and I don’t have experience with it.


Page Size

Each database system I’ve worked in extensively uses a page or something similar as the minimum unit of I/O. MySQL uses a 16K page by default. This can be changed only at the MySQL instance level, and it is chosen at instance creation time. Possible page sizes for MySQL include 4K, 8K, 16K, 32K, and 64K. I haven’t seen a page size other than 16K in use. PostgreSQL is similar, in that the page size is chosen at a high level for everything and is immutable without re-creating and moving everything.

Db2 has the ability to use different page sizes within the same instance, database, and schema. Page sizes are defined at the table space level, and valid options include 4K, 8K, 16K, and 32K. It is important to ensure you have buffer pools and temporary table spaces to support each of the page sizes in use. In practice, OLTP systems often use 4K pages, and analytics systems generally use 32K pages.
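For example (object names here are hypothetical), creating a 32K buffer pool along with the regular and system temporary table spaces that use it might look like:

```sql
-- A 32K buffer pool, plus regular and system temporary table spaces that use it
CREATE BUFFERPOOL bp32k SIZE AUTOMATIC PAGESIZE 32K;
CREATE TABLESPACE ts_analytics PAGESIZE 32K BUFFERPOOL bp32k;
CREATE SYSTEM TEMPORARY TABLESPACE tempts32k PAGESIZE 32K BUFFERPOOL bp32k;
```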

Tablespaces and Files

In Db2, it is most common to have a small set of table spaces and many tables in those table spaces. I have heard of two environments in the last decade that used a table space for each table, but it is rare, and usually a holdover from the bad old days 20 years ago when a LOAD operation locked an entire table space.

MySQL can also group tables together, but the more common approach is ‘file per table’. In MySQL this allows the space for a table to be released if the table is dropped or if a large amount of data is deleted. With file per table, if an OPTIMIZE TABLE is run, the size of the file will be reduced if possible.
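For example (the table name here is hypothetical):

```sql
-- Confirm file-per-table is enabled (the default since MySQL 5.6)
SHOW VARIABLES LIKE 'innodb_file_per_table';

-- Rebuild the table, shrinking its data file where possible
OPTIMIZE TABLE orders;
```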

For me in Db2, the top considerations for how to locate tables within table spaces lie in splitting things across buffer pools, choosing different page sizes, and spreading data across multiple table spaces to enable parallelism on backup.

With the right syntax, Db2’s REORG will release space back to the filesystem after the reorg, where possible.
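A sketch of what that can look like (table and table space names are hypothetical; REDUCE MAX applies only to reclaimable-storage table spaces):

```sql
-- Reorganize the table, then return freed extents to the filesystem
REORG TABLE app.orders;
ALTER TABLESPACE ts_data REDUCE MAX;
```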

Db2 table spaces can also be spread across multiple physical or logical volumes. This stripes the data and can achieve efficiencies when dealing with slower storage. Frequently, we consider disk layout carefully, and if fast storage is not available for everything, we may put certain table spaces or the logs on the fastest disks available. Generally in MySQL, we use just one logical disk for everything, and spreading data can lead to issues.


Logging

Logging in MySQL has seemed rather complicated to me. There are three main logging locations – the undo tablespaces/logs, the redo logs, and the binary logs, if they are enabled. Redo logs and undo logs are used for crash recovery. They are also used for multi-version concurrency control. Binary logs are used for replication and can also be used for point-in-time recovery. Depending on the binlog format setting, binary logs may contain the actual statements run against the database (STATEMENT format) or binary row images (ROW format, the default in recent versions).

In Db2, we have only the transaction logs. Db2 transaction logs contain before and after versions of the row for each change made, which means they serve both undo and redo purposes. They are also used for most types of replication, and are only partially readable through an API. For the most part, the logs are binary and not human-readable, though LOBs may be readable. There are some common problems and configurations that are important to understand about Db2 logs, though IBM is making strides with things like advanced log space management to make this easier.

Object Structure

I love learning about the internals and how a database platform is really doing things at a fairly low level. This often helps inform performance and problem investigations. MySQL (and MS SQL for that matter) stores the table data within the primary key index for the table. This means that everything is in an index, essentially. The main structural difference between the primary index and any other index is the additional columns that are stored on the leaf pages.

Db2 stores table data in different structures than indexes (though still organized into pages), generally in rough insert order, but with no specific order guaranteed for most table types. We do have the concept of a clustering index in Db2. When we define a clustering index, Db2 tries to keep the table in the order of that index, but still does not guarantee order. Frequent reorgs are often a good idea to maintain that order. Usually the best clustering index is NOT a unique index, but a low-cardinality column that is frequently part of query conditions.
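Defining a clustering index uses the CLUSTER keyword (the names here are hypothetical):

```sql
-- A low-cardinality clustering index; Db2 will try to keep rows in this order
CREATE INDEX app.ix_orders_region ON app.orders (region) CLUSTER;
```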


Index Types

Db2 has only one kind of index – B+ tree indexes. MySQL primarily uses B+ tree indexes, but also offers a hash index type, which can only be used for equality comparisons. PostgreSQL offers many different types of indexes. MySQL and PostgreSQL include index types for spatial and text data, while Db2 tends to rely on extenders for that kind of data.

Db2 does provide some of the functionality of other index types through specific features. For example, when you use multi-dimensional clustering, there are block indexes that are maintained by Db2. Or when using BLU Acceleration, synopsis tables use data skipping to reduce the amount of data to be processed. These optimizations can be great for specific workloads, but are not implemented through index types.

Every column in an index has an order associated with it. In both MySQL and Db2, you can specify ascending or descending. MySQL (at least through 5.7) doesn’t allow a combination of directions – one column in ascending order while another column is in descending order. Db2 does allow this mixture.

MySQL does not have an INCLUDE option on unique indexes to include columns that are not part of the uniqueness constraint to allow index-only access. Db2 does offer this syntax.
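The Db2 syntax looks like this (table and column names are hypothetical):

```sql
-- Uniqueness is enforced on customer_id only; last_name and city ride along
-- on the leaf pages to enable index-only access
CREATE UNIQUE INDEX app.ix_cust ON app.customers (customer_id)
  INCLUDE (last_name, city);
```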

Buffer Pools

MySQL and Db2 both have buffer pools, and they generally serve the same purposes. There is less configurability around them in MySQL, which allows us to specify only the number and size of buffer pools. In Db2, we can create multiple buffer pools with different page sizes and assign them to specific table spaces. This allows us to do things like isolating buffer pool space for specific tables or sets of tables. Ideally, we separate tables that are often scanned from those that are primarily accessed by primary key or another efficient index. That said, small databases often just use the default buffer pool for everything. Db2 also has more sophisticated prefetching algorithms to get data into the buffer pool before it is needed.

Backup and Restore

MySQL provides tooling to do a logical backup of a database – mysqldump/mysqlpump/mydumper, and there are other tools along those lines. Realistically, in the MySQL world, it is more common to use Percona XtraBackup or disk snapshots for backup and recovery. Because all of the data is on one logical disk, there’s no problem taking the snapshot at any time, though MySQL will go through crash recovery when a backup is restored.

In Db2, the primary methodology for backups is the BACKUP command, which takes a binary page-by-page image of the database and writes it to a file or several files (or to a backup management system). Snapshot backups are also possible, though they generally involve using SET WRITE SUSPEND to pause write activity for the second or two it takes to take the snapshot, and then crash recovery when the backup is restored. The snapshot methodology is often used for large databases, but often isn’t bothered with for smaller environments.
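A minimal example of the BACKUP command (the database name and path are hypothetical; run from the Db2 command line):

```sql
-- Online backup to disk, compressed, including the logs needed to make the
-- restored image consistent
BACKUP DATABASE proddb ONLINE TO /db2backups COMPRESS INCLUDE LOGS;
```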

One of the key differences for Db2 is point in time recovery. With nearly every restore, we rollforward at least to the end of the backup, but generally it is trivially easy to roll forward to any point in time we have transaction log files for. In the MySQL world, while point in time recovery is possible using the binlogs, I have yet to talk to a MySQL DBA who actually designs for and does this. It is far more common to just restore a backup with no rollforward whatsoever. This does mean that in Db2 we are very careful about managing the retention of archived transaction log files.
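A sketch of a point-in-time restore (the database name, path, and timestamps are hypothetical placeholders):

```sql
-- Restore a specific backup image, then roll forward through the transaction
-- logs to a chosen point in time
RESTORE DATABASE proddb FROM /db2backups TAKEN AT 20240315120000;
ROLLFORWARD DATABASE proddb TO 2024-03-15-14.30.00 USING LOCAL TIME AND STOP;
```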

Query Optimization

Perhaps the one area I can confidently say Db2 does better than MySQL is in query optimization.

First, the package cache. In Db2, when an access plan is calculated for a query, it is stored in the package cache. The plan can then be reused for future executions of the query, which reduces query execution time by the amount of time it takes to determine an access plan. This does require some attention when queries are submitted to carefully designate which parameters are likely to change, with the proper use of parameter markers. This can also have its own drawbacks when the data distribution is very uneven – the more general access plan may not be the best choice in those cases. Db2 also collects basic summarized statistics on queries in the package cache, which makes it generally easy to find which queries are particular problems in terms of database resources.
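As a sketch of both ideas (the table and column names are hypothetical): a parameter marker keeps one reusable plan in the package cache, and the MON_GET_PKG_CACHE_STMT table function exposes the summarized metrics for the statements in it.

```sql
-- A parameter marker (?) lets one cached access plan be reused across values
SELECT order_id, total
FROM app.orders
WHERE customer_id = ?;

-- Find the statements consuming the most activity time in the package cache
SELECT num_executions, total_act_time, SUBSTR(stmt_text, 1, 100) AS stmt
FROM TABLE(MON_GET_PKG_CACHE_STMT(NULL, NULL, NULL, -2))
ORDER BY total_act_time DESC
FETCH FIRST 10 ROWS ONLY;
```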

The MySQL query cache is not the same thing at all – it caches entire result sets keyed on the exact query text, and it was removed in MySQL 8.0.

The Db2 optimizer has three decades of investment behind it and a number of tools to understand what it is doing. There are more rewrite options available to it, more join types, and a much heavier focus on cost when choosing access plans. Keep in mind that my focus in MySQL was on 5.7.X, and there was more of a movement towards cost-based optimization in 8.0.X. As with any choice an RDBMS makes, the consequence of a cost focus is that RUNSTATS must be collected, kept up to date, and tweaked to match the workload and data model. It is much more common in MySQL to nudge the optimizer to choose a particular access methodology using keywords in the query. While this is possible in Db2, using comments and optimizer hints, it is much rarer.

Locking, Isolation Levels, and Transactions

This is one of the areas I find is least understood by developers both on MySQL and Db2. Isolation levels are roughly equivalent in Db2 and MySQL. Db2 only offers a version of MVCC on the cursor stability isolation level. Depending on your isolation level, it is entirely possible for readers to block writers in Db2. This means that DBAs tend to spend more time educating developers on locking and isolation levels in Db2. Oddly, the amount of time spent actually dealing with deadlocks and other negative locking phenomena for an established system seems pretty similar to me. When starting up a newer application, Db2 may require some additional time and troubleshooting in this area.

In both Db2 and MySQL, there is relatively little a DBA can do about deadlocks or lock timeouts, other than work with the developers to help them understand where the problems lie. Ultimately the developers or vendor generally need to change their methodology to eliminate locking problems.

In Db2 we have something called a super-exclusive or Z-lock. The closest equivalent in MySQL is a metadata lock, though there are other kinds of metadata locks.


Regular Maintenance

In Db2, we spend time making sure our runstats are up to date, which is critical for query performance. We also automate reorgs, which are generally less frequent, but critical for performance and for disk space long-term. Each of these has a lot of options around it to be aware of and tweak.

MySQL’s optimizer in 5.7 does not use statistics very heavily, and the much-reduced set of statistics it does collect means that estimates based on them are generally assumed to be inaccurate. MySQL 8.0 adds a lot of functionality here that I’m not fully familiar with.

As a consultant or when working with an unfamiliar Db2 system, the first question I would ask when looking at a query performance issue is when runstats were last run. If you use automatic statistics or have a good plan around regularly updating statistics, it is easy to get out of this habit and start assuming they’re OK.
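For example (the schema and table names are hypothetical), checking statistics currency and then collecting statistics might look like:

```sql
-- When were statistics last collected? A NULL stats_time means never
SELECT tabschema, tabname, stats_time
FROM syscat.tables
WHERE tabschema = 'APP'
ORDER BY stats_time NULLS FIRST;

-- Collect table, distribution, and detailed index statistics
-- (RUNSTATS is a CLP command; it can also be run through ADMIN_CMD)
RUNSTATS ON TABLE app.orders WITH DISTRIBUTION AND DETAILED INDEXES ALL;
```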

All of this adds up to the fact that there’s more regular maintenance to automate or configure in Db2.

High Availability and Disaster Recovery

The main solution for HA and DR in Db2 is HADR. It is replication based on log records, with tie-ins to transaction commit depending on your durability requirements for the standby. It is generally easy, though it requires another facility to automate failover, and it also requires monitoring, as it can silently fail. Generally people don’t run read traffic on the standbys due to some limitations there, and there’s no canned ability to fan read activity out to servers. There is also a hard limit of three standbys. There are other options for HA and DR, including pureScale (a shared-disk solution designed for high-concurrency environments) and several forms of replication that add complexity but also flexibility.

In MySQL the primary solution is binlog-based replication. In the implementations I’ve seen, this essentially logs the statements run on the primary and replays them on the standby – which can have interesting implications and hacks – though with the ROW binlog format, binary row images are shipped instead. On the other hand, the MySQL ecosystem includes several options to fan reads out to the standbys, which allows for some incredible throughput for the right workload with a number of replicas. The topology choices are far more varied for this form of replication than for HADR, though Db2’s other replication methodologies can do similar things.

MySQL doesn’t have much that works reliably across version boundaries, while Db2 replication (not HADR) can work across version boundaries.


Ecosystem

Though some parts of the ecosystem you might use around MySQL vary in how long and how well they are actively developed, there is much more of an ecosystem. The online schema changes offered by LHM or gh-ost are pretty neat. The query routing offered by ProxySQL is really slick. The tools available from random contributors and from more dedicated and stable sources like Percona can be very powerful.

Db2 LUW doesn’t have this ecosystem around it. There are a few providers, particularly of performance-related tools, though they tend to cost a lot. This is interesting because though the tools for Db2 on z/OS also cost a lot, they certainly appear to be vibrant and very active in the Db2 community.


Scaling

Generally with Db2, we scale up. This means increasing the ‘size’ of the server Db2 runs on in one or more dimensions (CPU, memory, network speed, disk speed, etc.). When something gets too big to scale up, it can move to pureScale or even to the mainframe.

With MySQL, the focus is more on scaling out. This means adding more servers with their own copies of the database, or sharding the data across multiple servers. Adding more replicas to scale out read access alone can go a long way, and a dozen or more is not unheard of. When something gets too large for that, sharding at the application level or pulling in an ecosystem tool like Vitess is a common direction to go.


Summary

Overall, it has been fascinating to me to learn the things that MySQL and Db2 do differently. Many things are simply choices with different implications, and for few of those choices could you really say one way is better than another. Generally, database platforms are trying to solve the same problems.

Ember Crooks

Ember is always curious and thrives on change. She has built internationally recognized expertise in IBM Db2, spent a year working with high-volume MySQL, and is now learning Snowflake. Ember shares posts both about her core skill sets and about her journey learning Snowflake.

Ember lives in Denver and works from home.



  1. One correction here:
    “Db2 can generally be thought of as two products. I cover Db2 on Linux, UNIX, and Windows (LUW). This was formerly called Mid-range or UDB. Db2 on z/OS is a product for the mainframe that is significantly different”

    Actually, the Db2 family also contains a third, also distinct, product: Db2 for IBM i (formerly IBM i/5, formerly AS/400). It is quite heavily integrated with a rather unusual operating system.

  2. Amazing post, thanks for sharing it!

    There is an important inaccuracy in the replication part. I report it in the hope of helping to improve the article, I don’t want to criticise your brilliant work.

    MySQL binary log has 3 formats: STATEMENT, ROW, and MIXED. The one you described is obviously STATEMENT. The default in recent versions is ROW, which logs the row writes in binary format. This is the fastest for replication, but it also makes the binary log bigger. For this reason, some other settings allow you to control whether to include some extra information or not: the old values when a row is updated or deleted, and some metadata about the columns. Logging extra information is typically required by CDC tools and not necessary otherwise. The last format, MIXED, is essentially the STATEMENT format, but when a query is not deterministic (or MySQL thinks it is not deterministic) the ROW format is used. DDLs are always logged as STATEMENT.

    MySQL has also a plugin for semi-sync replication. It works similarly, but the master reports success only after the query has been received by at least one replica.

    Another option is InnoDB Cluster, which is a synchronous, multi-master replication mechanism. MariaDB has a more mature technology called Galera for the same purpose. Percona Server has both.

    There might be more inaccuracies, but I haven’t had the time to read the article carefully. In case I find more, I might report them to you if you don’t mind.

    Thank you again for the great post.
