What is a Backup?
A backup is a binary image of the entire Db2 database. It is therefore roughly the size of the database unless you compress it. Creating one involves reading every page in the database and writing it out to a file or backup image.
It is not a "database dump," which contains the SQL to re-create the database and re-populate it. A dump is far slower both to take and to restore than a binary image. Backups are used for a number of things, but are primarily used for disaster recovery or for data movement. Please consider your restore goals before relying on a backup to meet them. A full HA and DR strategy should be developed.
A backup/restore is nearly always the fastest way to get a whole database from one place to another, especially without much pre-planning.
Read more about database backups in my DB2 Basics: Backups of Data and Configuration blog entry.
The Rules of Db2 Database Backups
- A database backup cannot be restored across database vendors.
- A database backup cannot be restored across operating system families. The OS families are:
- Windows
- Big-endian Linux and UNIX – such as AIX or Linux on Z-series
- Little-endian Linux and UNIX – includes most flavors of Linux
If you want to restore across families, you need to use another method such as db2move.
- A database backup can usually be restored on a newer version of Db2, as long as that version is not more than two versions above the version it was taken on (there may be exceptions to this rule).
- A database backup cannot be restored on an older version of Db2.
- Rollforward is not supported on a different version of Db2 than the one where the backup was taken. This means the only backups that work for cross-version recovery are offline backups.
- The smallest unit of recovery is a table space. Tables may only be recovered if they are isolated in a table space and have no dependencies. This rule can be revisited if you buy an IBM tool called Db2 Recovery Expert.
- You don’t have a recovery strategy unless you’ve tested it.
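To make the cross-version rule above concrete, here is a minimal sketch of taking an offline backup – the only kind of backup usable for cross-version recovery. The database name SAMPLE and the target path are placeholders for this sketch.

```shell
# Take an offline (cold) backup - the database must have no connections.
# SAMPLE and /db2backups are placeholder names for this sketch.
db2 force application all
db2 deactivate db SAMPLE
db2 backup database SAMPLE to /db2backups compress
```

An offline backup like this is consistent on its own, so restoring it on a newer version requires no rollforward.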
A Useful Backup Utility
Sometimes the validity of a particular backup image really matters. Sometimes you have a backup image, but do not know any of its characteristics. There is a tool for both situations, called db2ckbkp.
Backup Image Verification
This tool is most useful for verifying a backup image. Verification may run as long as the backup itself took. The syntax for that looks like:
db2ckbkp image_file_name
An example of this running looks like this:
PS C:\> db2ckbkp SAMPLE.0.DB2.DBPART000.20190318184324.001

Buffers processed: #######

Image Verification Complete - successful.
There are more pound signs for a larger database, and again, this reads and verifies the entire backup image, so it can take quite a while. Run this command when you are really depending on a particular database backup image. I know DBAs who run it for every backup they take. I do not.
Understanding a Backup Image
Maybe this is a problem particular to consultants, but I do sometimes have a backup image handed to me and know nothing about it – sometimes not even the version of Db2 it was created on. Some of the details can prevent me from being able to do a restore, and others will change my syntax. To extract the details of a backup image, we can use the -H option on db2ckbkp, which dumps header information. The syntax to do that is:
db2ckbkp -H image_file_name
Notice the H here is capital – a lower case one means something different. Running that looks like this:
PS C:\> db2ckbkp -H SAMPLE.0.DB2.DBPART000.20190318184324.001

=====================
MEDIA HEADER REACHED:
=====================
        Server Database Name           -- SAMPLE
        Server Database Alias          -- SAMPLE
        Client Database Alias          -- SAMPLE
        Timestamp                      -- 20190318184324
        Database Partition Number      -- 0
        Instance                       -- DB2
        Database Configuration Type    -- 0 (Non-shared data)
        Sequence Number                -- 1
        Database Member ID             -- 0
        Release ID                     -- 0x1400 (DB2 v11.1.3.3)
        AL Version                     -- V:11 R:1 M:3 F:3 I:0 SB:0
        Database Seed                  -- 0x5B5A7C20
        DB Comment's Codepage (Volume) -- 0
        DB Comment (Volume)            --
        DB Comment's Codepage (System) -- 0
        DB Comment (System)            --
        Authentication Value           -- 255 (Not specified)
        Backup Mode                    -- 0 (Offline)
        Includes Logs                  -- 0 (No)
        Compression                    -- 1 (Compressed)
        Backup Type                    -- 0 (Database-level)
        Backup Granularity             -- 0 (Non-incremental)
        Merged Backup Image            -- 0 (No)
        Status Flags                   -- 0x1
                                          Consistent on this member
        System Catalogs in this image  -- 1 (Yes)
        Catalog Partition Number       -- 0
        DB Codeset                     -- UTF-8
        DB Territory                   -- US
        LogID                          -- 1529506400
        LogPath                        -- C:\DB2\NODE0000\SQL00002\LOGSTREAM0000\
        Backup Buffer Size             -- 4198400 (1025 4K pages)
        Number of Sessions             -- 1
        Platform                       -- 0x17 (NT-64)
        Encrypt Info Flags             -- 0x0

The proper image file name would be:
SAMPLE.0.DB2.DBPART000.20190318184324.001
This header gives me all the information I need to know if I can restore a backup to a specific target and lets me craft a sane restore command.
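For example, given a header like the one above (an offline, compressed, non-incremental database-level backup of SAMPLE taken at timestamp 20190318184324), a restore command might look like the following sketch; the image location is a placeholder.

```shell
# Restore a SAMPLE backup image sitting in the current directory.
# The path and database name are placeholders for this sketch.
db2 restore database SAMPLE from . taken at 20190318184324
# Backup Mode -- 0 (Offline) in the header means no rollforward is
# required; the restored database is immediately consistent.
```

The header's Backup Mode, Compression, and Backup Granularity fields are what tell you whether a rollforward, a decompression library, or a multi-image incremental restore is needed.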
Advanced Backup Options
There are a whole lot more options you can use on the backup command. Generally, you do not want to specify those related to performance – particularly WITH N BUFFERS and BUFFER N. Db2 automatically selects values for these that are good the vast majority of the time. Another one to be careful with is UTIL_IMPACT_PRIORITY – unless you know what you are doing, it probably won't do anything anyway, but its goal is to make a backup take fewer system resources and therefore run slower. Finally, I have a hard time imagining a world where EXCLUDE LOGS is anything other than a recipe for disaster. DBAs rejoiced when the default behavior changed to no longer be EXCLUDE LOGS.
There are plenty of other options on backup that are useful for specific cases. Be sure to thoroughly understand them before using them. Many are confused by the differences between INCREMENTAL and DELTA – when using either, be sure to test a multi-image restore before relying on them.
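A minimal sketch of an incremental strategy, assuming a database named SAMPLE: TRACKMOD must be enabled, and a full backup taken, before incremental backups are possible.

```shell
# Enable modification tracking; a full backup is required afterward.
db2 update db cfg for SAMPLE using TRACKMOD YES
db2 backup database SAMPLE online to /db2backups

# INCREMENTAL: everything changed since the last FULL backup.
db2 backup database SAMPLE online incremental to /db2backups

# DELTA: everything changed since the last backup of ANY type.
db2 backup database SAMPLE online incremental delta to /db2backups
```

Restoring from a chain like this requires multiple images, which is exactly why testing a multi-image restore before relying on it matters.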
Does an Online Backup Hurt Database Performance?
What kind of impact an online database backup has on performance depends mainly on the system and on the other activity on that system. It is not unheard of for a backup to consume one core. Backups can parallelize and use more than one CPU, but only by table space. Backups are intense on I/O because they are reading and writing the entire database. All of this goes through memory buffers. If a system is seriously constrained in any of these areas, it will impact the running workload. Generally, you probably don't want to run an online database backup during your peak workload, but if the system is sized with a fair amount of overhead, users may notice no difference. There are ways to limit the amount of system resources a backup uses, using the UTIL_IMPACT_PRIORITY keyword on the BACKUP command and the UTIL_IMPACT_LIM database manager configuration parameter.
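The two throttling knobs just mentioned can be sketched like this, with SAMPLE as a placeholder database name:

```shell
# Cap all throttled utilities at roughly 10% impact on the workload
# (instance-wide database manager configuration parameter).
db2 update dbm cfg using UTIL_IMPACT_LIM 10

# Run this backup as a throttled utility; priority ranges 1-100, and
# the keyword only has an effect when UTIL_IMPACT_LIM is below 100.
db2 backup database SAMPLE online util_impact_priority 50
```

Note that throttling trades backup duration for workload impact – the backup will run longer.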
Utilities That Are Incompatible with Online Backup
Some utilities are not compatible with Online Backup. I generally say that the main ones are LOAD and REORG, but not every LOAD and REORG will conflict. Check out the IBM Knowledge Center page on utility compatibility with online backup to understand the details.
Things that Slow Backups Down
Anything that consumes a resource that backup needs can slow it down. This means that if backup is fighting for I/O, memory, or CPU resources, it may be slower. Backing up across a slow network connection is a sure way to get a long-running backup. High activity on the database being backed up will also make a database backup slower.
Compression slows backups down. If you have space for it, an uncompressed backup will run faster than a compressed one.
Encryption also slows backups down. An encrypted database backup will run slower than an unencrypted one.
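As a point of comparison, the uncompressed and compressed variants differ by a single keyword (names and paths here are placeholders):

```shell
# Uncompressed - faster to write, but larger on disk.
db2 backup database SAMPLE to /db2backups

# Compressed - smaller on disk, but more CPU and usually slower.
db2 backup database SAMPLE to /db2backups compress
```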
Limiting the impact of backup using the UTIL_IMPACT_PRIORITY keyword on the BACKUP command and the UTIL_IMPACT_LIM database manager configuration parameter will make backups run slower.
A utility heap size that is too small will also make backups run slower. The UTIL_HEAP_SZ database configuration parameter controls the size of the utility heap. It can be set to AUTOMATIC, which does not make it part of the memory that STMM tunes, but does mean that Db2 can allow the utility to overflow into database memory if a larger size is needed.
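Setting the utility heap to AUTOMATIC, per the paragraph above, is a one-line configuration change (SAMPLE is a placeholder):

```shell
# Let Db2 size the utility heap, overflowing into database memory
# when a backup or other utility needs more than the initial allocation.
db2 update db cfg for SAMPLE using UTIL_HEAP_SZ AUTOMATIC
```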
Every backup writes a BAR record to the Db2 diagnostic log. This contains detailed information that lets you pin down any backup speed problem to the read or the write side and figure out how to make backup faster. Full analysis of that is beyond the scope of this article. Dale McInnis wrote a wonderful article on interpreting BAR stats that is well worth the read if you want to learn more.