DB2 Basics: What Filesystems does my Database Actively Use?

Updated 11/22/2017: One-letter typo

If you only ever take the defaults, DB2 places all databases, data, and so forth on one filesystem – whatever filesystem is the home filesystem for the DB2 instance owner. If that is an isolated filesystem, this can be acceptable, but more often it is not, and this is a bad design. See DB2 Basics: Filesystems for DB2 on Unix and Linux Systems for more information on filesystems from that perspective.

This post is designed to help you explore an existing database to understand which directories and filesystems it uses. It is specific to Linux and UNIX, though some portions provide useful information on Windows as well.

Instance Home

As any user who has sourced the DB2 profile, you can find the instance home directory with:

$ echo $INSTHOME

On AIX, to find what filesystem that corresponds to, you can use:

$ df -g $INSTHOME | tail -1 | awk '{print $7}'

On Linux, the same command would be:

$ df -h $INSTHOME|tail -1|awk '{print $6}'
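If you want one command that behaves the same on both platforms, a sketch: `df -P` requests POSIX-format output, which keeps the mount point in the last column on both AIX and Linux. The `$HOME` fallback here is only so the example runs when the DB2 profile has not been sourced.

```shell
# df -P (POSIX output format) keeps the mount point in the last column
# on both AIX and Linux, so awk's $NF picks it up on either platform.
# The $HOME fallback is only for illustration when $INSTHOME is unset.
fs=$(df -P "${INSTHOME:-$HOME}" | tail -1 | awk '{print $NF}')
echo "$fs"
```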

Active Paths For Each Database

Beyond the instance home directory, there may be many paths associated with a single database. To see the paths associated with a database, you must connect to each database and run a query:

$ db2 connect to sample

   Database Connection Information

 Database server        = DB2/AIX64 10.5.7
 SQL authorization ID   = DB2INST1
 Local database alias   = SAMPLE

$ db2 "select dbpartitionnum
, substr(type,1,22) as type
, substr(path,1,80) as path 
from sysibmadm.dbpaths"

DBPARTITIONNUM TYPE                   PATH
-------------- ---------------------- --------------------------------------------------------------------------------
             0 LOGPATH                /db2fs/backups/NODE0000/LOGSTREAM0000/
             0 DB_STORAGE_PATH        /db2fs/temp/
             0 DB_STORAGE_PATH        /db2data4/
             0 DB_STORAGE_PATH        /db2data3/
             0 DB_STORAGE_PATH        /db2data2/
             0 DB_STORAGE_PATH        /db2data1/
             0 DB_STORAGE_PATH        /db2data4/
             0 DB_STORAGE_PATH        /db2data3/
             0 DB_STORAGE_PATH        /db2data2/
             0 DB_STORAGE_PATH        /db2data1/
             0 LOCAL_DB_DIRECTORY     /db2home/db2inst1/NODE0000/sqldbdir/
             0 DBPATH                 /db2home/db2inst1/NODE0000/SQL00001/
             0 DBPATH                 /db2home/db2inst1/NODE0000/SQL00001/MEMBER0000/

  13 record(s) selected.

Note that the above query truncates the data to make it more readable. You may need to increase the substr lengths in the query (22 and 80) to be sure you see the full values.

A full explanation of what each path type means is beyond the scope of this db2basics article.

To translate those paths to actual filesystems on AIX, this one-line sh/awk script works well:

$ db2 -x "select path from sysibmadm.dbpaths" | while read p; do df -g "$p" | tail -1 | awk '{print $7}'; done | sort -u

For Linux, the one-liner would be:

$ db2 -x "select path from sysibmadm.dbpaths" | while read p; do df -h "$p" | tail -1 | awk '{print $6}'; done | sort -u
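The same pipeline can be tried without a database connection by substituting hypothetical paths for the db2 output; this sketch uses `df -P` so it behaves the same on AIX and Linux:

```shell
# Same logic as the one-liners above, fed hypothetical paths instead of
# live db2 output. sort -u leaves one line per distinct filesystem.
printf '%s\n' /tmp /var/tmp /tmp | while read -r p; do
  df -P "$p" | tail -1 | awk '{print $NF}'
done | sort -u
```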

Other Paths to be Aware of

In addition to the active paths for a database and the instance home path, there are several other important paths to be aware of. Several of these relate to transaction logs. You can find them, if they’re defined, using:

$ db2 get db cfg for sample |grep PATH
 Changed path to log files                  (NEWLOGPATH) =
 Overflow log path                     (OVERFLOWLOGPATH) =
 Mirror log path                         (MIRRORLOGPATH) =
 Failover log archive path                (FAILARCHPATH) =
$ db2 get db cfg for sample |grep LOGARCHMETH |grep DISK
 First log archive method                 (LOGARCHMETH1) = DISK:/db2fs/arch_logs/

The location of the DIAGPATH is also important, and can be found using:

$ db2 get dbm cfg |grep DIAGPATH
 Diagnostic data directory path               (DIAGPATH) = /db2home/db2inst1/sqllib/db2dump/
 Current member resolved DIAGPATH                        = /db2home/db2inst1/sqllib/db2dump/
 Alternate diagnostic data directory path (ALT_DIAGPATH) =
 Current member resolved ALT_DIAGPATH                    =

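Once you have the resolved DIAGPATH, the same df trick maps it to a filesystem. A sketch with a hard-coded, hypothetical path; on a real system, substitute the value reported by `db2 get dbm cfg`:

```shell
# DIAGPATH is a hypothetical stand-in here; use the "Current member
# resolved DIAGPATH" value from `db2 get dbm cfg` on a real system.
DIAGPATH=/var/tmp
df -P "$DIAGPATH" | tail -1 | awk '{print $NF}'
```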
While not actively required by the database, the locations of recent backups can be important, particularly should the database need to be restored to another location. They can be obtained using:

$ db2 -x "select distinct(location) 
from sysibmadm.db_history 
where operation='B' 
with ur"

On AIX, this one-liner gathers the filesystems for those locations:

$ db2 -x "select distinct(location) from sysibmadm.db_history where operation='B' with ur" | while read p; do df -g "$p" | tail -1 | awk '{print $7}'; done | sort -u

On Linux, this should work:

$ db2 -x "select distinct(location) from sysibmadm.db_history where operation='B' with ur" | while read p; do df -h "$p" | tail -1 | awk '{print $6}'; done | sort -u

Finally, don’t forget to look for the locations of scripts executed from cron or other scheduling tools, along with their output. Staging locations where data files land before loading may be important, as may administrative scripts regularly executed against the database. These may be obvious from the crontab or from enterprise scheduling tools.
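As a starting point, a case-insensitive grep across the instance owner's crontab often surfaces these scripts. The crontab entries below are hypothetical; on a live system, pipe `crontab -l` into the same grep:

```shell
# Hypothetical crontab entries standing in for `crontab -l` output;
# grep -i catches db2 in paths, script names, and comments alike.
printf '%s\n' \
  '0 2 * * * /db2home/db2inst1/scripts/nightly_backup.sh' \
  '15 * * * * /usr/local/bin/check_os_disk.sh' \
  | grep -i 'db2'
```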

Ember Crooks

Ember is always curious and thrives on change. She has built internationally recognized expertise in IBM Db2, spent a year working with high-volume MySQL, and is now learning Snowflake. Ember shares both posts about her core skill sets and her journey learning Snowflake.

Ember lives in Denver and works from home.



  1. Sorry – appreciate I’m dragging up an old post here – I was wondering if you had an opinion on the best storage block size to configure for a filesystem for DB2. I am working on a storage performance issue with a client running Linux DB2 servers who has a 512KB block size configured. The default is 4KB I believe, so that seems high, but I can’t find the person who originally configured that to ask why it was chosen. The only other related information I have at the moment is that they told me DB2 commits writes from RAM cache in 4MB chunks. I’m curious if this is something you have any recommendations on?

    Thanks & great blog

    • The storage block size generally needs to be a multiple of the Db2 page sizes being used. Potential page sizes are 4K, 8K, 16K, or 32K. While a page is the smallest unit of data that Db2 works with, pages are often grouped into extents, and extents are the smallest amount of data Db2 can read from disk (while still working with individual pages in memory). 512K is on the large side for OLTP database servers. For OLTP database servers, we often want to read very small amounts of data – the prioritization is singleton-row accesses. Larger blocks may make more sense for Data Warehousing or Analytics systems, which tend to read larger chunks of data into memory.

      So the size of “chunks” Db2 is using depends heavily on the page sizes of the most active table spaces and the extent sizes for those table spaces. The storage block size should be a multiple of page size * extent size.
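To make the multiple-of rule concrete, a quick worked example with hypothetical values: a 16K page size and a 32-page extent give a 512K I/O unit, which lines up with the block size described above.

```shell
# page size (KB) * extent size (pages) = smallest on-disk read unit.
# 16K pages and 32-page extents are hypothetical example values.
page_kb=16
extent_pages=32
echo "$((page_kb * extent_pages))K"   # prints 512K
```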
