I have recently had the chance to do several installs on DB2 10.1 and migrate a couple of 9.7 environments to 10.1. I may be late to the party – oddly, IBM takes longer to certify WebSphere Commerce on new versions of DB2 than many external vendors do with other products. But I know there are still many out there who may be in the same boat or going through similar upgrades.
I’ve done a fair amount of research, and I’m not going to list every difference or change here, just a few that have stood out to me and had me thinking about my strategy.
Active Log Path
I don’t believe I was working with DB2 yet the last time the active log path format changed. I’m not talking about the default location here, just what DB2 appends to the path after whatever you’ve specified for NEWLOGPATH. So, for example, if I have it set like this:
db2 update db cfg for sample using NEWLOGPATH /db_logs/SAMPLE
and then recycle or activate the database, this is what I get:
$ db2 get db cfg for sample | grep "Path to log"
Path to log files = /db_logs/SAMPLE/NODE0000/LOGSTREAM0000/
The same actions back in DB2 9.7 or before would have gotten:
$ db2 get db cfg for sample | grep "Path to log"
Path to log files = /db_logs/SAMPLE/NODE0000/
Instead of just a node identifier being added to the path, I now also have a “LOGSTREAM” value. This change is actually there to support having PureScale and regular DB2 run from the same code base, and it makes sense in that context, but you may have scripts relying on this location, so you’ll want to be aware of the change.
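If you do have scripts that need the active log directory, one option that works on both versions is to pull the path out of the database configuration instead of hard-coding the NODE0000 or LOGSTREAM0000 pieces. A minimal shell sketch, assuming the instance environment is sourced and the English message catalog is in use (the grep string is language-dependent):

ACTIVE_LOG_PATH=$(db2 get db cfg for sample | grep "Path to log" | awk -F= '{print $2}' | tr -d ' ')
ls "$ACTIVE_LOG_PATH"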
Automatic Compression of Log Files
If you have a TSM or other device where your archived log files are sent and compressed, that is great. If, like many of my midrange clients, you do not, and logs are simply archived to a local disk, then you may just love archive log file compression. On 9.7 and earlier, I have a simple cron job that finds archived log files older than about three days and compresses them to save disk space. With version 10.1, DB2 can compress archived logs itself, which saves me the administration and also some space, since DB2 can uncompress the log files as it needs them without me in the middle. You can enable this feature using:
db2 update db cfg for dbname using LOGARCHCOMPR1 ON
There’s also LOGARCHCOMPR2, and these parameters correspond with LOGARCHMETH1 and LOGARCHMETH2. Obviously, if your target for either of these is a device that does compression or deduplication, you probably would not want to also enable compression at the DB2 level. But enabling this has become a part of my standard build process now.
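After setting it, I like to confirm the archive method and the compression setting in one shot. A quick sketch (it simply relies on the parameter names appearing in the db cfg output):

db2 get db cfg for dbname | grep -i "LOGARCH"

That should show LOGARCHMETH1/2 and LOGARCHCOMPR1/2 together, so you can see at a glance whether you would be compressing something a downstream device is going to compress or dedup again.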
Use of Storage Groups
I generally have a tablespace or two that I add to my WebSphere Commerce databases to handle some administrative things like event monitors and collecting performance data. Up until now, I’ve placed those in a DMS tablespace with a fixed size to limit the tablespace’s ability to fill up my filesystems. Back on DB2 9.7, I used:
create large tablespace DBA32K
  pagesize 32 K
  managed by database using (FILE '/db_data_adm/WC005D01/DBA32K.LRG' 2 G)
  autoresize no
  bufferpool BUFF32K
  no file system caching
  dropped table recovery on@
Now with DB2 10.1, I’ve changed that to:
create stogroup db_adm_stogrp on '/db_data_adm'@

create large tablespace DBA32K
  pagesize 32 K
  managed by automatic storage using stogroup db_adm_stogrp
  autoresize yes
  initialsize 100 M
  maxsize 2750 M
  bufferpool BUFF32K
  no file system caching
  dropped table recovery on@
DMS tablespaces are deprecated, and I’m working on getting with the times and properly using storage groups, even for my tiny little admin stuff. It probably doesn’t make a lot of difference at this point, but I like to play with solid new features.
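To sanity-check the result, you can look the storage group up in the catalog. A rough sketch, assuming the catalog objects look the way I remember them (SYSCAT.STOGROUPS and the SGID column on SYSCAT.TABLESPACES are both new in 10.1):

db2 "select varchar(sgname,20) as sgname from syscat.stogroups"

db2 "select varchar(t.tbspace,20) as tbspace, varchar(s.sgname,20) as stogroup from syscat.tablespaces t, syscat.stogroups s where t.sgid = s.sgid and t.tbspace = 'DBA32K'"

The first statement lists the storage groups defined in the database, and the second confirms which storage group the DBA32K tablespace ended up in.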
Default of Sampling for Detailed Statistics
In DB2 10.1, the default for detailed index statistics changes from a full read to sampling; other statistics defaults stay the same. I have chosen to let that sampling happen. I don’t think the difference will be significant for many of my clients, and the most critical workloads against my databases use parameter markers anyway, which prevents the optimizer from making use of distribution statistics. Most of my clients don’t need sampling in other areas, because they can tolerate the runtime of non-sampled statistics.
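If you want the old behavior for a particular table, the unsampled collection is still available; you just have to ask for it explicitly. A sketch (the table name here is only an example):

db2 "runstats on table wscomusr.orders with distribution and unsampled detailed indexes all"

Leaving off the UNSAMPLED keyword gets you the new sampled default, which is what I’m doing.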
Other/Miscellaneous
There are a few other things with the conversion from 9.7 to 10.1 that are small or that I’m still thinking about.
- In the upgrades that I have done, I have increased both LOGFILSIZ and LOGBUFSZ by about 15% to account for the additional logging in 10.1.
- I have plans to create an event monitor for change history – it sounds like a nifty new feature, and something I could use. However, the tables it writes to must be manually pruned, so I’m holding off until I have a script written (or altered) to do that.
- I’m also planning to create usage lists for my low-cardinality indexes, since I want to track when and how those are being used. But I need time to research them further and to properly prune their output. I’m also not sure I want them on all of my dev environments; they make more sense to me on staging and production. I’ve sketched out rough versions of both of these below.
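These are only sketches based on the documentation I’ve read so far, not something I’m running yet. The monitor and list names are made up, and I still need to verify the names of the tables the change history monitor generates before I write the pruning script.

-- change history event monitor, writing to tables and starting with the database
create event monitor db_change_hist for change history where event in (all) write to table autostart@

-- usage list on one low-cardinality index (index name is just an example), activated manually
create usage list ixu_orders_status for index wscomusr.i0000123@
set usage list ixu_orders_status state = active@

My understanding is that the usage list contents are then read through the MON_GET_INDEX_USAGE_LIST table function, which is where a collection/pruning script would pull from.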
So, friends and readers – any experiences with moving from 9.7 to 10.1 that you’d like to share?
Super, Ember! I came to know many new features in 10.1 from your experience through this post.
When you talk about a “just upgrade, no new features used” upgrade path like yours, I would add that I had to modify scripts that parse “db2audit extract” output; there is a new column inserted in the output. Truth is that db2audit is not widely used.
Thanks for the comment. I don’t frequently use db2audit, so I wouldn’t have noticed.
Thanks for all the effort you share with us.
I would like to know if there is a comparison table of the different DB2 versions and their added features and functionality.
That would be a very large table. With each new release of DB2, there are lists of features that are added, deprecated, changed, and discontinued. The latter three can be found on the What’s Changed page in the Knowledge Center, while the other can be found on the What’s New page.
These pages are different for each version, and there is no single source covering all versions. Also, IBM sometimes sneaks features into an earlier version than the one in which they officially become available.
I was just going through the 10.1 documentation and saw that user-defined permanent SMS and DMS tablespaces are deprecated and will be removed in a future release. Both types are still supported for catalog and temporary tablespaces, but I wonder why DMS is still supported there when SMS can handle catalog and temporary tablespaces flawlessly. Thanks for letting me know DMS is being deprecated; I only found out after reading this post, which led me to the documentation.
They’re giving us time to move off of them and get used to AST and storage groups – which I have a guest blogger writing about later this week.
Hi Ember, I installed fix pack 3 (10.1.0.3) and tried to run db2updv10 to upgrade my databases after running db2iupdt on the instance “db2inst1”, and this is the error I received. I’m unable to upgrade.
[db2inst1@localhost ~]$ db2level
DB21085I This instance or install (instance name, where applicable:
"db2inst1") uses "32" bits and DB2 code release "SQL10013" with level
identifier "0204010E".
Informational tokens are "DB2 v10.1.0.3", "s130918", "IP23515", and Fix Pack
"3".
Product is installed at "/opt/ibm/db2/V10.1".
=============================================================
[db2inst1@localhost bin]$ ./db2updv10 -d TEST -O
_________________________________________________________________________
_____ DB2 Service Tools _____
I B M
db2updv10
This tool is a service utility designed to update a DB2 Version 10.1
database to the current fix pack level.
_________________________________________________________________________
DB2 Universal Database Version 10.1, 5622-044 (c) Copyright IBM Corp. 2011
Licensed Material – Program Property of IBM
IBM DATABASE 2 Database update to current fix pack tool
MESSAGE: SYSCAT.ROUTINEAUTH had 0 rows successfully updated.
MESSAGE: SYSCAT.MODULEAUTH had 0 rows successfully updated.
MESSAGE: Error granting execute on SYSPROC.ADMIN_REVALIDATE_DB_OBJECTS_NEW to role SYSDEBUGPRIVATE, -204
MESSAGE: Rolling back work due to sqlcode: -204 on line 833.
db2updv10 processing failed for database 'TEST'.
I tried googling it, but could not find anything. Please advise.
Thanks, Chaitanya
I think you’ll have to open a PMR on that one.
Did you find the answer to your -204 error?
Chaitanya, I have the same issue while running db2updv10.
Did you get any response from the PMR?
Hi,
Run the UPGRADE DATABASE command to get the database to version 10.1; the db2updv10 command is used to update a database that is already at version 10.1 to the latest fix pack level.
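In other words, for a database coming from 9.7, the rough order (a sketch, assuming the database is cataloged locally as TEST and the instance has already been moved to the 10.1 copy) would be:

db2 UPGRADE DATABASE TEST
db2updv10 -d TEST

The second command only applies once the database is already at 10.1, for example after installing a later fix pack.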
I have heard that the page headers have changed in 10.1. Does that mean that the upgrade process must modify each existing page? I have a big concern about that based on the size of my database and the amount of time it might take, since I have only a short window to do the upgrade. Do you have any info on that?
Which page headers are you asking about? Data page headers? I want to be clear so I can ask some others and get a clear answer for you.
I know that the pages in tablespaces are different between regular and large tablespaces to allow larger tables, but that’s not a change that occurs automatically – you have to move the data yourself.