Well, I managed to have my first week since December with no post last week. I was literally doing something at the conference from 8 AM to 10 PM every day, and trying to fit regular work into the gaps.
Ah, but what a conference it was. So much great information, so many ideas and new connections. On a personal level, I got 4 new certifications:
- IBM Certified Database Administrator – DB2 10.1 for Linux, UNIX, and Windows
- IBM Certified Database Associate – DB2 10.1 Fundamentals
- IBM Certified Advanced Database Administrator – DB2 9.7 for Linux, UNIX, and Windows
- IBM Certified Database Administrator – DB2 9.7 for Linux, UNIX, and Windows
I was shocked to pass the 10.1 DBA test, seeing as I’ve never logged on to a 10.1 system – it was just released within the last couple of weeks.
Right now, it’s all a bunch of random thoughts, but I’m sharing them anyway. Here’s my braindump list:
- Interesting thoughts on multi-tenancy from the spotlight session. I’m wondering how backups, or the tricks we use for non-logged deletes, would work in the most-shared scenarios.
- Phil Gunning’s OLTP tuning tips session: he had a good query for top problem SQL with ranking (see the first sketch after this list). I want to investigate db2pd -tcbstats more to look for updates that didn’t change anything, for overflow reads, and for unused indexes. He did say that updates to the last-used timestamps are not immediate, and that one must watch them over time, since the values only cover the period since the last database activation. I also want to look more into mon_req_metrics.
- Alber Grankin’s presentation on best practices for web-driven OLTP workloads had some interesting points on working with WebSphere. I did not previously know you could configure a connection lifetime for connection pools, and I had not thought to use the port from the application snapshot to determine which JVM a particular connection comes from. His comments on purging and surging, and the parameters to deal with them, were interesting. He recommended a fixed limit for max_connections, which is an interesting approach. I was also interested in his comments on UOW_TOTAL_TIME and CONNECTION_IDLE_TIME, and how I could use them in conjunction with a FORCE APPLICATION threshold action to do something similar to what I do now with the DB2 governor to force off connections that have been idle for more than 4 hours (see the threshold sketch below). I also noted that if I did this, I’d want to run an event monitor to capture the SQL of connections that violated the threshold.
- I’m pretty sure I would learn something from Melanie Stopfer if she decided to present on left-handed leprechauns. Her session on advanced utilities had some good information. She had a slide that I must download on restores and on when the db cfg is overwritten and when it’s not, plus a link about multiple loads into one table that I need to check out. Also, 9.7 db2pd has an index option for reorgs (one-liner below).
- Dale McInnis spoke on backups. He convinced me that I need to review my util_heap_sz. I also want to test backup duration with multiple targets, even though I rarely have fully separate I/O paths – maybe I could improve backup performance (see the backup sketch below). DB2 has never backed up free extents/pages in DMS tablespaces (Oracle has a backup method that does), nor temporary tablespaces. We should aim to archive a log every 30-60 minutes – I need to review my systems for this.
- As in the past, I’d have to say my favorite sessions were the deep-dive internals sessions. Though not the complete revelation they were the first time I sat in on them, I still learn enormous amounts that are not easy to find elsewhere. My notes from this conference include that DB2 transaction logs include both undo and redo (something I haven’t been clear on since learning Oracle’s approach last year) – he had slides on exactly what information is stored when. Log replay is done in parallel. The information on PureScale was particularly interesting – the use of a group bufferpool and such. The wlm_dispatcher is estimated to take 3% overhead; the dispatcher controls the number of threads each workload sees as ready. SMS tablespaces are deprecated in DB2 10. On delete, page space is not reclaimed immediately, but is held until the transaction is older than either the oldest transaction accessing the table or the oldest modifying transaction in the database. VALUE COMPRESSION is a very limited kind of compression that can be used free of charge – it only compresses nulls and system defaults, and it doesn’t help indexes (see the sketch below). Currently committed requires extra overhead on update. There were also interesting concepts around sparse tables and insert time clustered tables.
- In Paul’s session on the package cache, there was some good info. If you get SQL0954 on the application heap even when you’ve got it at the maximum, try setting applheapsz to AUTOMATIC (one-liner below).
- Not sure where I heard this, but if you’ve got indexes that are constantly ascending, you may want to consider creating them with PAGE SPLIT HIGH (sketch below). Update: this was in the VSP for DBI – Scott Hayes. It was a good session for a VSP, and I’m sad a work emergency called me out of Scott’s later session about a client’s Black Friday experience, especially since that client was running WebSphere Commerce.
- HADR – I’m very interested in the multiple-standby feature. I also made a request that they incorporate clearing up odd TSA states into db2haicu, and I think I was heard, since Dale mentioned it again on Thursday afternoon in the panel. On upgrade, you have to upgrade your auxiliary standby(s) before your principal standby. You can limit how much log is spooled using hadr_spool_limit to deal with log spikes. There is also a parameter in 9.5 and above that allows you to block most non-logged actions (both shown in a sketch below).
- DB2 10 allows up to 256 storage groups (see the CREATE STOGROUP sketch below).
- From Mike Krafick’s ten-minute triage session – consider having a triage folder with scripts and tools specifically for troubleshooting an outage. db2nps can be used to list processes specifically for the current instance. The presentation is worth downloading for the signal list alone. This was a good presentation on methodology in a system-down situation, and also for some of the in-depth troubleshooting of actual bugs from Pavel.
- A session from Melanie Stopfer on optimizer profiles – profiles are cached in the dbheap, and they must be flushed for changes to take effect (one-liner below). Among other things, the dbheap is used to cache information about trusted contexts, audit, and optimizer profiles. “Left out is a lowly feeling” – a trick for remembering, when looking at explain plans, that the left/outer table in the join is not always on the left, but it is always the one with the lower operation number. There’s a good paper on developerWorks if you search for “Availability Topics”.
- From Calisto Zuzarte’s session on zigzag joins and jump scans – runstats can be faster after a reorg, especially if you haven’t reorged in a while (sketch below). I should consider doing a series of blog posts on each type of join.
- From Melanie Stopfer’s session on auditing: there was a major shift in auditing in 9.5 – it’s much better now. You can’t assign an audit policy to a view (sketch below). She gave a great description of trusted contexts; I know I’ve been missing those questions on the cert tests.
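Since several of those notes are really about specific commands, here are a few quick sketches. All of them are untested and from my notes and memory – verify the syntax against the Information Center for your version before using any of it.

For the top-problem-SQL idea from Phil’s session, this is my own sketch (not his query) using the MON_GET_PKG_CACHE_STMT table function available from DB2 9.7; the ranking column and the choice of TOTAL_CPU_TIME as the sort metric are just one option:

```sql
-- Rank dynamic SQL in the package cache by CPU consumed (9.7+)
SELECT RANK() OVER (ORDER BY TOTAL_CPU_TIME DESC) AS CPU_RANK,
       NUM_EXECUTIONS,
       TOTAL_CPU_TIME,
       ROWS_READ,
       SUBSTR(STMT_TEXT, 1, 100) AS STMT_START
FROM TABLE(MON_GET_PKG_CACHE_STMT('D', NULL, NULL, -2)) AS T
ORDER BY TOTAL_CPU_TIME DESC
FETCH FIRST 10 ROWS ONLY;
```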
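For forcing off long-idle connections without the governor, something like this WLM threshold is what I have in mind – a sketch only, since the FORCE APPLICATION action and the thresholds it applies to vary by DB2 version:

```sql
-- Force connections idle longer than 4 hours
-- (verify the action is supported for this threshold on your version)
CREATE THRESHOLD FORCE_IDLE_CONNECTIONS
  FOR DATABASE ACTIVITIES
  ENFORCEMENT DATABASE
  WHEN CONNECTIONIDLETIME > 4 HOURS
  FORCE APPLICATION;
```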
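The 9.7 db2pd index-reorg option Melanie mentioned (database name is a placeholder):

```sh
db2pd -db SAMPLE -reorgs index
```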
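A sketch of the multiple-target backup idea from Dale’s session – the paths are hypothetical, and COMPRESS is optional:

```sh
# Two targets means two output streams, which may speed up the backup
db2 "BACKUP DATABASE SAMPLE TO /backup/path1, /backup/path2 COMPRESS"
# And while reviewing backups, check the utility heap
db2 "GET DB CFG FOR SAMPLE" | grep -i UTIL_HEAP
```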
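VALUE COMPRESSION from the internals notes, on a hypothetical table – remember it only compresses nulls and system default values, and does nothing for indexes:

```sql
CREATE TABLE app.audit_log (
  id     INTEGER NOT NULL,
  notes  VARCHAR(500),
  qty    INTEGER COMPRESS SYSTEM DEFAULT  -- pairs with VALUE COMPRESSION
) VALUE COMPRESSION;

-- Or on an existing table; only rows written afterward are affected
ALTER TABLE app.audit_log ACTIVATE VALUE COMPRESSION;
```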
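Paul’s application-heap fix, as a one-liner (database name is a placeholder):

```sh
db2 "UPDATE DB CFG FOR SAMPLE USING APPLHEAPSZ AUTOMATIC"
```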
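The PAGE SPLIT HIGH clause for constantly ascending keys – the index and table here are made up:

```sql
-- New keys always land at the high end, so split pages unevenly
-- to keep them fuller
CREATE INDEX app.ix_orders_ts
  ON app.orders (order_ts)
  PAGE SPLIT HIGH;
```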
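The two HADR-related configuration parameters, with placeholder values – hadr_spool_limit is new in DB2 10.1 (units of 4K pages) and blocknonlogged is available from 9.5:

```sh
db2 "UPDATE DB CFG FOR SAMPLE USING HADR_SPOOL_LIMIT 25000"
db2 "UPDATE DB CFG FOR SAMPLE USING BLOCKNONLOGGED YES"
```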
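A DB2 10 storage group sketch with hypothetical paths and tablespace:

```sql
CREATE STOGROUP hot_sg ON '/fast/disk1', '/fast/disk2';
-- Automatic storage tablespaces can then be moved onto it
ALTER TABLESPACE my_ts USING STOGROUP hot_sg;
```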
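Flushing cached optimizer profiles so a change takes effect:

```sql
FLUSH OPTIMIZATION PROFILE CACHE ALL;
```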
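The reorg-before-runstats ordering from Calisto’s session, on a hypothetical table:

```sh
db2 "REORG TABLE app.orders"
db2 "RUNSTATS ON TABLE app.orders WITH DISTRIBUTION AND DETAILED INDEXES ALL"
```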
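And a 9.5-style audit sketch – the policy name and table are made up, and note the policy attaches to a table, not a view:

```sql
CREATE AUDIT POLICY exec_audit
  CATEGORIES EXECUTE STATUS BOTH
  ERROR TYPE AUDIT;

AUDIT TABLE app.orders USING POLICY exec_audit;
```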
I would love to see a best-practices session from an eCommerce or OLTP standpoint, but deeply technical. I remember various DB2 settings that were very sensitive to, say, a corresponding sister setting in WebSphere. Or “watch for this or that when you expect X amount of transactions.” Not a general tips-and-tricks session, but a deep dive.
Just my two cents though.
I’d love to do a deep dive into DB2 for WebSphere Commerce – but I worry about the appeal and how much audience there would be for that. I’ll have to think about how I could back it up a level to just address e-commerce with maybe some WebSphere tweaks.
Let IDUG sort out whether it is too narrow or not, and submit two abstracts if you are concerned. My presentation on “10 minute triage” was also aimed specifically at very new DBAs or those cross-training from another discipline, and I pulled 20 people when 4 LUW presentations were going on. Your name will pull more people too.
You can also apply the eCommerce tweaks to OLTP in general so it isn’t single-threaded. It applies to banks, commerce sites, etc.
Ember, thank you for sharing all that. I wish I was there too…
Thanks!