My First IBM IOD – 2013

I don't want to blog too much about the conference because I know that when my favorite bloggers talk about conferences I did not get to go to, it just feels like they are mocking me. I tweeted a lot instead (@ember_crooks), and am keeping the blog entries down to a minimum.

I am new to IBM Information on Demand. I've gone to the IDUG North American Tech Conference off and on for years, but this is my first time at IOD.

I must say, I had a blast. I work from home and travel maybe 10%, so I get very little professional interaction. I sit in my basement writing my blog, and it is hard to really feel the impact that blogging has and the number of you that actually read. At a conference, I find new friends, and nearly everyone already knows me from my blog. People that I idolize walk up and introduce themselves. Needless to say, I have an ego the size of the moon at this point, but it sure was a lot of fun.

My favorite parts of the week had to be Matt Huras signing my copy of the 10.5 flashbook this way:


And this was pretty awesome too:
That’s my picture on the big screen in front of up to thirteen thousand people several times each day.

So yeah, big ego. Double checking all commands to avoid making big-ego mistakes.

I am a copious note-taker. I loved taking notes at this conference on my iPad with a bluetooth keyboard. I took pictures of some of the important slides, and was able to easily tag my Evernote notes to indicate possible blog entry topics and such. So below is a rough review of some of the best technical content I saw. I know I don’t capture everything here, just the highlights of my favorite sessions. It’s a bit messy, but there’s good stuff in my review by day:

Day 1 ( Sunday )

The first day of the conference for me was Sunday. I flew in early and dedicated most of the day to working on certification tests. I took and passed three of them:

  1. Test 000-614 – 10.1 Advanced DBA for LUW – this test was about what I expected, and the one I had studied for the most. There are some areas it covers (and I feel it's getting broader and broader) that I don't do on an everyday basis.
  2. Test 000-311 – 10.5 DBA for LUW upgrade test – this was one of the harder ones I have taken. Since I'm usually stuck on older versions of DB2, I am usually certifying on things I haven't worked on. I studied for this one a bit, but didn't have the 10.5 flashbook, which I would have preferred, since only one chapter was available for free online. It also tested on some recent additions to 10.5 in Fixpack 1, like EHL for pureScale.
  3. Test 000-545 – 9.7 SQL Procedure developer – this one I took cold. It was a bit tough, and took time, but I didn't have too much trouble passing. There were a lot of code walkthroughs, and I was thanking my programming professor from college for having made me do so many of those that I was not intimidated. I would not have passed it if I hadn't written my own table functions and a few other things that are not always a daily part of the DBA's workload.

IOD also had a “social VIP” status; they invited some bloggers and tweeters to specific events, including an early get-together on Sunday evening with tours of special lounges and details on special seating areas in the general sessions and keynotes that had tables, power, and wired internet – all things nice when you’re tweeting and/or blogging.

There was an information management reception in the evening, where I spent most of my time chatting with @db2fred and @cristianmolaro and with other friends.


Day 2 ( Monday )

There were a really large number of general sessions, and Monday’s was packed. Literally a basketball arena full of twelve or thirteen thousand people. Anyone who arrived late was turned away because there were so few empty seats. I loved @jakeporway‘s opening speech – so inspiring. He made me want to be a “data scientist” or at least find some good cause to donate some technical skills to.

I also went to the IM Keynote session on Monday.

Two good sessions I went to on Monday:

IPT-2216A Taking Advantage of the Advanced SQL Features in IBM DB2

Speaker: Dan Luksetich, @DanL_Database
This guy has forgotten more SQL than I ever knew. His slides were pretty good, so I’d recommend downloading them if you can. One of his main premises was that when you count network time, one larger SQL statement is often more efficient than multiple smaller statements.
I’m already a fan of Common Table Expressions and some of DB2 SQL’s advanced features, and have used ranks extensively, but there was definitely stuff here that I haven’t done. I suspect I’ll be going back to this presentation the next time I have to do something recursive, because I always have trouble wrapping my head around recursion. Dan’s explanation of anti-joins was also excellent and something I’ll be revisiting. He introduced me to a way of doing running sums and running averages using the syntax “between unbounded preceding and current row” in the OLAP function’s parentheses.
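To make that windowing clause concrete, here is a small sketch using Python and SQLite as a stand-in engine – the `ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW` clause is the same one DB2's OLAP functions accept, and the table and column names are invented for the example:

```python
import sqlite3

# Tiny in-memory table of daily sales figures (made-up data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (day INTEGER, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [(1, 10), (2, 20), (3, 30), (4, 40)])

# For each row, aggregate over everything from the first row of the
# ordered window up to and including the current row -- a running
# sum and a running average.
rows = conn.execute("""
    SELECT day,
           amount,
           SUM(amount) OVER (ORDER BY day
                             ROWS BETWEEN UNBOUNDED PRECEDING
                                      AND CURRENT ROW) AS running_sum,
           AVG(amount) OVER (ORDER BY day
                             ROWS BETWEEN UNBOUNDED PRECEDING
                                      AND CURRENT ROW) AS running_avg
    FROM sales
    ORDER BY day
""").fetchall()

for day, amount, rsum, ravg in rows:
    print(day, amount, rsum, ravg)
```

The nice part is that a single pass over the table produces the running totals, instead of the self-join gymnastics you might otherwise reach for.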

IDB-3153A Agile IBM DB2 pureScale Deployment with DB2 10.5 on Intel Xeon processors and High Performance Fabric

Speakers: Jessica Rockwood (@jrockwood), Steve Rees, and Kshitij Doshi
While this was largely a review of how awesome pureScale is when you throw awesome hardware at it, there were some great pureScale details, which was one of the things I was specifically looking for at this conference. One of the big things I got out of it was that though there are still general hardware requirements, the very specific ones have been lifted as of not just DB2 10.5 Fixpack 1, but also as of DB2 10.1 Fixpack 2! There were also some details on Explicit Hierarchical Locking (EHL), which was on the 10.5 upgrade exam in at least one or two questions. Jessica is also working on some interesting things with WebSphere Commerce and I very much want to interact with her on them.

Monday night, I went to a small invitation-only dinner hosted by Scott Hayes and DBI Software (@dbisoftware). Had a great time talking geek with RJ, Naveen, Patricia, and Bob Proffit (@gokycat).

Day 3 ( Tuesday )

Day 3 started with another general session – these are fun to tweet at, but generally there were more than 50 tweets a minute with the #ibmiod hashtag, so I couldn’t fully keep up.

Again, I’ll pick out my two favorite sessions for the day and go into some detail…

IDB-1258B: IBM DB2 10.5 for Linux, UNIX and Windows versus Oracle Database 12c

Speaker: Chris Eaton
So I know some of this is the Blue propaganda, but since IBM WebSphere Commerce supports both Oracle and DB2, I like to be up on the differences and the good reasons to use DB2 over Oracle. A side note here – Chris Eaton has always been one of my favorite speakers, and man I loved his blog when he was posting highly technical stuff frequently a few years ago. A friend and I were chatting about how we miss the highly technical stuff from him in his newer, higher roles. Anyway,

  • Oracle’s Pluggable databases – which basically means multiple databases in one instance (which DB2 has had forever) – but get this – they share the same SGA!?! I’m just imagining if, in the few situations where I have more than one database per instance, they had to share a bufferpool. I think I would have a heart attack. Even odder, there is only one transaction log for all the databases, so on roll forward, many log entries would have to be looked at that did not apply.
  • Data Guard is at the instance level, unlike HADR, so you can’t just replicate one database while not doing others.
  • pureScale does centralized locking instead of the localized locking that Oracle does. The CF facilities in pureScale reduce communication between the members.
  • RAC patch sets still don’t allow rolling upgrades (individual patches do), while pureScale now allows rolling fixpacks.
  • Oracle’s compression is more expensive than DB2’s in terms of CPU.
  • Oracle is introducing masking and redaction, which DB2 already had in 10.1 with RCAC, and Oracle’s is more complex.
  • Oracle is also introducing bi-temporal features, again, already in DB2 10.1.
  • Oracle is promising an in-memory feature ‘in the future’, but it sounds like DBAs have to choose what tables should be in-memory, and the columnar representation is built each time at database start in addition to the row format, which remains on disk in addition to the in-memory only columnar representation.

Sure there’s some blue kool-aid there, but some interesting points as well.

IDB-1106A – IBM DB2 Business Continuity Features

Speaker: Dale McInnis
Dale is one of my favorite speakers. I was a bit worried that I’ve seen this session from him before, but actually, it had been updated, and also I found that I had changed since I last saw it. Dale has a way of just laying out the complex concepts around business continuity in such a logical way that they’re easy to understand.

  • RTO: duration of time and a service level within which a business process must be restored after a disaster or disruption in order to avoid unacceptable consequences
  • RPO: Acceptable amount of data loss measured in time – may change based on type of failure, and should always be 0 for a single component failure
  • Dale talked about the causes of system interruption and had good slides on where clients are on their plans for disaster recovery and high availability.
  • As I repeat often, Dale states that Disaster Recovery and High Availability are two different goals – Disaster Recovery often includes failing more than just the database server over, and includes a distance component.
  • Generally there’s a shift away from active/passive to active/active.
  • I like his slides that show the various levels:
    Local availability:

    • Bronze: Cluster failover with TSA (shared disk, also available with HACMP/Power-Ha, Linux-HA, Veritas, RHCS, etc) – failover is usually in minutes to tens of minutes
    • Silver: HADR (which they’re ridiculously calling active/active because of ROS, which I thoroughly disagree with) 20-30 second failover
    • Gold: pureScale

    Disaster Recovery:

    • Bronze: Log shipping or storage replication minutes or tens of minutes
    • Gold: Logical replication – active/active – no failover Qrep or CDC (advantage: great for upgrades, different reporting and oltp servers with different indexes)
    • Gold: HADR
    • Situational Platinum: GDPC with pureScale within 60 KM – call in the lab if you really have to do this, as it’s complicated
  • You can do reads on standby (ROS) for auxiliary standbys in multiple standby HADR.
  • You cannot yet drop a pureScale member online, even in 10.5 (so far)

Tuesday night, I had dinner with some friends at a hibachi place that was really yummy.

Day 4 ( Wednesday )

The day started with another general session – Serena Williams spoke – it was somewhat interesting, and more conversational and less scripted than some of the other days.

My favorite Wednesday session was the BLU Expert Panel (IDB-3796A) with speakers Christina Lee, Sam Lightstone, Michael Kwok, and John Park.
Other than the guy who asked the question “what is BLU” at the beginning (where have you been all conference/year to not know?), there were some great questions during the session. Things that stood out to me included:

  • how DB2 decides when stats are needed for Column-organized tables is not exposed
  • BLU uses automatic workload management – based on the number of CPUs, it puts an absolute limit on the number of queries executing at once to allow each query reasonable resources. However, if the number of cores is a floating number (on an LPAR or virtual environment), the experts did not know if this limit would also float.
  • Optimal sampling picks how much data to sample for runstats based on the table size
  • Replication will not work with a column organized table as the source, but will work as the target
  • column-organized table load and mass insert time is as fast or faster than row-organized
  • general processor to memory recommendation is 8 GB per core, but there is a whitepaper on best practices
  • BLU works well on VMWare – even with moving VMs – but sortheap should be set to a fixed value
  • In all versions of DB2, sortheap is misnamed. It’s really a query working area for things like hash joins and group-bys
  • Some BLU technology will likely be applied for OLTP purposes, but solidDB is currently the in-memory database for transactions
  • SSD is excellent for tempspace, keeping in mind you need high-quality SSD for that much activity
  • For BLU, power processors have a 30-40% advantage over intel processors

Wednesday night, there was a women’s reception that I visited briefly, and then I went on to a sparsely-attended IDUG meet-up where we had fun sitting around and talking about the healthcare website debacle.

Day 5 ( Thursday )

Man, writing all of this just reminds me of how much great stuff there was at the conference. But they saved the best for the last day. My two favorites were on Thursday.

IDB-1150A Database Administration Tips for DB2 10 for Linux, UNIX and Windows

(does that officially missing second comma drive any other grammar nazis nuts?)
Speaker: Melanie Stopfer @mstopfer1
It’s no secret that Melanie is one of my favorite people and favorite speakers. She packs so much into a session, and her presentations are full of great notes. I’ve already downloaded this presentation because it’s so awesome. Her presentation IS a book.
Things that struck me:

  • mon_get_transaction_log, mon_get_database, and mon_get_instance table functions – check them out
  • “get snapshot” is deprecated, and may be discontinued as soon as the next version
  • mon_get_auto_maint_queue if you’re doing automatic maintenance to see what’s scheduled
  • mon_get_rts_rqst if you’re doing real-time stats
  • admin_move table awesomeness
    • use to change physical things
    • good for compressing table online!
    • Phases:
      1. Init phase
        1. creates triggers
        2. create target and staging tables
      2. copy phase
      3. replay phase (from staging tables)
      4. swap phase
    • must have unique id on table
    • need two times the space for the table, plus space for transactions while move is in progress
    • there’s a Redbook on admin_move_table (I couldn’t find the link)
  • param DB2_HADR_BUF_SIZE defaults to 2X the log buffer, but can be increased – available at least back to 9.5
  • HADR_SPOOL_LIMIT – consider setting to avoid congestion issues during reorgs (I will, thank you!)
  • She had excellent slides on compression and history of it
  • Compression estimates should be done one table at a time – whole schema is likely to hang
  • Upgrade tip: update tablespace values to “INHERIT” for OVERHEAD and DISK READ RATE
  • Do all alter tablespace statements in one if you’re doing more than one to eliminate unnecessary work
  • use db2pd -tablespaces trackmodstate to determine if data in a tablespace has changed – can choose not to back up a tablespace that hasn’t changed
  • When creating a new range for a range-partitioned table, create it in the tablespace you want it to end up in
  • Make all indexes partitioned, and then use “require matching indexes” on table attach
  • reorg indexes … reclaim extents is critical for releasing freed index space in 10.1 – NO EQUIVALENT in older versions – space is never released to anything other than other indexes on the same table
  • Archive log compression does not require the storage optimization feature
  • Log path changed and the backup name changed with 10.1

There were a jillion other good things in this session. I couldn’t take notes fast enough.

IDB-322A: IBM DB2 Internals for Database Administrators

Speaker: Matt Huras
This was a double-length session. I LOVE these sessions, and was so disappointed when they weren’t at IDUG in Orlando. There are a number of similarities each time he does them, but oddly enough, I find out each time that I’ve changed and find different stuff is valuable to me. Earlier in my career, it was understanding the process/thread model and all that. This time it was page structures. I’m checking frequently, and will email him if the presentation isn’t up in a week, because there simply is no other place to get a lot of this information. The Info Center gives you a few bits and pieces, but man, there were so many good things on page structure in this presentation. I understand index compression so much better now, and value compression too. He also articulated something about alternate page cleaning – it’s good for OLTP, but there are issues with it and block-based bufferpools, so it’s not such a good fit for DW/DSS/BI.


Geesh, I’m almost as tired after writing this monster of a blog entry as I was after a day of the conference. So much good technical stuff – and this is by no means all of my notes.

Overall, I found the conference a bit large. I felt lost at times, and I feel like I know a fair number of people now – I can only imagine how lost I would have felt if I were a newbie DBA here. I like the intimacy of IDUG better. But there was great value at this conference, too. I’m glad I went and it was sure worth it to me.

Just like IDUG there were some technical sessions (not mentioned here) that were real lemons and some that were pure sales pitch.

It was much better organized overall than the IDUGs I have been to – I liked having breakfast and there was always someone to ask if I was physically lost.

My heart was won by the free Diet Coke (other sodas too) available to IBM Champions and Gold Consultants in a dedicated lounge. This is my first year with such perks, and anyone who wants to keep me happy should supply copious amounts of Diet Coke, especially when the hotel shops only had Diet Pepsi and charged $3.25 per bottle! It was my first conference of any type since winning the IBM Champion title, and it was a fun status bump.

I would love to hear my readers’ comments and thoughts about the conference, too.

(anyone whose twitter handles or linked in profiles I failed to link to here, send them to me – I have probably spent 30 full minutes looking up social links for this article, and cannot find everything)

Ember Crooks

Ember is always curious and thrives on change. She has built internationally recognized expertise in IBM Db2, spent a year working with high-volume MySQL, and is now learning Snowflake. Ember shares both posts about her core skill sets and her journey learning Snowflake.

Ember lives in Denver and works from home.



  1. Hi Ember,

    That’s a great summary of (your part of) IOD! While reading your lines, I felt sorry I opted for IDUG EU instead of going to Las Vegas too. But on the other hand, we also got our German RUG started in Barcelona…

    I’m looking forward to meeting you again in May in Phoenix… 😉


    • I really want to get to IDUG EU sometime. It’s just harder to sell to management on the travel expenses, even if I speak. Look forward to seeing you in Phoenix!

