Today we are going to talk about some random DB2 features that can’t stand as a blog post of their own, but are worth discussing nonetheless. These are tidbits I discovered during “DB2’s Got Talent” presentations, IDUG conferences, or “Hey, look what I discovered” moments.
CPUSPEED (Database Manager Configuration)
You blow past this setting every time you execute “db2 get dbm cfg”. It sits at the very top of the output and is one of the more important settings that gets overlooked. The value is set after DB2 examines the CPU and determines how fast instructions are processed (millisec/instruction).
The optimizer is greatly influenced by this setting. CPUSPEED is set automatically at instance creation and is often never examined again. The setting stays static unless you ask DB2 to re-examine the CPU.
So, why do we care? Well there could be a few reasons.
- An additional CPU was added to or removed from an LPAR.
- An image of your old server was taken and placed on a new, faster server, with a different type or number of processors.
If for some reason your CPU configuration changed, the new processing speed is not taken into account until CPUSPEED is re-evaluated. So go ahead and add that additional CPU to handle your Black Friday workload; it won’t help much unless DB2 knows to take it into account.
To re-evaluate:
db2 "update dbm cfg using CPUSPEED -1"
Once done, you should see a new CPUSPEED displayed for your DBM configuration.
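A quick way to see what DB2 came up with (assuming a Linux/UNIX shell; use findstr on Windows):
db2 get dbm cfg | grep -i "CPU speed"
The line returned should look something like “CPU speed (millisec/instruction) (CPUSPEED) = 4.000000e-05”, although the actual value will vary with your hardware.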
If you are comparing apples to apples, you want to see the number get smaller to show a speed increase. If you are changing architecture (P6 to P7, for example) the number could theoretically go up. Put that in context, though: the higher number could be what DB2 needs to account for multi-threading or some other hardware change, so it may look worse when it really isn’t.
Once set, DB2 will use the new number to determine proper access paths. So make sure to issue a REBIND afterwards so your SQL can take advantage of the speed increase.
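For example, a blanket rebind of every package in the database can be done with the db2rbind utility (the database name and log file here are just placeholders):
db2rbind warehouse -l rebind.log all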
(Special thanks to Robert Goethel who introduced this topic during the DB2 Night Show competition this year).
RESTRICTIVE (CREATE DATABASE … RESTRICTIVE)
I picked this up in Roger Sanders’ DB2 Crammer Course at IDUG this year. I had just spent the past two months auditing our database authorities and was frustrated with the amount of PUBLIC access. I had even created a separate SQL script to run after new database creation to revoke some of the default PUBLIC authority.
Apparently I reinvented the wheel. If you use the RESTRICTIVE clause in the CREATE DATABASE command no privileges or authorities will automatically be granted to PUBLIC.
For example:
db2 "create database warehouse on /data1 dbpath on /home/db2inst1 restrictive"
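Keep in mind that with a restrictive database you now hand out even the basics yourself. A minimal sketch, assuming a hypothetical user named appuser:
db2 connect to warehouse
db2 "grant connect on database to user appuser"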
DB2_WORKLOAD (Aggregate Registry Variable)
This db2set parameter has been available for a while, but a new option (ANALYTICS) became available with v10.5. Essentially, DB2_WORKLOAD presets a group of registry variables for your needs. Set it once and go; no need to look up various configurations or develop scripts. This is valuable for various application configurations such as BLU or Cognos.
To activate:
db2set DB2_WORKLOAD=<option>
Option | Workload
------ | --------
1C | 1C Applications
ANALYTICS | Analytics workload
CM | IBM Content Manager
COGNOS_CS | Cognos Content Server
FILENET_CM | FileNet Content Manager
INFOR_ERP_LN | Infor ERP Baan
MAXIMO | Maximo
MDM | Master Data Management
SAP | SAP environment
TPM | Tivoli Provisioning Manager
WAS | WebSphere Application Server
WC | WebSphere Commerce
WP | WebSphere Portal
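For example, on a v10.5 instance destined for BLU you would set the analytics option before creating your database, then verify the registry variable took:
db2set DB2_WORKLOAD=ANALYTICS
db2set -all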
If you are a WebSphere Commerce nerd like Ember, make sure to read her blog on DB2_WORKLOAD and how it can be used for WebSphere Commerce.
Michael Krafick is an occasional contributor to db2commerce.com. He has been a production support DBA for over 12 years in data warehousing and highly transactional OLTP environments. He was acknowledged as a top ten session speaker for “10 Minute Triage” at the 2012 IDUG Technical Conference. Michael also has extensive experience in setting up monitoring configurations for DB2 Databases as well as preparing for high availability failover, backup, and recovery. He can be reached at “Michael.Krafick (at) icloud (dot) com”. Linked-in Profile: http://www.linkedin.com/in/michaelkrafick. Twitter: mkrafick
Mike’s blog posts include:
10 Minute Triage: Assessing Problems Quickly (Part I)
10 Minute Triage: Assessing Problems Quickly (Part II)
Now, now you two play nice … DB2 and HACMP failover
Technical Conference – It’s a skill builder, not a trip to Vegas.
Why won’t you just die?! (Cleaning DB2 Process in Memory)
Attack of the Blob: Blobs in a Transaction Processing Environment
Automatic Storage Tablespaces (AST): Compare and Contrast to DMS
DB2 v10.1 Column Masking
Automatic Storage (AST) and DMS
Relocating the Instance Home Directory
Informational Constraints: Benefits and Drawbacks
RESTRICTIVE is cool. But there are some caveats with it. You may find that some tasks you used to be able to do, you can no longer do, because PUBLIC had underlying privileges you didn’t even know about. For example, you can’t just grant CONNECT and SELECT to give SELECT access. You also need to grant USAGE on SYSDEFAULTUSERWORKLOAD, because everything (other than admin requests) routes through this workload. PUBLIC has access to that workload by default. Make it RESTRICTIVE and now no one has access to that workload; it must be granted.
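For example, something like this (the user name is just a placeholder):
db2 "grant usage on workload sysdefaultuserworkload to user appuser"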
Other things you may need to grant access to are the DB2SECURITYLABEL and SQLSTATE functions. You may also need to hunt and peck for CLP and CLI packages and grant them (remember those change with every release and possibly every fix pack…).
As you can see, this gets really sticky quick. The easiest way to handle it is to use Role Based Access Control (RBAC). Now you can easily dole out those privileges to those who need it, while locking out everyone else. (I’ll add in another infomercial here about using Trusted Contexts to lock down IDs to specific IPs or hosts. Very handy.)
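A rough sketch of the RBAC approach (role, table, and user names here are made up):
db2 "create role reporting_role"
db2 "grant select on table app.orders to role reporting_role"
db2 "grant role reporting_role to user appuser"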
Just my two cents. 🙂
Chris,
I’ve been using more Role Based Access Control over the past few months, as opposed to granting access to AIX groups. I find it gets around all the “gotchas” of AIX groups.
Do you find the same thing? Do you prefer role based groups?
-Mike
Group access is horrid. I am realizing that now, as that is how we have set things up. It just allows too many oopses in security. I am planning on cutting over to RBAC to have tighter control over who can see and do what. Plus, the benefit of moving to roles is I can start to use Trusted Contexts (as I noted above) to lock down IDs to specific systems. This is especially good for service accounts. Set up your Trusted Context for a particular ID and give it the roles you want *ONLY* within that Trusted Context. If that ID tries to connect outside of that IP or host, it can’t connect (also assuming you have either revoked CONNECT from PUBLIC or have a RESTRICTIVE database). *Very handy*. I like this a LOT and am trying to move that way. Plus it also makes it easier to start using Row and Column Access Control (RCAC).
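Something along these lines, where the context name, service ID, address, and role are purely illustrative:
db2 "create trusted context svc_ctx based upon connection using system authid svcacct attributes (address '192.0.2.10') default role reporting_role enable"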
Hi Mark.
I’ve noticed the value calculated depends on how busy the server is. I’m guessing that’s something to do with our platform… higher values are returned when the LPAR is busier.
We run on IBM 795 frames with LPARs using CPU virtualization. My guess is the busier the LPAR, the more CPU resource it’s allowed. Whatever the calculation does, it must ‘see’ more CPUs.
Begs the question: if your workload varies during the day, affecting the CPU resources required, is it better to recalc at the busiest time, the quietest time, or somewhere in the middle?
Regards
Ken.
Michael, not Mark! Sorry!
Ken,
You discovered something I didn’t, and that makes a lot of sense. I couldn’t understand why, if I ran the command to re-evaluate CPUSPEED 10x, I could get 10 different (sometimes greatly different) answers. Server busyness would affect the calculated speed.
As for when is the best time to do this, good question. A second e-mail discussion has spun off about this and I may be going to the labs to ask a clarifying question – I will bring this to their attention as well for a “rule of thumb”. I’ll let you know what they say.
Thanks for the feedback.
Hi Michael.
I had to use ‘db2 update dbm cfg using CPUSPEED -1’ to change the value.
The article suggests ‘db2set CPUSPEED -1’ but I had no joy with that.
Regards
Ken.
Yep, embarrassingly enough I had put the wrong command. I edited the article to correct it. Thanks for the update.
Good info on some params that can easily be missed!
I’ve found that the best way to set CPUSPEED is by having a good benchmark test. Remember, it’s not telling the CPU how fast to run an instruction, it’s telling the optimizer how fast it thinks an instruction will run. I don’t necessarily need the most accurate setting to match the CPU speed; I really want the best set of plans to get my workload done the fastest while balancing that with the least amount of resources. I set CPUSPEED in a similar way to AVG_APPLS. I’ve rarely found I need to set AVG_APPLS greater than 2, even though I’ve had workloads with many more concurrent apps than that. For CPUSPEED, sometimes I use the default value by setting it to -1, sometimes I set it manually. It depends on the results of my testing.
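For reference, both forms work from the command line; the explicit value below is only an example and would come out of your own testing:
db2 "update dbm cfg using CPUSPEED -1"
db2 "update dbm cfg using CPUSPEED 4.0e-05"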
Excellent advice. So my question would be: what are you looking at when you benchmark? CPU strain, number of transactions processed, etc.? I think we all have our favorites to look at, but what are the one or two you focus on for this tuning specifically?
I’m always looking for new benchmarks.
For me this is usually a once-and-done kind of parameter, except for the scenarios listed above in this blog post. I normally don’t try tuning this one. But when I do, I’m looking at query runtimes and queries per second, combined with changes in explain plans – pretty straightforward. I’m looking for the setting that gives the best throughput without causing any issues.
Not very exciting I know, but that’s the key to this one for me.
I had some wicked problems with memory config such that 9.7 was crashing on Windows, so I ran a db2 update dbm cfg reset, but now my CPUSPEED is quite different.
What does that reset do to CPUSPEED? Should I set it to -1?
Interesting. From the Knowledge Center, we know that “update dbm cfg reset” will “set documented default configuration values”. For CPUSPEED, the default is -1, which means “reassess and set”. You essentially ran CPUSPEED -1.
So my question is:
Did you have a baseline of how queries ran BEFORE the reset and how are they running now?
I started to hyperfocus on CPUSPEED after I wrote that article and found I could run it 10x in 10 minutes and get a different result each time. It seemed to be fickle and a little too “willy nilly” (per my manager’s comments) for me to put in place and truly test. (I worked for a fairly conservative company.)
My recommendation is this:
1. If you are faster, keep it.
2. If you don’t know or are nervous, try to set the value back to its original setting. But leaving it where it is should only help.
Sorry for the ambiguous answer, but it really depends on testing against a baseline. However, I have only heard good things about issuing a -1.