Scripting for DB2 LUW – Best Practices for Perl and Other Languages – Part 2

This is my own set of basic scripting standards and practices. I’d love to hear criticisms, suggestions, and comments from readers on these. Some of these will be basic “duh” moments for a developer, but for a DBA from a different background, they may be more enlightening. This ended up being so long that I’ve split it into three parts.

Part 1 was published earlier in the week. This is Part 2.

Have a Configuration File

There’s not much more frustrating than setting up a new server and having to scan through the entire text of a script to find the variables you need to change in order to get the script to run on a different setup. And then going back and changing things when it didn’t work. And possibly accidentally changing something that shouldn’t have been changed.

The obvious solution is to pull any parameters that could/should/would change for a different server out into a configuration file. My preference is for that file to also be a Perl script, with the variables to set at the beginning (with lots of comments to explain them) and basic error checking for those variables at the end. In Perl, that script can then be sourced like this:

require "db2.get_snaps.cfg.pl";

It is also a valid option to have the config file simply be a text file in XML or some proprietary text format, and then have your script parse it to set variables, with error checking after the parsing is done.
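To illustrate the Perl-config approach, here’s a minimal sketch of what a file like db2.get_snaps.cfg.pl might contain. The variable names are hypothetical, but the shape matches the description above: heavily commented variables first, basic error checking last.

# db2.get_snaps.cfg.pl -- a minimal sketch of a per-script config file.
# Variable names here are hypothetical; adjust to your own scripts.

# Databases this script should operate on
our @databases = ( 'SAMPLE', 'PRODDB' );

# Directory where snapshot output is written
our $db_arch_dir = '/db2home/db2inst1/snaps';

# Who gets notification emails from this script
our $email_list = 'dba-team@example.com';

# Basic error checking at the end of the file
die "No databases specified in config file\n" unless @databases;
die "Snapshot directory $db_arch_dir does not exist\n"
    unless -d $db_arch_dir;

1;    # a file pulled in with require must return a true value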

Global vs. Per-Script Configuration Files

This is a debate I re-visit from time to time. I prefer to have a separate configuration file for each script. This allows me fine-grained control on what happens to what databases, where output goes, who gets emails, and things like this. It also allows me to have most of my scripts execute on all databases, but a few execute for just one database at a time.

The main disadvantage I see in an instance-wide configuration file is that you then have to specify in each and every script where to find that global configuration file. Even if it’s a relative path, that can be problematic in my experience. The advantage of a global configuration file is that if you add a database and want to include it in your maintenance, or need to add or remove people from the list getting emails from a script, there’s only one place to change it.

You’ve probably noticed that I have very strong opinions on most things, but this is truly one where I’m more on the fence. I suppose I should give the global config file a try sometime to really get a feel for it. Maybe a two-tiered approach would work, where values are set in a global configuration file but can be overridden by per-script configuration values.
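As a rough sketch of that two-tiered idea (the file names and the %cfg hash here are hypothetical, not something from any of my scripts), the main script could load the global file first and let the per-script file override selected values:

# Load instance-wide defaults, then per-script overrides.
our %cfg;
require '/db2home/db2inst1/scripts/global.cfg.pl';   # sets defaults in %cfg
require 'db2.get_snaps.cfg.pl';                      # overrides selected keys

# Error checking once both tiers have been applied
die "No email recipients configured\n" unless $cfg{email_list};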

Allow Specification of Parameters on the Command Line

So whatever your config file situation, there are some values you should allow users to specify on the command line to override the values in the config file. The most obvious example in my scripts is the database name. There are times when I want to run runstats on just one specific database rather than all of them on a multi-database system, so my scripts have a --db option for this. I also have a --noemail option on some of my scripts so I can run them at the command line while I’m working on something without blast-emailing my team with expected failures. Some scripts require command line options: my runstats/reorg script, for example, can be run for just runstats or for both runstats and reorgs, and I’ve set it up so that you have to tell it which one you expect it to do. What options you allow or require depends heavily on the purpose of the script.
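Here’s a hedged sketch of how those overrides might look using the core Getopt::Long module. The option names match the ones mentioned above, but the variable names are illustrative, and I’m assuming @databases and $email_list were already set by a config file:

use strict;
use warnings;
use Getopt::Long;

our ( @databases, $email_list );    # assumed to come from the config file

my $db      = '';    # empty means "all databases from the config file"
my $noemail = 0;

GetOptions(
    'db=s'    => \$db,
    'noemail' => \$noemail,
) or die "Invalid options; try --help for usage\n";

@databases  = ($db) if $db;         # narrow the run to one database
$email_list = ''    if $noemail;    # suppress notification emails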

Have Options for Usage and Version

Whatever other command line options you have, you should always include a standard way to get the version and a standard way to get the usage diagram. I use -v and -h, or --version and --help. You don’t expect to be the only person who ever uses your scripts, do you? I use the version option all the time when I’m reviewing a server to make sure I have the most recent versions of my scripts on it. I even use the usage diagrams from time to time if I’m executing something and want to remember how to do a certain thing.
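A minimal sketch of such handling with Getopt::Long might look like this; the script name and version string are made up for the example:

use strict;
use warnings;
use Getopt::Long;

my $VERSION = '2.3';    # hypothetical version string

sub usage {
    print <<'END';
Usage: db2.get_snaps.pl [--db <dbname>] [--noemail]
       db2.get_snaps.pl --help | --version
END
    exit 0;
}

GetOptions(
    'h|help'    => \&usage,
    'v|version' => sub { print "db2.get_snaps.pl version $VERSION\n"; exit 0; },
) or die "Invalid options; try --help for usage\n";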

Check All Commands for Failure

If I had to list one pet peeve about the scripts DBAs write, this would be it. Many scripts focus on just getting the commands executed and don’t stop to make sure one command was successful before moving on to the next. Many times you can get away with this, but what if you were keeping only one backup image on disk, your backup command failed, and your script went right on and deleted that only backup image? Not ideal.

There are actually two levels you have to be concerned with here. The first is the system level. This is relatively easy to handle, using just “or die” or “or warn” at the end of statements and specifying meaningful text. Here’s an example of that:

mkdir $db_arch_dir, 0755 or warn "Cannot make snapshot ".
    "directory: $db_arch_dir: $!\n";

But there’s another level beyond that. Assuming you’re not using the Perl DBI and you execute a DB2 command using a system or backtick-type call, Perl will see the command as successful as long as it was able to find “db2” and pass the command to it, even if you get an SQL error message and the command fails entirely from the DB2 perspective. So for this kind of command, you have to do BOTH the error checking above AND parse through the output to see if it was successful.
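Here’s a sketch of what that two-level checking might look like for a backup command. The variable values and the exact patterns parsed are assumptions for illustration, not a definitive recipe:

# Hypothetical values; in a real script these come from the config file
my ( $db, $backup_dir ) = ( 'SAMPLE', '/db2backups' );

# Backticks capture the output so it can be parsed afterward; $? only
# tells you whether the db2 executable itself could be run.
my $output = `db2 "backup database $db to $backup_dir"`;
warn "Could not run the db2 command: $!\n" if $? == -1;

# Second level: db2 ran, but did the command itself succeed? A failure
# shows up as an SQL message code (e.g. SQL2061N) in the output.
if ( $output =~ /^SQL\d+[NWC]/m ) {
    die "Backup of $db failed:\n$output";
}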

I use several subroutines to parse through the output, and there are several methods you can use. You can look for the successful completion message and warn/die if you don’t get it. Or you can look for an error (almost anything where the line starts with ‘SQL’) and warn/die if you do. For some commands you are looking for a specific SQL code as a response, so you can look for that code and warn/die if you don’t get it. I even have one script that parses out the error code and takes different actions depending on what the SQL code is.
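As an illustration, a parsing subroutine along those lines might look something like this; the name and interface are mine, made up for the example, not from any standard library:

# Pass in the captured output and, optionally, an SQL code that counts
# as success for this particular command.
sub check_db2_output {
    my ( $output, $expected_code ) = @_;

    if ( defined $expected_code ) {
        # Some commands legitimately complete with a specific code
        return 1 if $output =~ /\Q$expected_code\E/;
        warn "Expected $expected_code but got:\n$output";
        return 0;
    }

    # Otherwise treat any line starting with an SQL message code as failure
    if ( $output =~ /^SQL\d+[NWC]/m ) {
        warn "DB2 reported an error:\n$output";
        return 0;
    }
    return 1;
}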

There’s also the matter of tracking errors. I don’t want my whole runstats script to die if one table fails, but I do keep a counter, so at the bottom of my output I report how many errors were encountered. That way I can take a quick look with tail to see whether there were any errors, and dig deeper to investigate if there were.
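A rough sketch of that counting approach, assuming a @tables list built earlier in the script:

my $error_count = 0;

foreach my $table (@tables) {
    my $output = `db2 "runstats on table $table with distribution and detailed indexes all"`;
    if ( $output =~ /^SQL\d+[NWC]/m ) {
        warn "Runstats failed for $table:\n$output";
        $error_count++;    # keep going; just remember it failed
    }
}

# The summary line at the end makes a quick check with tail meaningful
print "Runstats complete. Errors encountered: $error_count\n";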

Error checking and appropriate action should be done for EVERY command you run.


The final entry in this series covers Perl specifics and should be posted next week.

Ember Crooks

Ember is always curious and thrives on change. She has built internationally recognized expertise in IBM Db2, spent a year working with high-volume MySQL, and is now learning Snowflake. Ember shares both posts about her core skill sets and her journey learning Snowflake.

Ember lives in Denver and works from home.

