This is part two of a three part series. The full series includes:
DB2 Workload Manager (WLM) as a Monitoring Solution – Understanding WLM (Part 1 of 3)
DB2 Workload Manager (WLM) as a Monitoring Solution – How to Set up WLM (Part 2 of 3)
DB2 Workload Manager (WLM) as a Monitoring Solution – Analyzing WLM Information (Part 3 of 3)
The previous article, DB2 Workload Manager (WLM) as a Monitoring Solution – Understanding WLM (Part 1 of 3), defined what the DB2 Workload Manager is and described the various database objects that make up the DB2 Workload Management ecosystem.
This article walks through the steps needed to set up a simple monitoring strategy.
First, define a monitoring strategy – what do you want to monitor and why?
Second, create a tablespace that will hold the monitoring data.
Third, create event monitors that will capture activity data and aggregate statistics for the database activities of interest. These are special write-to-table event monitors associated with WLM.
Finally, implement the monitoring strategy by defining a service superclass, service subclasses, and workloads around it, and turn on the 'switch' that starts collecting monitoring data.
A flowchart of the process:
Detailed Plan for Implementation
- Define Monitoring Strategy
In this example, we have a reporting application that runs reports daily. Other applications also interact with the same database, but we want to target our monitoring efforts at the reporting application alone.
The reporting application runs from two hosts whose IP addresses are 188.8.131.52 and 184.108.40.206. We shall create a service superclass called REPORTING and a service subclass called SLARPTSC. We shall then create a workload called SLARPTWL that maps all activities coming from these two hosts to the SLARPTSC service subclass under the REPORTING superclass.
- Creating the Monitoring tablespace
In our exercise we are using a single-partition database, and we will create the tablespace as automatic storage. The DDL that can be used:
CREATE LARGE TABLESPACE "TBLSP_WLM"
  IN DATABASE PARTITION GROUP IBMDEFAULTGROUP
  PAGESIZE 4096
  MANAGED BY AUTOMATIC STORAGE
  USING STOGROUP "IBMSTOGROUP"
  AUTORESIZE YES
  INITIALSIZE 1 M
  INCREASESIZE 20 M
  MAXSIZE 5 G
  EXTENTSIZE 16
  PREFETCHSIZE AUTOMATIC
  BUFFERPOOL "BP4K"
  DATA TAG INHERIT
  OVERHEAD 7.500000
  TRANSFERRATE 0.060000
  NO FILE SYSTEM CACHING
  DROPPED TABLE RECOVERY ON;
If you are creating a WLM solution for a DB2 DPF environment, make sure that the monitoring tablespace is spread across all database partitions that exist in the database.
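For a DPF environment, one way to do this is to define the tablespace against a database partition group that spans all partitions. A minimal sketch follows; the partition group name PG_ALL is hypothetical, and the sizing clauses from the DDL above are omitted for brevity:

```sql
-- Hypothetical DPF setup: PG_ALL spans every database partition,
-- so tables created in TBLSP_WLM are spread across all of them.
CREATE DATABASE PARTITION GROUP PG_ALL ON ALL DBPARTITIONNUMS;

CREATE LARGE TABLESPACE TBLSP_WLM
  IN DATABASE PARTITION GROUP PG_ALL
  PAGESIZE 4096
  MANAGED BY AUTOMATIC STORAGE
  BUFFERPOOL BP4K;
```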
- Create the WLM event monitors
The WLM event monitors are special write to table event monitors that capture monitoring data and load them into physical tables. The activity event monitor captures information on specific database activities while the statistics event monitor captures aggregate performance data.
The DDL we used is below.
-- Define activity event monitor as DB2ACTIVITIES
CREATE EVENT MONITOR DB2ACTIVITIES FOR ACTIVITIES
  WRITE TO TABLE
    ACTIVITY (TABLE ACTIVITY_DB2ACTIVITIES IN TBLSP_WLM PCTDEACTIVATE 100),
    ACTIVITYSTMT (TABLE ACTIVITYSTMT_DB2ACTIVITIES IN TBLSP_WLM PCTDEACTIVATE 100),
    ACTIVITYVALS (TABLE ACTIVITYVALS_DB2ACTIVITIES IN TBLSP_WLM PCTDEACTIVATE 100),
    CONTROL (TABLE CONTROL_DB2ACTIVITIES IN TBLSP_WLM PCTDEACTIVATE 100)
  AUTOSTART;

-- Define statistics event monitor as DB2STATISTICS
CREATE EVENT MONITOR DB2STATISTICS FOR STATISTICS
  WRITE TO TABLE
    SCSTATS (TABLE SCSTATS_DB2STATISTICS IN TBLSP_WLM PCTDEACTIVATE 100),
    WCSTATS (TABLE WCSTATS_DB2STATISTICS IN TBLSP_WLM PCTDEACTIVATE 100),
    WLSTATS (TABLE WLSTATS_DB2STATISTICS IN TBLSP_WLM PCTDEACTIVATE 100),
    QSTATS (TABLE QSTATS_DB2STATISTICS IN TBLSP_WLM PCTDEACTIVATE 100),
    HISTOGRAMBIN (TABLE HISTOGRAMBIN_DB2STATISTICS IN TBLSP_WLM PCTDEACTIVATE 100),
    CONTROL (TABLE CONTROL_DB2STATISTICS IN TBLSP_WLM PCTDEACTIVATE 100)
  AUTOSTART;
- Create Service Classes, Service Subclasses and Workloads
The DDL for creating the service class and subclass:
CREATE SERVICE CLASS REPORTING;
CREATE SERVICE CLASS SLARPTSC UNDER REPORTING;
The DDL for creating the workload:
CREATE WORKLOAD SLARPTWL
  ADDRESS ('188.8.131.52', '184.108.40.206')
  SERVICE CLASS SLARPTSC UNDER REPORTING;
Once the service class and workloads are created, you can always find information about them via the following:
a) System Catalog views like syscat.serviceclasses and syscat.workloads
b) The db2pd utility
• db2pd -db TESTDB -serviceclasses
• db2pd -db TESTDB -workloads
c) Table functions
SELECT SUBSTR(WORKLOAD_NAME, 1, 22) AS WL_DEF_NAME,
       WLO_COMPLETED_TOTAL,
       CONCURRENT_WLO_ACT_TOP
FROM TABLE(WLM_GET_WORKLOAD_STATS(CAST(NULL AS VARCHAR(128)), -2)) AS WLSTATS;

WL_DEF_NAME            WLO_COMPLETED_TOTAL  CONCURRENT_WLO_ACT_TOP
---------------------- -------------------- ----------------------
SLARPTWL                                  0                      1
SYSDEFAULTUSERWORKLOAD                  190                      2
SYSDEFAULTADMWORKLOAD                     0                      0
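As an example of option (a), the catalog views can be queried directly to verify the objects we created. A sketch:

```sql
-- List service classes with their parent superclass.
SELECT SUBSTR(SERVICECLASSNAME, 1, 20) AS SERVICE_CLASS,
       SUBSTR(PARENTSERVICECLASSNAME, 1, 20) AS SUPERCLASS,
       ENABLED
FROM SYSCAT.SERVICECLASSES;

-- List workloads and the service class each one maps to.
SELECT SUBSTR(WORKLOADNAME, 1, 20) AS WORKLOAD,
       SUBSTR(SERVICECLASSNAME, 1, 20) AS SERVICE_CLASS,
       ENABLED
FROM SYSCAT.WORKLOADS;
```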
- Enabling Monitoring Data Collection
Performance monitoring data collection can be enabled at an activity level as well as at an aggregate level. Please look up the CREATE SERVICE CLASS and CREATE WORKLOAD statements in the IBM Knowledge Center for more information on data collection at the activity and aggregate levels.
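Note that the COLLECT clauses can also be specified when the objects are first created, instead of with ALTER statements afterwards. A sketch using the same objects as in this article:

```sql
-- Alternative: enable collection at creation time rather than via ALTER.
CREATE SERVICE CLASS SLARPTSC UNDER REPORTING
  COLLECT AGGREGATE ACTIVITY DATA EXTENDED;

CREATE WORKLOAD SLARPTWL
  ADDRESS ('188.8.131.52', '184.108.40.206')
  SERVICE CLASS SLARPTSC UNDER REPORTING
  COLLECT AGGREGATE ACTIVITY DATA;
```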
In this case, we are going to collect extended aggregate activity data and base aggregate request data at the service class level, and extended aggregate activity data at the workload level.
The DDL to do this is:
ALTER SERVICE CLASS SLARPTSC UNDER REPORTING COLLECT AGGREGATE ACTIVITY DATA EXTENDED;
ALTER SERVICE CLASS SLARPTSC UNDER REPORTING COLLECT AGGREGATE REQUEST DATA BASE;
ALTER WORKLOAD SLARPTWL COLLECT AGGREGATE ACTIVITY DATA EXTENDED;
Whenever you want to drop a service class or workload, you need to disable it first. The DROP SERVICE CLASS statement for a subclass must also specify the superclass it belongs to (the UNDER clause); otherwise you will be unable to drop it.
Below are statements we can use to drop the service classes and workloads we created earlier.
ALTER WORKLOAD SLARPTWL DISABLE;
DROP WORKLOAD SLARPTWL;
ALTER SERVICE CLASS SLARPTSC UNDER REPORTING DISABLE;
DROP SERVICE CLASS SLARPTSC UNDER REPORTING;
ALTER SERVICE CLASS REPORTING DISABLE;
DROP SERVICE CLASS REPORTING;
- Granting permission to use the workload
You must grant the appropriate permission to use the workload. In this case, we grant USAGE on the workload to the group WLMUSER:
GRANT USAGE ON WORKLOAD SLARPTWL TO WLMUSER;
Please note: you need to ensure that all users connecting from the two hosts are members of the group 'wlmuser'; otherwise they will not be mapped to the workload.
- Data Collection
Now that we have set up the infrastructure, we need to collect performance monitoring data and load the data into event monitoring tables.
This can be done two ways:
• CALL SYSPROC.WLM_COLLECT_STATS()
db2 => CALL SYSPROC.WLM_COLLECT_STATS()

  Return Status = 0
This procedure manually loads the performance monitoring data from memory into the WLM event monitor tables and then resets the in-memory data.
• The database configuration parameter WLM_COLLECT_INT
This parameter automates collection: the data is collected every N minutes, where N is the value provided. Use the 'get db cfg' and 'update db cfg' commands to view and update this database configuration parameter.
db2 get db cfg for testdb | grep WLM_COLLECT_INT
 WLM Collection Interval (minutes)     (WLM_COLLECT_INT) = 240

db2 update db cfg for testdb using WLM_COLLECT_INT 240
Details on monitoring data collection process
DB2 workload management objects can be used to collect two kinds of data:
a) Aggregate statistics – These gather aggregate-level metrics like average execution time, average CPU utilization, etc. This is an inexpensive way to look at service classes as a whole.
b) Activity information – This collects granular activity level information and can be used to troubleshoot a specific issue. This can be very resource intensive.
Whenever a service class, workload, or work class is enabled to collect either statistics or activity level data, monitoring data gets saved in memory. Table functions like WLM_GET_SERVICE_SUBCLASS_STATS and WLM_GET_WORKLOAD_STATS can be used to look at statistics stored in memory. This is covered in greater detail in part three of this series.
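For example, the in-memory statistics for our service subclass could be inspected with a query along these lines:

```sql
-- In-memory statistics for all service subclasses on all members (-2).
SELECT SUBSTR(SERVICE_SUPERCLASS_NAME, 1, 20) AS SUPERCLASS,
       SUBSTR(SERVICE_SUBCLASS_NAME, 1, 20) AS SUBCLASS,
       COORD_ACT_COMPLETED_TOTAL,
       COORD_ACT_LIFETIME_AVG
FROM TABLE(WLM_GET_SERVICE_SUBCLASS_STATS(
       CAST(NULL AS VARCHAR(128)),
       CAST(NULL AS VARCHAR(128)),
       -2)) AS SCSTATS;
```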
Calling the SYSPROC.WLM_COLLECT_STATS() procedure, or setting the WLM_COLLECT_INT database configuration parameter to a non-zero value, triggers the statistics event monitor to move the monitoring data from memory into the event monitor tables we created earlier. It is important to note that both methods reset the in-memory monitoring data after each collection; this reset is what makes statistics trending possible.
At this point you should be able to see the following event monitor tables. Notice the *_DB2ACTIVITIES tables, which hold activity-level information, and the *_DB2STATISTICS tables, which hold aggregate performance data.
>db2 list tables for schema db2inst01

Table/View                      Schema          Type  Creation time
------------------------------- --------------- ----- --------------------------
ACTIVITYSTMT_DB2ACTIVITIES      db2inst01       T     2014-05-28-22.214.171.1241735
ACTIVITYVALS_DB2ACTIVITIES      db2inst01       T     2014-05-28-126.96.36.1997127
ACTIVITY_DB2ACTIVITIES          db2inst01       T     2014-05-28-188.8.131.529954
CONTROL_DB2ACTIVITIES           db2inst01       T     2014-05-28-184.108.40.2069138
CONTROL_DB2STATISTICS           db2inst01       T     2014-05-28-220.127.116.116357
HISTOGRAMBIN_DB2STATISTICS      db2inst01       T     2014-05-28-18.104.22.1685142
QSTATS_DB2STATISTICS            db2inst01       T     2014-05-28-22.214.171.1243199
SCSTATS_DB2STATISTICS           db2inst01       T     2014-05-28-126.96.36.1996101
WCSTATS_DB2STATISTICS           db2inst01       T     2014-05-28-188.8.131.522970
WLSTATS_DB2STATISTICS           db2inst01       T     2014-05-28-184.108.40.2063555
Pruning data from WLM tables over time
There is no utility or command to prune data from the WLM event monitor tables; you need to write your own DELETE statements for these tables.
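For example, assuming a 30-day retention window (the cutoff here is illustrative), a scheduled job could run DELETE statements such as:

```sql
-- Prune aggregate statistics older than 30 days.
DELETE FROM SCSTATS_DB2STATISTICS
WHERE STATISTICS_TIMESTAMP < CURRENT TIMESTAMP - 30 DAYS;

-- Prune activity records older than 30 days; repeat for the other
-- *_DB2ACTIVITIES and *_DB2STATISTICS tables as needed.
DELETE FROM ACTIVITY_DB2ACTIVITIES
WHERE TIME_COMPLETED < CURRENT TIMESTAMP - 30 DAYS;
```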
This article covered the steps involved in implementing and setting up a WLM monitoring strategy. In the final article we shall look at some interesting performance reports and histograms pulled from this configuration.
Abhik Roy is currently associated with Experian. He has over 10 years of consulting experience in the database technology space, with a strong focus on big data technologies and parallel processing architectures. His current assignments involve DB2 on distributed platforms, Netezza, and MongoDB, and he is passionate about anything related to databases. He also has a passion for photography and loves experimenting with his camera. He can be reached at email@example.com