DB2 Workload Manager (WLM) as a Monitoring Solution – How to Set up WLM (Part 2 of 3)

Posted by Abhik Roy
This is part two of a three part series. The full series includes:
DB2 Workload Manager (WLM) as a Monitoring Solution – Understanding WLM (Part 1 of 3)
DB2 Workload Manager (WLM) as a Monitoring Solution – How to Set up WLM (Part 2 of 3)
DB2 Workload Manager (WLM) as a Monitoring Solution – Analyzing WLM Information (Part 3 of 3)

The previous article, DB2 Workload Manager (WLM) as a Monitoring Solution – Understanding WLM (Part 1 of 3), defined what the DB2 Workload Manager is and described the various database objects that make up the DB2 Workload Management ecosystem.
This article explores the steps needed to set up a very simple monitoring strategy.

Process Overview

First, define a monitoring strategy – what do you want to monitor and why?
Second, create a tablespace that will hold the monitoring data.
Third, create event monitors that will capture activity and aggregate statistics data for the database activities of interest. These are special event monitors associated with WLM.
Fourth, implement the monitoring strategy by defining a service superclass, service subclasses, and workloads around it.
Finally, turn on the ‘switch’ that starts collecting monitoring data.
A flowchart of the process:

[Flowchart: the WLM monitoring setup process]

Detailed Plan for Implementation

  1. Define Monitoring Strategy
    In this example, we have a reporting application that runs reports daily. Other applications also interact with the same database, but we are only interested in targeting our monitoring efforts at the reporting application.
    The reporting application runs from two hosts whose IP addresses are 101.7.64.98 and 101.7.64.99. We shall create a service superclass called REPORTING and a service subclass called SLARPTSC. We shall then create a workload called SLARPTWL and define it to pull all activities coming from these two hosts and route them to the SLARPTSC service subclass under the REPORTING superclass.
  2. Creating the Monitoring tablespace
    In our exercise we are using a single-partition database, and we will create the tablespace using automatic storage. The DDL that can be used:

    CREATE LARGE TABLESPACE "TBLSP_WLM" IN DATABASE PARTITION GROUP IBMDEFAULTGROUP
             PAGESIZE 4096 MANAGED BY AUTOMATIC STORAGE
             USING STOGROUP "IBMSTOGROUP"
             AUTORESIZE YES
             INITIALSIZE 1 M
             INCREASESIZE 20 M
             MAXSIZE 5 G
             EXTENTSIZE 16
             PREFETCHSIZE AUTOMATIC
             BUFFERPOOL "BP4K"
             DATA TAG INHERIT
             OVERHEAD 7.500000
             TRANSFERRATE 0.060000
             NO FILE SYSTEM CACHING
             DROPPED TABLE RECOVERY ON;

    If you are creating a WLM solution for a DB2 DPF environment, make sure that the monitoring tablespace is spread across all database partitions that exist in the database.
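
    A minimal sketch of what that could look like (the partition group name PG_ALLNODES is made up for this example and is not part of the configuration used in this article):

    -- Define a partition group that spans every database partition
    CREATE DATABASE PARTITION GROUP PG_ALLNODES ON ALL DBPARTITIONNUMS;

    -- Create the monitoring tablespace in that partition group so the
    -- WLM event monitor tables exist on every partition
    CREATE LARGE TABLESPACE TBLSP_WLM IN DATABASE PARTITION GROUP PG_ALLNODES
        PAGESIZE 4096 MANAGED BY AUTOMATIC STORAGE;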

  3. Create the WLM event monitors

    The WLM event monitors are special write-to-table event monitors that capture monitoring data and load it into physical tables. The activity event monitor captures information on specific database activities, while the statistics event monitor captures aggregate performance data.
    The DDL we used is below.

    --  Define activity event monitor as DB2ACTIVITIES
    CREATE EVENT MONITOR DB2ACTIVITIES
        FOR ACTIVITIES
        WRITE TO TABLE
        ACTIVITY (TABLE ACTIVITY_DB2ACTIVITIES
                  IN TBLSP_WLM
                  PCTDEACTIVATE 100),
        ACTIVITYSTMT (TABLE ACTIVITYSTMT_DB2ACTIVITIES
                      IN TBLSP_WLM
                      PCTDEACTIVATE 100),
        ACTIVITYVALS (TABLE ACTIVITYVALS_DB2ACTIVITIES
                      IN TBLSP_WLM
                      PCTDEACTIVATE 100),
        CONTROL (TABLE CONTROL_DB2ACTIVITIES
                 IN TBLSP_WLM
                 PCTDEACTIVATE 100)
        AUTOSTART;
    
    --  Define statistics event monitor as DB2STATISTICS
    CREATE EVENT MONITOR DB2STATISTICS
        FOR STATISTICS
        WRITE TO TABLE
        SCSTATS (TABLE SCSTATS_DB2STATISTICS
                 IN TBLSP_WLM
                 PCTDEACTIVATE 100),
        WCSTATS (TABLE WCSTATS_DB2STATISTICS
                 IN TBLSP_WLM
                 PCTDEACTIVATE 100),
        WLSTATS (TABLE WLSTATS_DB2STATISTICS
                 IN TBLSP_WLM
                 PCTDEACTIVATE 100),
        QSTATS  (TABLE QSTATS_DB2STATISTICS
                 IN TBLSP_WLM
                 PCTDEACTIVATE 100),
        HISTOGRAMBIN (TABLE HISTOGRAMBIN_DB2STATISTICS
                      IN TBLSP_WLM
                      PCTDEACTIVATE 100),
        CONTROL (TABLE CONTROL_DB2STATISTICS
                 IN TBLSP_WLM
                 PCTDEACTIVATE 100)
        AUTOSTART;
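
    The AUTOSTART option activates these event monitors when the database is activated. To confirm that they are active, or to activate them manually, something like the following can be used (EVENT_MON_STATE returns 1 when a monitor is active, 0 when it is not):

    --  Check the state of the WLM event monitors
    SELECT EVMONNAME, EVENT_MON_STATE(EVMONNAME) AS STATE
        FROM SYSCAT.EVENTMONITORS
        WHERE EVMONNAME IN ('DB2ACTIVITIES', 'DB2STATISTICS');

    --  Activate them manually if they are not yet active
    SET EVENT MONITOR DB2ACTIVITIES STATE 1;
    SET EVENT MONITOR DB2STATISTICS STATE 1;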
  4. Create Service Classes, Service Subclasses and Workloads

    The DDL for creating the service class and subclass:

    CREATE SERVICE CLASS REPORTING;
    CREATE SERVICE CLASS SLARPTSC UNDER REPORTING;

    The DDL for creating the workload:

    CREATE WORKLOAD SLARPTWL ADDRESS ('101.7.64.98', '101.7.64.99') SERVICE CLASS SLARPTSC UNDER REPORTING;

    Once the service class and workloads are created, you can always find information about them via the following.

    a) System catalog views like syscat.serviceclasses and syscat.workloads (see the catalog query sketch after this list)
    b) The db2pd utility

    •  db2pd -db TESTDB -serviceclasses
    •  db2pd -db TESTDB -workloads

    c) Table functions

    SELECT SUBSTR(WORKLOAD_NAME, 1, 22) AS WL_DEF_NAME,
           WLO_COMPLETED_TOTAL,
           CONCURRENT_WLO_ACT_TOP
    FROM TABLE(WLM_GET_WORKLOAD_STATS(CAST(NULL AS VARCHAR(128)), -2)) AS WLSTATS;

    WL_DEF_NAME             WLO_COMPLETED_TOTAL CONCURRENT_WLO_ACT_TOP
    ---------------------- -------------------- ----------------------
    SLARPTWL                                  0                      1
    SYSDEFAULTUSERWORKLOAD                  190                      2
    SYSDEFAULTADMWORKLOAD                     0                      0
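
    As an example, the catalog views from item (a) can be queried like this (a minimal sketch; the column names are taken from the DB2 catalog documentation, so verify them against your version):

    -- Service classes, their parents, and whether they are enabled
    SELECT SERVICECLASSNAME, PARENTSERVICECLASSNAME, ENABLED
        FROM SYSCAT.SERVICECLASSES;

    -- Workloads and the service classes they map to
    SELECT WORKLOADNAME, SERVICECLASSNAME, PARENTSERVICECLASSNAME, ENABLED
        FROM SYSCAT.WORKLOADS;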
  5. Enabling Monitoring Data Collection

    Performance monitoring data collection can be enabled at the activity level as well as at an aggregate level. See the CREATE SERVICE CLASS and CREATE WORKLOAD statements in the IBM Knowledge Center for more information on data collection at the activity and aggregate levels.
    In this case, we are going to collect aggregate activity data (EXTENDED) and aggregate request data (BASE) at the service class level, and aggregate activity data (EXTENDED) at the workload level.

    Note: the link below points to a useful tutorial on WLM in the IBM Knowledge Center
    (http://www-01.ibm.com/support/knowledgecenter/?lang=en#!/SSEPGG_9.7.0/com.ibm.db2.luw.admin.wlm.doc/doc/c0053139.html).

    The DDL to do this is:

    ALTER SERVICE CLASS SLARPTSC UNDER REPORTING COLLECT AGGREGATE ACTIVITY DATA EXTENDED;
    ALTER SERVICE CLASS SLARPTSC UNDER REPORTING COLLECT AGGREGATE REQUEST DATA BASE;
    ALTER WORKLOAD SLARPTWL COLLECT AGGREGATE ACTIVITY DATA EXTENDED;
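
    To confirm that the collection settings took effect, the corresponding catalog columns can be checked (a quick sketch; COLLECTAGGACTDATA and COLLECTAGGREQDATA are the documented catalog columns, but verify them against your DB2 version):

    SELECT SERVICECLASSNAME, COLLECTAGGACTDATA, COLLECTAGGREQDATA
        FROM SYSCAT.SERVICECLASSES
        WHERE SERVICECLASSNAME = 'SLARPTSC';

    SELECT WORKLOADNAME, COLLECTAGGACTDATA
        FROM SYSCAT.WORKLOADS
        WHERE WORKLOADNAME = 'SLARPTWL';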

    Whenever you want to drop a service class or workload, you need to disable it first. When dropping a service subclass, you also need to specify the superclass it belongs to (the UNDER clause); otherwise you will be unable to drop it.
    Below are the statements we could use to drop the service classes and workloads we created earlier.

    ALTER WORKLOAD SLARPTWL DISABLE;
    DROP WORKLOAD SLARPTWL;
    ALTER SERVICE CLASS SLARPTSC UNDER REPORTING DISABLE;
    DROP SERVICE CLASS SLARPTSC UNDER REPORTING;
    ALTER SERVICE CLASS REPORTING DISABLE;
    DROP SERVICE CLASS REPORTING;
  6. Granting permission to use the workload

    You must grant the appropriate permissions to use the workload. In this case, we grant USAGE on the workload to the group WLMUSER:

    GRANT USAGE ON WORKLOAD SLARPTWL TO GROUP WLMUSER;

    Please note: you need to ensure that all users connecting from the two hosts are members of the group ‘wlmuser’; otherwise they will not be mapped to the workload.
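
    To see who currently holds USAGE privilege on the workload, the SYSCAT.WORKLOADAUTH view can be queried (column names assumed from the catalog documentation; verify them in your version):

    SELECT WORKLOADNAME, GRANTEE, GRANTEETYPE, USAGEAUTH
        FROM SYSCAT.WORKLOADAUTH
        WHERE WORKLOADNAME = 'SLARPTWL';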

  7. Data Collection

    Now that we have set up the infrastructure, we need to collect the performance monitoring data and load it into the event monitor tables.

    This can be done in two ways:

    • CALL SYSPROC.WLM_COLLECT_STATS()

    db2 => CALL SYSPROC.WLM_COLLECT_STATS()
    Return Status = 0

    This procedure manually loads the performance monitoring data into the WLM event monitor tables and resets the in-memory statistics.
    • The database configuration parameter WLM_COLLECT_INT
    This parameter automates the collection of data every N minutes, where N is the value you provide. Use the GET DB CFG and UPDATE DB CFG commands to view and update this database configuration parameter.

    db2 get db cfg for testdb | grep WLM_COLLECT_INT
    WLM Collection Interval (minutes)     (WLM_COLLECT_INT) = 240
    
    db2 update db cfg for testdb using WLM_COLLECT_INT 240

Details on the monitoring data collection process

DB2 workload management objects can be used to collect two kinds of data:
a) Aggregate statistics – these gather aggregate-level metrics such as average execution time and average CPU utilization. This is an inexpensive way to look at service classes as a whole.
b) Activity information – this collects granular, activity-level information and can be used to troubleshoot a specific issue. It can be very resource intensive.
Whenever a service class, workload, or work class is enabled to collect either statistics or activity-level data, the monitoring data is kept in memory. Table functions like WLM_GET_SERVICE_SUBCLASS_STATS and WLM_GET_WORKLOAD_STATS can be used to look at the statistics stored in memory; this is covered in greater detail in part three of this series.
By calling the SYSPROC.WLM_COLLECT_STATS() procedure, or by setting the WLM_COLLECT_INT database configuration parameter to a non-zero value, we prompt the statistics and activity event monitors to move the data from memory into the WLM event monitor tables we created earlier. It is important to note that any call to SYSPROC.WLM_COLLECT_STATS(), and any automatic collection driven by WLM_COLLECT_INT, resets the monitoring data in memory; this is what allows statistics trends to be built up over time from the rows written to the event monitor tables.
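
As a small preview of part three, the in-memory service class statistics can be queried in the same style as the WLM_GET_WORKLOAD_STATS example earlier (a sketch only; the column names shown are from the DB2 documentation and should be verified for your release):

SELECT SUBSTR(SERVICE_SUPERCLASS_NAME, 1, 19) AS SUPERCLASS,
       SUBSTR(SERVICE_SUBCLASS_NAME, 1, 18) AS SUBCLASS,
       COORD_ACT_COMPLETED_TOTAL,
       COORD_ACT_LIFETIME_AVG
FROM TABLE(WLM_GET_SERVICE_SUBCLASS_STATS(CAST(NULL AS VARCHAR(128)),
                                          CAST(NULL AS VARCHAR(128)), -2)) AS SCSTATS;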

At this point you should be able to see the following event monitor tables. Notice the *_DB2ACTIVITIES tables, which hold activity-level information, and the *_DB2STATISTICS tables, which hold aggregate performance data.

>db2 list tables for schema db2inst01

Table/View                      Schema          Type  Creation time
------------------------------- --------------- ----- --------------------------
ACTIVITYSTMT_DB2ACTIVITIES      db2inst01       T     2014-05-28-15.57.53.761735
ACTIVITYVALS_DB2ACTIVITIES      db2inst01       T     2014-05-28-15.57.53.797127
ACTIVITY_DB2ACTIVITIES          db2inst01       T     2014-05-28-15.57.53.729954
CONTROL_DB2ACTIVITIES           db2inst01       T     2014-05-28-15.57.53.719138
CONTROL_DB2STATISTICS           db2inst01       T     2014-05-28-15.57.53.826357
HISTOGRAMBIN_DB2STATISTICS      db2inst01       T     2014-05-28-15.57.53.915142
QSTATS_DB2STATISTICS            db2inst01       T     2014-05-28-15.57.53.903199
SCSTATS_DB2STATISTICS           db2inst01       T     2014-05-28-15.57.53.836101
WCSTATS_DB2STATISTICS           db2inst01       T     2014-05-28-15.57.53.862970
WLSTATS_DB2STATISTICS           db2inst01       T     2014-05-28-15.57.53.873555

Pruning data from WLM tables over time

There is no utility or command to prune data from the WLM event monitor tables; you need to write your own DELETE statements, as sketched below.
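
Assuming a retention window of 90 days for aggregate statistics and 30 days for activity detail (the timestamp column names STATISTICS_TIMESTAMP and TIME_COMPLETED are typical of what the event monitors create, but confirm them in your own tables first), the deletes could look like this:

-- Prune aggregate statistics older than 90 days
DELETE FROM SCSTATS_DB2STATISTICS
    WHERE STATISTICS_TIMESTAMP < CURRENT TIMESTAMP - 90 DAYS;

-- Prune activity detail older than 30 days
DELETE FROM ACTIVITY_DB2ACTIVITIES
    WHERE TIME_COMPLETED < CURRENT TIMESTAMP - 30 DAYS;

The related ACTIVITYSTMT_DB2ACTIVITIES and ACTIVITYVALS_DB2ACTIVITIES tables would need matching deletes so they stay consistent with the activity table.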

This article covered the steps involved in setting up a WLM monitoring strategy. In the final article we shall look at some interesting performance reports and histograms pulled from this configuration.

Abhik Roy is currently associated with Experian. He has over 10 years of consulting experience in the database technology space with a strong focus on big data technologies and parallel processing architectures. His current assignments involve DB2 on distributed platforms, Netezza, and Mongo, and he is passionate about anything related to databases. He also has a passion for photography and loves experimenting with his camera. He can be reached at roy.abhik@gmail.com.


7 comments

  1. Hi Mr. Roy,

    I was reading your excellent series on WLM that I found on the blog site yesterday. I enjoyed learning more about WLM and wanted to try to implement such a monitoring strategy using your example. I have a few questions: in item 4 – Create Service Classes, Service Subclasses and Workloads – I think you omitted the DDL for creating the workload(s), or am I missing something?
    Could you possibly update the post with the DDL to create the workload for the example that you used in the post?

    Also, I have searched for part 3 of your series but could not find it. Could you please provide a link to DB2 Workload Manager (WLM) as a Monitoring Solution – Analyzing WLM Information (Part 3 of 3)?

    Finally, when we talk about DB2ACTIVITIES, this includes when the activity may be loading data into the tables, in short any type of utility activity on the database, correct?

    Again, thank you for an excellent technical series worth reading! Actually, just about everything that you get from Ember Crooks’s blog is worth reading!!

    Thanks
    Phillips

  2. Hello Phillips

    I am so glad you found my article useful. Yes, you are correct: I missed the CREATE WORKLOAD statement. I have added it now.

    Part 3 is not out yet; it should be available within the next 3-4 weeks.

    Yes, the DB2ACTIVITIES could be any type of database activity generated through the targeted workload.

    I hope I answered your questions.

    Cheers

    Abhik

    1. Abhik,

      Thanks a million for the response to my questions regarding DB2 Workload Manager (WLM) as a Monitoring Solution– How to Set up WLM (Part 2 of 3). I look forward to the next installment in this series…

      Phillips

  3. Hi Abhik,

    Thanks for the detailed article. I am just curious to know whether the third part, DB2 Workload Manager (WLM) as a Monitoring Solution – Analyzing WLM Information, was ever released? Could you please send me a link to the article?

  4. This sentence is not correct:
    db2 update db cfg for testdb using grep WLM_COLLECT_INT 240
    It should be
    db2 update db cfg for testdb using WLM_COLLECT_INT 240
