DB2 Administrative SQL Cookbook: Listing the Columns of a Table in a Comma Separated List

I mentioned this SQL in my article on LISTAGG, but since I made use of it myself recently, I thought it would be worth listing in my DB2 Administrative SQL Cookbook series.

Purpose

To list the columns of a table in a comma-separated list. This is particularly useful when you have a large table and need to list all columns explicitly, or need to change only one or two columns in an export statement. The statement handles a large number of columns (186 in my example) and orders the columns by their column number in the table. The table used in this example has essentially the same structure as MON_GET_BUFFERPOOL, with a couple of extra columns added.

Version

This SQL only works on DB2 9.7 and up. It is harder on earlier versions due to the lack of the LISTAGG function.

Statement

select listagg(cast(colname as varchar(10000)), ', ') within group (order by colno) as columns 
from syscat.columns 
where   tabschema='VDBA' 
        and tabname='DB2_BUFFERPOOL' 
with ur;
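One use for the generated list is pasting it into an export statement so a single column can be swapped out. A hypothetical sketch — the file name, the choice of MAIL_ID, and the literal replacement value are all illustrative, and most of the 186 columns are elided:

```sql
-- Hypothetical example: export the table, but replace the MAIL_ID
-- column with a literal instead of its stored value.
export to db2_bufferpool.del of del
  select BP_NAME, MEMBER, DBNAME, AUTOMATIC,
         -- ... paste the remaining columns from the generated list here ...
         'none' as MAIL_ID
  from VDBA.DB2_BUFFERPOOL
  with ur;
```

The point of generating the list rather than typing it is that every column stays in colno order, so the exported file lines up with the table definition for a later load.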

Sample Output

BP_NAME, MEMBER, DBNAME, AUTOMATIC, DIRECT_READS, DIRECT_READ_REQS, DIRECT_WRITES, DIRECT_WRITE_REQS, POOL_DATA_L_READS, POOL_TEMP_DATA_L_READS, POOL_XDA_L_READS, POOL_TEMP_XDA_L_READS, POOL_INDEX_L_READS, POOL_TEMP_INDEX_L_READS, POOL_DATA_P_READS, POOL_TEMP_DATA_P_READS, POOL_XDA_P_READS, POOL_TEMP_XDA_P_READS, POOL_INDEX_P_READS, POOL_TEMP_INDEX_P_READS, POOL_DATA_WRITES, POOL_XDA_WRITES, POOL_INDEX_WRITES, DIRECT_READ_TIME, DIRECT_WRITE_TIME, POOL_READ_TIME, POOL_WRITE_TIME, POOL_ASYNC_DATA_READS, POOL_ASYNC_DATA_READ_REQS, POOL_ASYNC_DATA_WRITES, POOL_ASYNC_INDEX_READS, POOL_ASYNC_INDEX_READ_REQS, POOL_ASYNC_INDEX_WRITES, POOL_ASYNC_XDA_READS, POOL_ASYNC_XDA_READ_REQS, POOL_ASYNC_XDA_WRITES, POOL_NO_VICTIM_BUFFER, POOL_LSN_GAP_CLNS, POOL_DRTY_PG_STEAL_CLNS, POOL_DRTY_PG_THRSH_CLNS, VECTORED_IOS, PAGES_FROM_VECTORED_IOS, BLOCK_IOS, PAGES_FROM_BLOCK_IOS, UNREAD_PREFETCH_PAGES, FILES_CLOSED, POOL_DATA_GBP_L_READS, POOL_DATA_GBP_P_READS, POOL_DATA_LBP_PAGES_FOUND, POOL_DATA_GBP_INVALID_PAGES, POOL_INDEX_GBP_L_READS, POOL_INDEX_GBP_P_READS, POOL_INDEX_LBP_PAGES_FOUND, POOL_INDEX_GBP_INVALID_PAGES, POOL_ASYNC_DATA_GBP_L_READS, POOL_ASYNC_DATA_GBP_P_READS, POOL_ASYNC_DATA_LBP_PAGES_FOUND, POOL_ASYNC_DATA_GBP_INVALID_PAGES, POOL_ASYNC_INDEX_GBP_L_READS, POOL_ASYNC_INDEX_GBP_P_READS, POOL_ASYNC_INDEX_LBP_PAGES_FOUND, POOL_ASYNC_INDEX_GBP_INVALID_PAGES, POOL_XDA_GBP_L_READS, POOL_XDA_GBP_P_READS, POOL_XDA_LBP_PAGES_FOUND, POOL_XDA_GBP_INVALID_PAGES, POOL_ASYNC_XDA_GBP_L_READS, POOL_ASYNC_XDA_GBP_P_READS, POOL_ASYNC_XDA_LBP_PAGES_FOUND, POOL_ASYNC_XDA_GBP_INVALID_PAGES, POOL_ASYNC_READ_TIME, POOL_ASYNC_WRITE_TIME, BP_CUR_BUFFSZ, POOL_QUEUED_ASYNC_DATA_REQS, POOL_QUEUED_ASYNC_INDEX_REQS, POOL_QUEUED_ASYNC_XDA_REQS, POOL_QUEUED_ASYNC_TEMP_DATA_REQS, POOL_QUEUED_ASYNC_TEMP_INDEX_REQS, POOL_QUEUED_ASYNC_TEMP_XDA_REQS, POOL_QUEUED_ASYNC_OTHER_REQS, POOL_QUEUED_ASYNC_DATA_PAGES, POOL_QUEUED_ASYNC_INDEX_PAGES, POOL_QUEUED_ASYNC_XDA_PAGES, POOL_QUEUED_ASYNC_TEMP_DATA_PAGES, 
POOL_QUEUED_ASYNC_TEMP_INDEX_PAGES, POOL_QUEUED_ASYNC_TEMP_XDA_PAGES, POOL_FAILED_ASYNC_DATA_REQS, POOL_FAILED_ASYNC_INDEX_REQS, POOL_FAILED_ASYNC_XDA_REQS, POOL_FAILED_ASYNC_TEMP_DATA_REQS, POOL_FAILED_ASYNC_TEMP_INDEX_REQS, POOL_FAILED_ASYNC_TEMP_XDA_REQS, POOL_FAILED_ASYNC_OTHER_REQS, SKIPPED_PREFETCH_DATA_P_READS, SKIPPED_PREFETCH_INDEX_P_READS, SKIPPED_PREFETCH_XDA_P_READS, SKIPPED_PREFETCH_TEMP_DATA_P_READS, SKIPPED_PREFETCH_TEMP_INDEX_P_READS, SKIPPED_PREFETCH_TEMP_XDA_P_READS, SKIPPED_PREFETCH_UOW_DATA_P_READS, SKIPPED_PREFETCH_UOW_INDEX_P_READS, SKIPPED_PREFETCH_UOW_XDA_P_READS, SKIPPED_PREFETCH_UOW_TEMP_DATA_P_READS, SKIPPED_PREFETCH_UOW_TEMP_INDEX_P_READS, SKIPPED_PREFETCH_UOW_TEMP_XDA_P_READS, PREFETCH_WAIT_TIME, PREFETCH_WAITS, POOL_DATA_GBP_INDEP_PAGES_FOUND_IN_LBP, POOL_INDEX_GBP_INDEP_PAGES_FOUND_IN_LBP, POOL_XDA_GBP_INDEP_PAGES_FOUND_IN_LBP, POOL_ASYNC_DATA_GBP_INDEP_PAGES_FOUND_IN_LBP, POOL_ASYNC_INDEX_GBP_INDEP_PAGES_FOUND_IN_LBP, POOL_ASYNC_XDA_GBP_INDEP_PAGES_FOUND_IN_LBP, POOL_COL_L_READS, POOL_TEMP_COL_L_READS, POOL_COL_P_READS, POOL_TEMP_COL_P_READS, POOL_COL_LBP_PAGES_FOUND, POOL_COL_WRITES, POOL_ASYNC_COL_READS, POOL_ASYNC_COL_READ_REQS, POOL_ASYNC_COL_WRITES, POOL_ASYNC_COL_LBP_PAGES_FOUND, POOL_COL_GBP_L_READS, POOL_COL_GBP_P_READS, POOL_COL_GBP_INVALID_PAGES, POOL_COL_GBP_INDEP_PAGES_FOUND_IN_LBP, POOL_ASYNC_COL_GBP_L_READS, POOL_ASYNC_COL_GBP_P_READS, POOL_ASYNC_COL_GBP_INVALID_PAGES, POOL_ASYNC_COL_GBP_INDEP_PAGES_FOUND_IN_LBP, POOL_QUEUED_ASYNC_COL_REQS, POOL_QUEUED_ASYNC_TEMP_COL_REQS, POOL_QUEUED_ASYNC_COL_PAGES, POOL_QUEUED_ASYNC_TEMP_COL_PAGES, POOL_FAILED_ASYNC_COL_REQS, POOL_FAILED_ASYNC_TEMP_COL_REQS, SKIPPED_PREFETCH_COL_P_READS, SKIPPED_PREFETCH_TEMP_COL_P_READS, SKIPPED_PREFETCH_UOW_COL_P_READS, SKIPPED_PREFETCH_UOW_TEMP_COL_P_READS, BP_PAGES_LEFT_TO_REMOVE, BP_TBSP_USE_COUNT, POOL_DATA_CACHING_TIER_L_READS, POOL_INDEX_CACHING_TIER_L_READS, POOL_XDA_CACHING_TIER_L_READS, POOL_COL_CACHING_TIER_L_READS, 
POOL_DATA_CACHING_TIER_PAGE_WRITES, POOL_INDEX_CACHING_TIER_PAGE_WRITES, POOL_XDA_CACHING_TIER_PAGE_WRITES, POOL_COL_CACHING_TIER_PAGE_WRITES, POOL_DATA_CACHING_TIER_PAGE_UPDATES, POOL_INDEX_CACHING_TIER_PAGE_UPDATES, POOL_XDA_CACHING_TIER_PAGE_UPDATES, POOL_COL_CACHING_TIER_PAGE_UPDATES, POOL_CACHING_TIER_PAGE_READ_TIME, POOL_CACHING_TIER_PAGE_WRITE_TIME, POOL_DATA_CACHING_TIER_PAGES_FOUND, POOL_INDEX_CACHING_TIER_PAGES_FOUND, POOL_XDA_CACHING_TIER_PAGES_FOUND, POOL_COL_CACHING_TIER_PAGES_FOUND, POOL_DATA_CACHING_TIER_GBP_INVALID_PAGES, POOL_INDEX_CACHING_TIER_GBP_INVALID_PAGES, POOL_XDA_CACHING_TIER_GBP_INVALID_PAGES, POOL_COL_CACHING_TIER_GBP_INVALID_PAGES, POOL_DATA_CACHING_TIER_GBP_INDEP_PAGES_FOUND, POOL_INDEX_CACHING_TIER_GBP_INDEP_PAGES_FOUND, POOL_XDA_CACHING_TIER_GBP_INDEP_PAGES_FOUND, POOL_COL_CACHING_TIER_GBP_INDEP_PAGES_FOUND, POOL_ASYNC_DATA_CACHING_TIER_READS, POOL_ASYNC_INDEX_CACHING_TIER_READS, POOL_ASYNC_XDA_CACHING_TIER_READS, POOL_ASYNC_COL_CACHING_TIER_READS, POOL_ASYNC_DATA_CACHING_TIER_PAGE_WRITES, POOL_ASYNC_INDEX_CACHING_TIER_PAGE_WRITES, POOL_ASYNC_XDA_CACHING_TIER_PAGE_WRITES, POOL_ASYNC_COL_CACHING_TIER_PAGE_WRITES, POOL_ASYNC_DATA_CACHING_TIER_PAGE_UPDATES, POOL_ASYNC_INDEX_CACHING_TIER_PAGE_UPDATES, POOL_ASYNC_XDA_CACHING_TIER_PAGE_UPDATES, POOL_ASYNC_COL_CACHING_TIER_PAGE_UPDATES, POOL_ASYNC_DATA_CACHING_TIER_PAGES_FOUND, POOL_ASYNC_INDEX_CACHING_TIER_PAGES_FOUND, POOL_ASYNC_XDA_CACHING_TIER_PAGES_FOUND, POOL_ASYNC_COL_CACHING_TIER_PAGES_FOUND, POOL_ASYNC_DATA_CACHING_TIER_GBP_INVALID_PAGES, POOL_ASYNC_INDEX_CACHING_TIER_GBP_INVALID_PAGES, POOL_ASYNC_XDA_CACHING_TIER_GBP_INVALID_PAGES, POOL_ASYNC_COL_CACHING_TIER_GBP_INVALID_PAGES, POOL_ASYNC_DATA_CACHING_TIER_GBP_INDEP_PAGES_FOUND, POOL_ASYNC_INDEX_CACHING_TIER_GBP_INDEP_PAGES_FOUND, POOL_ASYNC_XDA_CACHING_TIER_GBP_INDEP_PAGES_FOUND, POOL_ASYNC_COL_CACHING_TIER_GBP_INDEP_PAGES_FOUND, BASELINE_TIMESTAMP, TIMESTAMP, MAIL_ID 
Ember Crooks

Ember is always curious and thrives on change. She has built internationally recognized expertise in IBM Db2, spent a year working with high-volume MySQL, and is now learning Snowflake. Ember shares posts about both her core skill sets and her journey learning Snowflake.

Ember lives in Denver and works from home.

Articles: 545

7 Comments

  1. Since you are covering LISTAGG, I thought you might like an XMLAGG example as well. The code below takes a string, breaks it up into chunks of a specified length, and inserts them into a table as individual rows; the view shows them as one long string. I got tired of apps asking to make a comment string larger. Now they can put in whatever size comment they want, and I never have to alter a table again.

    drop view comment_tab_v;
    drop table comment_tab;

    create table comment_tab (key int not null, seq int not null, chunk varchar(10), primary key(key, seq));

    commit;

    create view comment_tab_v (key, str) as
    (select key, XMLSERIALIZE(
    XMLAGG( XMLTEXT( chunk))AS CLOB(4096000) ) as str
    from comment_tab
    group by key);

    create trigger comment_tab_v_iir
    instead of insert
    on comment_tab_v
    referencing new as n
    for each row
    insert into comment_tab
    with a(seq, start_pos, len) as (
    select 1, 1, min(length(n.str), 10) from sysibm.sysdummy1
    union all
    select a.seq+1, a.start_pos + a.len, min(length(n.str) - (a.start_pos + a.len) + 1, 10)
    from a
    where min(length(n.str) - (a.start_pos + a.len) + 1, 10) > 0
    )
    select n.key, a.seq, substr(n.str, a.start_pos, a.len) chunk
    from a;

    --#SET TERMINATOR ?

    create trigger comment_tab_v_uir
    instead of update
    on comment_tab_v
    referencing new as n
    old as o
    for each row MODE DB2SQL
    begin atomic
    delete from comment_tab
    where o.key = key;
    insert into comment_tab
    with a(seq, start_pos, len) as (
    select 1, 1, min(length(n.str), 10) from sysibm.sysdummy1
    union all
    select a.seq+1, a.start_pos + a.len, min(length(n.str) - (a.start_pos + a.len) + 1, 10)
    from a
    where min(length(n.str) - (a.start_pos + a.len) + 1, 10) > 0
    )
    select n.key, a.seq, substr(n.str, a.start_pos, a.len) chunk
    from a;

    end?

    --#SET TERMINATOR ;
    create trigger comment_tab_v_dir
    instead of delete
    on comment_tab_v
    referencing old as o
    for each row
    delete from comment_tab
    where o.key = key;

    insert into comment_tab_v values (2, 'this is a test of the new clob size of 4096000 to see if it works as well as 4000.');

    update comment_tab_v
    set str = 'this is an updated test of the new clob size of 4096000 to see if it works as well as 4000. If it does we are looking to create the comment table on LUW and federate to it from the mainframe using a three part name.'
    where key =2;

    --delete from comment_tab_v
    --where key = 2;

    select * from comment_tab_v where key=2;
    select * from comment_tab where key=2;

    • As I understand it, XMLAGG became available first. It is probably the better option for strings larger than 32,768 bytes. I see your reasoning for this approach, but I’m not sure it would be my choice. I’d have to test it against other options with a volume of data to understand whether this is better or just taking the hit on administration time for properly sized columns.
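    For comparison, the article's column list can also be built with XMLAGG on versions that lack LISTAGG. This is an untested sketch: each column name is prefixed with the separator inside XMLTEXT, the aggregate is serialized to a CLOB, and SUBSTR trims the leading ", " back off.

    ```sql
    select substr(
             xmlserialize(
               xmlagg(xmltext(', ' || colname) order by colno) as clob(1m)),
             3) as columns
    from syscat.columns
    where tabschema = 'VDBA'
      and tabname   = 'DB2_BUFFERPOOL'
    with ur;
    ```

    Because the result is a CLOB rather than a VARCHAR, this variant also sidesteps the 32 KB ceiling mentioned above.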

  2. Doesn’t work on DB2 9.7 FP 7

    G:\db2\db2per\db2_software\BIN\dennis>db2 -mtvf list_col.sql
    select listagg(cast(colname as varchar(10000)), ', ') within group (order by colno) as columns from syscat.columns where tabschema='SAPPER'
    and tabname='VBPA' with ur
    SQL0440N No authorized routine named “LISTAGG” of type “FUNCTION” having
    compatible arguments was found. SQLSTATE=42884

    G:\db2\db2per\db2_software\BIN\dennis>db2level
    DB21085I This instance or install (instance name, where applicable: "DB2PER")
    uses "64" bits and DB2 code release "SQL09077" with level identifier
    "08080107".
    Informational tokens are "DB2 v9.7.700.552", "s121002", "IP23369", and Fix Pack
    "7".
    Product is installed at "G:\db2\db2per\db2_software" with DB2 Copy Name
    "SAPDB2PER".

    G:\db2\db2per\db2_software\BIN\dennis>

  3. Hi Ember,

    I recently found myself in this kind of need. Just sharing another approach using recursive common table expression:

    with cte ( schema , table, colno, list_of_columns )
    as
    (
    select rtrim(base.tabschema), rtrim(base.tabname), base.colno, cast(rtrim(base.colname) as clob)
    from syscat.columns base
    where base.tabschema = 'DB2INST1' and base.tabname = 'EMPLOYEE' and base.colno = 0

    union all

    select rtrim(t1.tabschema), rtrim(t1.tabname), t1.colno, cast(rtrim(t0.list_of_columns) as clob) || ', ' || cast(rtrim(t1.colname) as clob)
    from cte t0, syscat.columns t1
    where t0.schema = t1.tabschema and t0.table = t1.tabname
    and t1.colno = t0.colno + 1
    )
    select schema, colno, cast(list_of_columns as clob) from cte

    SCHEMA COLNO LIST_OF_COLUMNS
    -------- ----- ------------------------------------------------------------------------------------------------------------------
    DB2INST1 0 EMPNO
    DB2INST1 1 EMPNO, FIRSTNME
    DB2INST1 2 EMPNO, FIRSTNME, MIDINIT
    DB2INST1 3 EMPNO, FIRSTNME, MIDINIT, LASTNAME
    DB2INST1 4 EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT
    DB2INST1 5 EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO
    DB2INST1 6 EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO, HIREDATE
    DB2INST1 7 EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO, HIREDATE, JOB
    DB2INST1 8 EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO, HIREDATE, JOB, EDLEVEL
    DB2INST1 9 EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO, HIREDATE, JOB, EDLEVEL, SEX
    DB2INST1 10 EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO, HIREDATE, JOB, EDLEVEL, SEX, BIRTHDATE
    DB2INST1 11 EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO, HIREDATE, JOB, EDLEVEL, SEX, BIRTHDATE, SALARY
    DB2INST1 12 EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO, HIREDATE, JOB, EDLEVEL, SEX, BIRTHDATE, SALARY, BONUS
    DB2INST1 13 EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO, HIREDATE, JOB, EDLEVEL, SEX, BIRTHDATE, SALARY, BONUS, COMM
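    If only the final, complete list is wanted rather than every intermediate row, the final SELECT in the statement above could be replaced with a filter on the highest colno — a sketch along these lines, reusing the cte defined there:

    ```sql
    -- replaces "select schema, colno, cast(list_of_columns as clob) from cte"
    select cast(list_of_columns as clob) as columns
    from cte
    where colno = (select max(colno) from cte);
    ```

    That returns the single row whose accumulated string covers all of the table's columns.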

    Regards,

    Luiz
