Working with BLU has been a glorious learning experience for me. I say that truly, with both excitement and frustration. Whenever I have a DBA-character-building experience, I’m bound to share it with my readers.
A client has a small (1.7 TB) DPF data warehouse/ODS that I’m helping them move from 9.5 to 10.5 using BLU. Only a small subset of the data needs to be moved. I started by trying to load from a pipe, thinking that would be the fastest, and learned a lot along the way about how LOAD works for column-organized tables.
The Phases of LOAD for Column-Organized Tables
There are four major phases of LOAD for column-organized tables:
- ANALYZE
- LOAD
- BUILD
- DELETE
On my first large table, I found that the ANALYZE phase took 2.5 hours, the LOAD phase took 20 minutes, and the BUILD phase took 8 minutes. The ANALYZE time worried me. After a lot of online searching turned up precious few real pieces of information, I reached out to some friends who work for Big Blue to better understand what was going on.
The analyze phase is critical. This is where compression dictionaries and BLU synopsis tables are built. I found references to shortening this phase by building a dictionary based on a subset of the data. But the magic of BLU is strongly dependent on really awesome compression. And really awesome compression comes from having the best possible data.
This phase is only triggered if:
- Dictionaries are needed for column-organized tables
- LOAD REPLACE, or LOAD INSERT into an empty table, is used
- KEEPDICTIONARY is not specified
What DB2 is doing during the ANALYZE phase is building histograms to track the frequency of data values across all columns. The column compression dictionaries, which use approximate Huffman encoding (the most frequent values use the least space), are built from those histograms. For compiling this information, it is absolutely critical that UTIL_HEAP_SZ is gigantic. You can use AUTOMATIC, but keep in mind that in the minimum recommended environment (8 cores, 64 GB of memory), it is recommended that UTIL_HEAP_SZ be one million pages in size. If you have 128 GB of memory, then UTIL_HEAP_SZ should be four million pages in size.
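As a sketch of what that sizing looks like in practice (the database name and page counts here are illustrative, not from my environment), it is a one-line database configuration update:

```shell
# Illustrative values: roughly one million 4K pages for a 64 GB host,
# roughly four million for 128 GB. "sampledb" is a placeholder name.
db2 "update db cfg for sampledb using UTIL_HEAP_SZ 4000000"

# Or let DB2 manage the heap itself, starting from a given value:
db2 "update db cfg for sampledb using UTIL_HEAP_SZ 1000000 AUTOMATIC"
```

Either way, it is worth checking the value with `db2 get db cfg for sampledb` before kicking off a big load.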
Additionally, DB2 needs to scan the input data twice – once for the ANALYZE phase and once for the LOAD phase. For pipes or other sources that can only be scanned once, DB2 will therefore create a copy of the data on the TEMPFILES PATH specified on the LOAD command. This can be time-consuming and use a lot of disk space.
The LOAD phase is also modified for column-organized tables. The following occurs during the LOAD phase:
- Column and page compression dictionaries (existing or built in the ANALYZE phase) are used to compress data
- Compressed values are written to data pages
- The Synopsis table is maintained
- Keys are built for the page map index and any unique indexes
For column-organized tables, the BUILD phase includes building the page map index and any unique indexes. Because this is BLU, no indexes other than Primary Key or Unique Constraint indexes are allowed, so this phase should be faster than in a traditional row-organized scenario.
In addition to its traditional role of deleting any rows that are rejected because of primary key or unique constraints, the DELETE phase for BLU tables includes deleting any temp files that were created so the data could be scanned twice.
Real World Experimentation
I don’t pretend that my tests are exhaustive. I primarily wanted to compare, all other things being equal for a randomly selected table, how the speed of the overall operation and the speed of the individual phases differed when loading into a brand-new column-organized table in a fairly new BLU database with no other activity against it.
The table I’m using has CHAR, DATE, TIMESTAMP, and INTEGER data types, no LOBs, and no primary key or unique constraints. It has 64,371,541 rows in the source database, spread across 4 data partitions on one physical server. I’m dropping and re-creating the table between each test to ensure that I don’t have anything hanging around from the last try.
Load from Pipe
The process for load from pipe is not difficult, but it requires that the target database server can connect to the source database server. It also requires two sessions. The general steps are:
- (session 1) Connect to the source database from the server where the target database is
- (session 1) Make a pipe using the mkfifo command: `mkfifo datapipe1`
- (session 1) Export to the pipe created: `db2 "export to datapipe1 of del messages test.msg select * from schema.table with ur"`
- (session 2) Connect to the target database locally
- (session 2) Load from the pipe created: `db2 "load from datapipe1 of del messages test_load.msg replace into schema.table NONRECOVERABLE"`
- (session 3) Use `db2pd -utilities -repeat 5` to monitor the load progress
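The pipe mechanics themselves are ordinary Unix. As a minimal sketch, with `printf` standing in for the export session and `cat` standing in for the load session (the paths and data are made up):

```shell
# Create a named pipe in a scratch directory
dir=$(mktemp -d)
mkfifo "$dir/datapipe1"

# Writer: stands in for "db2 export to datapipe1 ..." in session 1.
# It blocks until a reader attaches, so it runs in the background.
printf 'row1\nrow2\n' > "$dir/datapipe1" &

# Reader: stands in for "db2 load from datapipe1 ..." in session 2.
# Rows are consumed as they arrive; the full data set never lands on disk.
cat "$dir/datapipe1" > "$dir/loaded.del"
wait
```

This is why the export in session 1 appears to hang until the load in session 2 starts reading.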
The db2pd -utilities command is good for monitoring because you can see progress and the time that each phase starts. If you’re expecting a longer load, you may want to go with a longer refresh interval than 5 seconds.
Here’s the output from the end of my test scenario:
```
Utilities:
Address            ID   Type  State  Invoker  Priority  StartTime            DBName  NumPhases  CurPhase  Description
0x000000020428F2E0 8    LOAD  0      0        0         Wed Jan  7 18:52:27  SAMPLE  4          4         [LOADID: 126.2015-01-07-188.8.131.522240.0 (8;4)] [*LOCAL.db2inst1.NNNNNNNNNNNN] OFFLINE LOAD DEL AUTOMATIC INDEXING REPLACE NON-RECOVERABLE SCHEMA.TABLE1

Progress:
Address            ID   PhaseNum  CompletedWork    TotalWork        StartTime            Description
0x000000020428F668 8    1         0 bytes          0 bytes          Wed Jan  7 18:52:27  SETUP
0x000000020428F820 8    2         64371541 rows    64371541 rows    Wed Jan  7 18:52:27  ANALYZE
0x000000020428F9A8 8    3         64371541 rows    64371541 rows    Wed Jan  7 19:18:54  LOAD
0x000000020428FB30 8    4         2 indexes        2 indexes        Wed Jan  7 19:41:55  BUILD

Database Member 0 -- Active -- Up 1 days 06:37:54 -- Date 2015-01-07-19.42.04.469122
```
The export command finishes at the end of the ANALYZE phase.
Note: I watched the load at a 5-second interval using the -repeat option of db2pd. There were times in the ANALYZE phase when the load seemed to hang, not advancing the number of rows for several minutes at a time, and then it would jump forward. If you see this, don’t panic. Interestingly, I saw similar pauses in the LOAD phase at exactly the same numbers: 15,604,493 rows for some reason, and a couple of other points.
The results for this method: a total export/load time of 50 minutes – the ANALYZE phase took about 26 minutes, the LOAD phase about 23 minutes, and the BUILD phase about a minute. There was no DELETE phase.
Load from Cursor
Load from cursor for a remote source is not as hard as I had expected. There is no need to federate the source database. There are good instructions on how to load from cursor here: http://www.ibm.com/developerworks/data/library/techarticle/dm-0901fechner/
There are essentially three steps once the source database is cataloged on the server of the target database:
- On the target server, connect to the target database
- Declare a cursor against the source database: `db2 "DECLARE C1 CURSOR DATABASE sourcedb user uname using pw for select * from schema.table with ur"`
- Load from the cursor: `db2 "LOAD FROM C1 of CURSOR MESSAGES test_load.msg replace into schema.table nonrecoverable"`
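Scripted end to end, it is only a few lines. This is a sketch: targetdb, sourcedb, uname, pw, and schema.table are placeholders, and the DRYRUN guard is my addition (not part of the load method itself) so the commands can be printed and eyeballed before being pointed at real systems:

```shell
# DRYRUN=1 (the default here) prints each command instead of running it;
# set DRYRUN=0 on a real system with the db2 CLI available.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = 1 ]; then echo "db2 \"$1\""; else db2 "$1"; fi; }

run "CONNECT TO targetdb"
run "DECLARE C1 CURSOR DATABASE sourcedb USER uname USING pw FOR SELECT * FROM schema.table WITH UR"
run "LOAD FROM C1 OF CURSOR MESSAGES test_load.msg REPLACE INTO schema.table NONRECOVERABLE"
run "CONNECT RESET"
```

The important detail is that the DECLARE and the LOAD must run in the same db2 CLP session, since the cursor only exists within that session.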
And as for the results, all I can say is “Whoa, that was fast!”
It looked like this:
```
Utilities:
Address            ID   Type  State  Invoker  Priority  StartTime            DBName  NumPhases  CurPhase  Description
0x000000020373EF80 10   LOAD  0      0        0         Wed Jan  7 20:48:53  SAMPLE  4          4         [LOADID: 335.2015-01-07-20.48.53.091244.0 (8;4)] [*LOCAL.db2inst1.NNNNNNNNNNNN] OFFLINE LOAD CURSOR (DB) AUTOMATIC INDEXING REPLACE NON-RECOVERABLE SCHEMA.TABLE1

Progress:
Address            ID   PhaseNum  CompletedWork    TotalWork        StartTime            Description
0x000000020373F308 10   1         0 bytes          0 bytes          Wed Jan  7 20:48:53  SETUP
0x000000020373F4C0 10   2         64371541 rows    64371541 rows    Wed Jan  7 20:48:59  ANALYZE
0x000000020373F648 10   3         64371541 rows    64371541 rows    Wed Jan  7 20:57:51  LOAD
0x000000020373F7D0 10   4         2 indexes        2 indexes        Wed Jan  7 21:00:21  BUILD

Database Member 0 -- Active -- Up 1 days 07:56:18 -- Date 2015-01-07-21.00.28.254954
```
In this case, the ANALYZE phase took 9 minutes, the LOAD phase took 3 minutes, and the BUILD phase took less than a minute, for a grand total of 12 minutes. That’s a quarter of the time the exact same data and the exact same table took with load from pipe.
Load from File
Finally, I thought I would try the file method as well. In this case, I’ll connect to the source database from the target database, export to a file on the target database server, and then load from that file on the target database server.
Export took 10 minutes:
```
-bash-4.1$ date; db2 "export to schema_table1.del of del messages manual_export_test.msg select * from schema.table1 with ur"; date
Wed Jan  7 21:13:14 PST 2015

Number of rows exported: 64371541

Wed Jan  7 21:23:53 PST 2015
```
The load portion was faster, particularly the ANALYZE phase:
```
Utilities:
Address            ID   Type  State  Invoker  Priority  StartTime            DBName  NumPhases  CurPhase  Description
0x000000020477F100 11   LOAD  0      0        0         Wed Jan  7 21:27:24  SAMPLE  4          3         [LOADID: 329.2015-01-07-21.27.24.015656.0 (8;4)] [*LOCAL.db2inst1.NNNNNNNNNNNN] OFFLINE LOAD DEL AUTOMATIC INDEXING REPLACE NON-RECOVERABLE SCHEMA.TABLE1

Progress:
Address            ID   PhaseNum  CompletedWork    TotalWork        StartTime            Description
0x000000020477F488 11   1         0 bytes          0 bytes          Wed Jan  7 21:27:24  SETUP
0x000000020477F640 11   2         64371541 rows    64371541 rows    Wed Jan  7 21:27:24  ANALYZE
0x000000020477F7C8 11   3         63865512 rows    63865512 rows    Wed Jan  7 21:28:04  LOAD
0x000000020477F950 11   4         0 indexes        2 indexes        NotStarted           BUILD

Database Member 0 -- Active -- Up 1 days 08:26:30 -- Date 2015-01-07-184.108.40.2069176
```
The ANALYZE phase took about 1.5 minutes, the LOAD phase took about 2 minutes, and the BUILD phase took less than 5 seconds. That’s a total of about thirteen minutes, including the export.
In this case, load from file was faster than load from pipe and slightly slower than load from cursor.
Note that the count of indexes that LOAD reports is 2, yet I have no explicitly created indexes (no primary keys or unique constraints). One of the indexes it is building is the page map index for the table. I’m not sure what the other index is, but in my work it consistently lists two indexes more than I have explicitly defined.
I wanted to make sure that there wasn’t some factor I wasn’t accounting for making the extreme difference for load from pipe. I thought maybe it was pulling pages into the bufferpool on the source server or something, so I tried a different table with 97,110,306 rows and a primary key. In this case, I did the export-to-file method first and the pipe method second to compare.
For the export to file, it took 17 minutes to export – this is connecting from the target server to the source server, so that includes network time. In the output below, you can see that the load itself took 2 minutes in the ANALYZE phase, 10 minutes in the LOAD phase, and 2 minutes in the BUILD phase, for a total of 14 minutes.
```
Utilities:
Address            ID   Type  State  Invoker  Priority  StartTime            DBName  NumPhases  CurPhase  Description
0x00000002036CEE00 12   LOAD  0      0        0         Thu Jan  8 06:05:25  SAMPLE  4          4         [LOADID: 482.2015-01-08-06.05.25.336041.0 (20;4)] [*LOCAL.db2inst1.NNNNNNNNNNNN] OFFLINE LOAD DEL AUTOMATIC INDEXING REPLACE NON-RECOVERABLE DBA     .TEMP_TABLE2

Progress:
Address            ID   PhaseNum  CompletedWork    TotalWork        StartTime            Description
0x00000002036CF188 12   1         0 bytes          0 bytes          Thu Jan  8 06:05:25  SETUP
0x00000002036CF340 12   2         97110306 rows    97110306 rows    Thu Jan  8 06:05:25  ANALYZE
0x00000002036CF4C8 12   3         97110306 rows    97110306 rows    Thu Jan  8 06:07:32  LOAD
0x00000002036CF650 12   4         0 indexes        3 indexes        Thu Jan  8 06:17:12  BUILD

Database Member 0 -- Active -- Up 1 days 17:14:58 -- Date 2015-01-08-06.19.08.353244
```
The load using the pipe is below. The ANALYZE phase took 28 minutes, the LOAD phase took 13 minutes, and the BUILD phase took about a minute, for a total of 42 minutes – about twice as long, which is consistent with what I saw with the previous table in the previous experiment.
```
Utilities:
Address            ID   Type  State  Invoker  Priority  StartTime            DBName  NumPhases  CurPhase  Description
0x00000002023DF5E0 13   LOAD  0      0        0         Thu Jan  8 06:30:53  SAMPLE  4          4         [LOADID: 482.2015-01-08-06.30.53.492253.0 (10;18)] [*LOCAL.db2inst1.NNNNNNNNNNNN] OFFLINE LOAD DEL AUTOMATIC INDEXING REPLACE NON-RECOVERABLE DBA     .TEMP_TABLE2

Progress:
Address            ID   PhaseNum  CompletedWork    TotalWork        StartTime            Description
0x00000002023DF968 13   1         0 bytes          0 bytes          Thu Jan  8 06:30:53  SETUP
0x00000002023DFB20 13   2         97110306 rows    97110306 rows    Thu Jan  8 06:30:53  ANALYZE
0x00000002023DFCA8 13   3         97110306 rows    97110306 rows    Thu Jan  8 06:58:13  LOAD
0x00000002023DFE30 13   4         3 indexes        3 indexes        Thu Jan  8 07:07:00  BUILD
```
Be careful not to over-extrapolate from my little test cases. So many factors can make a difference, and different strategies may work better in different scenarios. Overall, I plan to use load from cursor the most. It is the easiest, it is the least likely to end up connected to the wrong database by accident, and it’s easy to quickly script. I do wonder whether, for particularly large tables, load from file might be the best.