We DBAs are lazy in some ways. Our favorite way of moving large amounts of data is the backup utility. It’s so easy to move data, structure, objects, permissions, etc. But as quick, easy, and efficient as it is, it cannot handle some situations – like skip-level upgrades, data movement between different operating systems, and of course between different RDBMSes.
There are other data movement options, but I thought I’d share what I’ve learned recently about the IBM Data Movement Tool.
What the IBM Data Movement Tool is
It looks to me as though this tool was originally created by IBM to make it easy to move data from competitors’ RDBMSes into DB2. My guess is that someone co-opted it for another scenario, and its uses expanded from there. Essentially, the tool moves definitions and data from one database to another. It’s a Java-based tool with both command-line and GUI interfaces, and it runs on Windows, UNIX, and Linux.
I don’t care much for GUIs, and to go with that, my X-fu isn’t all that great. So I used this tool from the command line.
First, you have to download the tool, as described at: http://www.ibm.com/developerworks/data/library/techarticle/dm-0906datamovement/#download. It’s basically a zip file. Uncompress it and move the files to the server where you want to run it. For best results, that should be the target database server itself.
After you have the files in a dedicated directory, you run the setup utility using this syntax:
If you’ve got X-windows all set up, it’ll pop up the GUI version; otherwise, it enters the console version. The console (command-line) version asks a bunch of questions. If you get one wrong, you can simply re-execute it to regenerate the values. The questions it asks look like this (it’s long – scroll to see it all):
./IBMDataMovementTool.sh
Switching to Console Mode of operation
Specify output directory (Default=/db2home/db2bcup/IBMDataMovementTool/migr) : /db2backup/db2bcup/pureApp_move/IBMDataMovementTool/migr
You entered '/db2backup/db2bcup/pureApp_move/IBMDataMovementTool/migr'
Press 1 to re-enter or hit enter to continue :
******* Target database name: *****
DB2 LUW with compatibility : 1
DB2 LUW : 2
Netezza : 3
DB2 on z/OS : 4
Enter a number (Default 2) : 2
Your Target database is 'db2luw'
Specify LOAD directory (Default=/db2backup/db2inst1/IBMDataMovementTool/migr) :
Debug (Yes) : 1
Debug (No) : 2
Enter a number (Default=2) :
Include NOROWWARNINGS in LOAD : 1
Do not add NOROWWARNINGS in LOAD : 2
Enter a number (Default=2) :
Specify TEMPFILES Path for LOAD :
IS TARGET DB2 LOCAL : 1
IS TARGET DB2 REMOTE : 2
Enter a number (Default=1) : 1
DUMP BAD DATA IN EXCEPTION TABLES (Yes) : 1
DUMP BAD DATA IN EXCEPTION TABLES (No) : 2
Enter a number (Default=1) :
Enter exception schema suffix. (Default=_EXP) :
Enter exception table suffix. (Default=ALL) :
Extract DDL (Yes) : 1
Extract DDL (No) : 2
Enter a number (Default=2) : 2
Extract Data (Yes) : 1
Extract Data (No) : 2
Enter a number (Default=1) : 1
Extract Objects (Yes) : 1
Extract Objects (No) : 2
Enter a number (Default=2) : 2
Enter # of rows limit to extract. (Default=ALL) :
Enter # of rows limit to load data. (Default=ALL) :
Number of rows to check for length comparison for DataCheck program (Default=100) :
# of rows extracted for which elapsed time is displayed (Default=0) :
Number of work directories to extract data (Default=1) :
Number of threads to extract data (Default=5) :
Migrate clustered indexes in DB2 (Yes) : 1
Migrate clustered indexes in DB2 (No) : 2
Enter a number (Default=2) :
Compress Table in DB2 (No) : 1
Compress Table in DB2 (YES) : 2
Enter a number (Default=2) : 1
Compress Index in DB2 (No) : 1
Compress Index in DB2 (YES) : 2
Enter a number (Default=2) : 1
******* Source database information: *****
Oracle : 1
MS SQL Server : 2
Sybase : 3
MS Access Database : 4
MySQL : 5
PostgreSQL : 6
DB2 z/OS : 7
DB2 iSeries : 8
DB2 LUW : 9
Teradata : 10
Informix : 11
Netezza : 12
Enter a number (Default 8) : 9
Convert tab/col names to uppercase : 1
Retain tab/col names case sensitivity : 2
Enter a number (Default=2) :
Convert time to timestamp (No) : 1
Convert time to timestamp (Yes) : 2
Enter a number (Default=2) :
Enter encoding. (Default=UTF-8) :
Enter SQL Scripts file encoding. (Default=UTF-8) :
Number of data bytes to write in buffer before flushing to disk (Default=36000) :
VARCHAR size beyond which you want VARCHAR to go as CLOB (Default=32768) :
Generate Statistics during DB2 LOAD (Yes) : 1
Generate Statistics during DB2 LOAD (No) : 2
Enter a number (Default=1) :
Enter source database Host Name or IP Address (Default=000.000.000.000) :
Enter 000.000.000.000's port number (Default=50001) :
Enter source Database name (Default=SAMPLE) :
Enter User ID of source database (Default=dbuser) :
Enter source database Passsword (Default=dbpass) :
Verifying locations of db2luw's JDBC drivers
JAR file '/db2home/db2bcup/sqllib/java/db2jcc.jar' found
JAR file '/db2home/db2bcup/sqllib/java/db2jcc_license_cu.jar' found
Now we will try to connect to the db2luw to extract the schema names.
Do you want to continue (Yes) : 1
Quit program : 2
Enter a number (Default=1) : 1
[2015-01-02 18.51.05.249] Driver com.ibm.db2.jcc.DB2Driver loaded
[2015-01-02 18.51.05.466] Database Product Name :DB2/LINUXX8664
[2015-01-02 18.51.05.466] Database Product Version :SQL09051
[2015-01-02 18.51.05.467] JDBC driver IBM Data Server Driver for JDBC and SQLJ Version = 3.69.24
[2015-01-02 18.51.05.467] Database Major Version :9
[2015-01-02 18.51.05.467] Database Minor Version :5
[2015-01-02 18.51.05.574] DB2 instance name is db2inst1
[2015-01-02 18.51.05.642] DB2 Compatibility params varchar2_compat= date_compat= number_compat= param04=
[2015-01-02 18.51.05.945] *** WARNING ***. The DB2_COMPATIBILITY_VECTOR is not set.
[2015-01-02 18.51.05.945] To set compatibility mode, discontinue this program and run the following commands
[2015-01-02 18.51.05.946] For Oracle:
[2015-01-02 18.51.05.946] db2set DB2_COMPATIBILITY_VECTOR=ORA
[2015-01-02 18.51.05.946] For Sybase:
[2015-01-02 18.51.05.946] db2set DB2_COMPATIBILITY_VECTOR=SYB
[2015-01-02 18.51.05.947] db2stop force
[2015-01-02 18.51.05.947] db2start
[2015-01-02 18.51.05.947] DB2 Compatibility Vector=null
[2015-01-02 18.51.05.951] Driver com.ibm.db2.jcc.DB2Driver loaded
[2015-01-02 18.51.05.960] Database Product Name :DB2/LINUXX8664
[2015-01-02 18.51.05.960] Database Product Version :SQL09051
[2015-01-02 18.51.05.960] JDBC driver IBM Data Server Driver for JDBC and SQLJ Version = 3.69.24
[2015-01-02 18.51.05.961] Database Major Version :9
[2015-01-02 18.51.05.961] Database Minor Version :5
db2luw's schema List extracted=SCHEMA1:SCHEMA2:SCHEMA3:SCHEMAN:
Your selected schema List =SCHEMA1
Do you want to use 'vdba' as schema : 1
Or the extracted List : 2
Or your selected list : 3
Enter a number (Default=1) : 3
Select your schema names (separator is :) (Default=CIS_ODS) :
[2015-01-02 18.104.22.168] CustDataMapPropFile 0 values loaded from the properties file /db2backup/db2inst1/IBMDataMovementTool/migr/CustomDataMapping.properties
[2015-01-02 22.214.171.124] CustNullPropFile 0 values loaded from the properties file /db2backup/db2inst1/IBMDataMovementTool/migr/CustomNullMapping.properties
[2015-01-02 126.96.36.199] CustColNamePropFile 0 values loaded from the properties file /db2backup/db2inst1/IBMDataMovementTool/migr/CustomColNameMapping.properties
[2015-01-02 188.8.131.52] Driver com.ibm.db2.jcc.DB2Driver loaded
[2015-01-02 184.108.40.206] Database Product Name :DB2/LINUXX8664
[2015-01-02 220.127.116.11] Database Product Version :SQL09051
[2015-01-02 18.104.22.168] JDBC driver IBM Data Server Driver for JDBC and SQLJ Version = 3.69.24
[2015-01-02 22.214.171.124] Database Major Version :9
[2015-01-02 126.96.36.199] Database Minor Version :5
[2015-01-02 188.8.131.52] CreateTableScript Database Major Version :9
[2015-01-02 184.108.40.206] #### Comments begin with the # sign at the beginning of the line at col 1.
#### This file is an input to the unload command. The format of this file is:
#### . :
#### "DMT"."EMP":SELECT * FROM "DMPTEST"."EMP"
#### The DMPTEST.EMP table will migrate to DMT.EMP table
#### Edit this file manually to change the target schema name.
#### or, You can edit dstSchemaName field in /db2backup/db2bcup/pureApp_move/IBMDataMovementTool/migr/IDMTConfig.properties file to map source schema
#### to destination schema name and run geninput command again to generate this file.
"SCHEMA1"."TABLE1":SELECT * FROM "SCHEMA1"."TABLE1"
"SCHEMA1"."TABLE2":SELECT * FROM "SCHEMA1"."TABLE2"
[2015-01-02 220.127.116.11] Table list is saved in /db2backup/db2inst1/IBMDataMovementTool/migr/BCUDB.tables
[2015-01-02 18.104.22.168] CustTableSpaceMapPropFile 0 values loaded from the properties file /db2backup/db2inst1/IBMDataMovementTool/migr/CustomTableSpaceBufferPoolMapping.properties
You can remove tables that you do not want to migrate by editing SAMPLE.tables file
******* Target database information: *****
Do you have DB2 installed (Yes) : 1
Do you have DB2 installed (No) : 2
Enter a number (Default=1) : 1
Enter db2 database Host Name or IP Address (Default=localhost) :
Enter db2 Port Number (Default=50001) :
Enter db2 Database name (Default=SAMPLE) :
Enter db2 database User ID (Default=dbuser) :
Enter db2 database Passsword (Default=dbpass) :
Verifying locations of db2luw's JDBC drivers
JAR file '/db2home/db2bcup/sqllib/java/db2jcc.jar' found
JAR file '/db2home/db2bcup/sqllib/java/db2jcc_license_cu.jar' found
[2015-01-02 22.214.171.1249] Driver com.ibm.db2.jcc.DB2Driver loaded
[2015-01-02 126.96.36.1995] Database Product Name :DB2/LINUXX8664
[2015-01-02 188.8.131.526] Database Product Version :SQL10055
[2015-01-02 184.108.40.2066] JDBC driver IBM Data Server Driver for JDBC and SQLJ Version = 3.69.24
[2015-01-02 220.127.116.116] Database Major Version :10
[2015-01-02 18.104.22.1686] Database Minor Version :5
[2015-01-02 22.214.171.1240] DB2 instance name is db2inst1
[2015-01-02 126.96.36.1996] DB2 Compatibility params varchar2_compat=OFF date_compat=OFF number_compat=OFF param04=ROUND_HALF_EVEN
[2015-01-02 188.8.131.520] *** WARNING ***. The DB2_COMPATIBILITY_VECTOR is not set.
[2015-01-02 184.108.40.2060] To set compatibility mode, discontinue this program and run the following commands
[2015-01-02 220.127.116.110] For Oracle:
[2015-01-02 18.104.22.1680] db2set DB2_COMPATIBILITY_VECTOR=ORA
[2015-01-02 22.214.171.1241] For Sybase:
[2015-01-02 126.96.36.1991] db2set DB2_COMPATIBILITY_VECTOR=SYB
[2015-01-02 188.8.131.521] db2stop force
[2015-01-02 184.108.40.2061] db2start
[2015-01-02 220.127.116.111] DB2 Compatibility Vector=null
[2015-01-02 18.104.22.1680] All input parameters are saved in '/db2backup/db2inst1/IBMDataMovementTool/migr/IDMTConfig.properties' file.
[2015-01-02 22.214.171.1246] Command File /db2backup/db2inst1/IBMDataMovementTool/migr/geninput created.
[2015-01-02 126.96.36.1998] Command file /db2backup/db2inst1/IBMDataMovementTool/migr/unload created.
[2015-01-02 188.8.131.529] Command file /db2backup/db2inst1/IBMDataMovementTool/migr/rowcount created.
[2015-01-02 184.108.40.2060] Command file /db2backup/db2inst1/IBMDataMovementTool/migr/rowexceptcount created.
[2015-01-02 220.127.116.112] Command file /db2backup/db2inst1/IBMDataMovementTool/migr/fixcheck created.
[2015-01-02 18.104.22.1683] Command file /db2backup/db2inst1/IBMDataMovementTool/migr/datacheck created.
Input Data collection is complete now.
Change directory to :/db2backup/db2inst1/IBMDataMovementTool/migr
Run ./unload to extract data from db2luw
After completion of 'unload' command, run
if only DDL was selected : ./db2ddl.sh
if only Data was selected : ./db2load.sh
if both DDL and Data were was selected: ./db2gen.sh
Instructions for using pipe or syncLoad between db2luw and db2luw
Open unload.cmd in an editor.
Set GENDDL=TRUE, UNLOAD=FALSE, OBJECTS=TRUE, USEPIPE=FALSE, SYNCLOAD=FALSE and save the file
Run ./unload to extract and run ./db2ddl.sh to create all obejcts
Open unload.cmd in an editor.
Set GENDDL=FALSE, UNLOAD=TRUE, OBJECTS=FALSE, USEPIPE=TRUE, SYNCLOAD=FALSE and save the file
Run ./unload
It will extract and deploy data simultaneously
*** Thank you for using IBM Data Movement Tool. ****
Phew, that’s a long process. Some of the defaults it offers me are there because I ran this step multiple times as I figured things out and tried things – it mostly saves your settings from the last time you ran it. Note the instructions at the end for using the files generated. It tells you to open unload.cmd in an editor – that’s for Windows. On Linux and UNIX systems, you just vi unload and make your changes there.
What that configuration process does is create some directories and files – in this case, /db2backup/db2inst1/IBMDataMovementTool/migr and its subdirectories and scripts. My directory looked something like this:
ls -latr |grep -v loadlog |grep -v cis_
total 191436
drwxr-xr-x 2 db2inst1 redacted      4096 Jan  2 18:34 logs
-rw-r--r-- 1 db2inst1 redacted     44087 Jan  2 18:34 CustomTableSpaceBufferPoolMapping.properties
-rw-r--r-- 1 db2inst1 redacted       833 Jan  2 18:34 CustomNullMapping.properties
-rw-r--r-- 1 db2inst1 redacted      1136 Jan  2 18:34 CustomDataMapping.properties
-rw-r--r-- 1 db2inst1 redacted       830 Jan  2 18:34 CustomColNameMapping.properties
-rw-r--r-- 1 db2inst1 redacted       759 Jan  2 18:46 db2tabstatus.sql
-rw-r--r-- 1 db2inst1 redacted       512 Jan  2 18:46 db2runstats.sql
-rw-r--r-- 1 db2inst1 redacted       676 Jan  2 18:46 db2loadterminate.sql
-rw-r--r-- 1 db2inst1 redacted      1492 Jan  2 18:46 db2load.sql
-rw-r--r-- 1 db2inst1 redacted       440 Jan  2 18:46 db2checkpending.sql
-rw-r--r-- 1 db2inst1 redacted       552 Jan  2 18:46 db2tabcount.sql
-rwxr-xr-x 1 db2inst1 redacted      1373 Jan  2 18:46 db2dropobjects.sh
-rwxr-xr-x 1 db2inst1 redacted       749 Jan  2 18:46 db2checkRemoval.sh
-rwxr-xr-x 1 db2inst1 redacted      1156 Jan  2 18:46 db2load.sh
-rwxr-xr-x 1 db2inst1 redacted       771 Jan  2 18:51 rowexceptcount
drwxr-xr-x 2 db2inst1 redacted     24576 Jan  2 18:54 data
drwxr-xr-x 2 db2inst1 redacted      4096 Jan  2 19:29 msg
drwxr-xr-x 2 db2inst1 redacted      4096 Jan  2 19:29 dump
drwxr-xr-x 3 db2inst1 redacted      4096 Jan  2 19:35 ..
-rw-r--r-- 1 db2inst1 redacted     13865 Jan  2 19:37 SAMPLE.tables
-rw-r--r-- 1 db2inst1 redacted      8815 Jan  2 19:37 IDMTConfig.properties
-rwxr-xr-x 1 db2inst1 redacted      2384 Jan  2 19:37 geninput
-rwxr-xr-x 1 db2inst1 redacted      1418 Jan  2 19:37 unload
-rwxr-xr-x 1 db2inst1 redacted       788 Jan  2 19:37 rowcount
-rwxr-xr-x 1 db2inst1 redacted      1252 Jan  2 19:37 fixcheck
-rwxr-xr-x 1 db2inst1 redacted       796 Jan  2 19:37 datacheck
-rw-r--r-- 1 db2inst1 redacted 210413015 Jan  2 20:02 IBMDataMovementToolError.log
drwxr-xr-x 6 db2inst1 redacted      4096 Jan  2 20:19 .
-rw-r--r-- 1 db2inst1 redacted     13687 Jan  2 20:40 IBMDataMovementTool.log
A couple of important files there:
SAMPLE.tables – the “SAMPLE” here is the database name. This file contains all of the table names; you can remove individual tables or change the SELECT statement for a table here. The tool generates everything for a schema or list of schemas, and then you can remove tables or change the SQL.
IDMTConfig.properties – this file holds many of the details you specified at the command line/console. You can change values here.
unload – the unload script is the important one if you’re doing load from pipe; it does all the work for you.
I think that if you’re using pipes, you have to set usePipe=true in the IDMTConfig.properties file and then regenerate the scripts by executing ./IBMDataMovementTool.sh again. That, or edit unload directly.
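For reference, the relevant switches look something like this in the generated IDMTConfig.properties. The key names here are taken from the tool’s own startup output; the exact contents vary by version, so check your generated file rather than copying this verbatim:

```
# excerpt -- a data-only run, loading through pipes
ddlGen=false
dataUnload=true
extractObjects=false
usePipe=true
syncLoad=false
```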
Be careful with the number of processes you allow – the numThreads variable in the IDMTConfig.properties file. I used 3, and it completely killed my 8-CPU target server – CPU pegged at 95% utilization. In my case, it’s a brand new server, and the highest priority on it is to get this data loaded, so that’s how I like it. But you’d want to be careful on a database server that’s being used in other ways.
Also, before you move on to execution, make sure you have a backup of your target database in case you need to stop the process and re-run it. I would not like to run this process into an existing database without testing it thoroughly before doing it in production.
Now, after you’ve got the configuration done, you have to actually execute the scripts generated. If using pipes, you have to first create the objects (which you can do either manually or using a different execution of this tool). In my case, I created them manually since I was going from row-organized DPF to column-organized single-partition tables.
Then it’s a simple case of executing the unload script. Remember to put it in the background and make sure disconnecting won’t cause a problem, because if you’re using it for a large amount of data, it will run for hours or days.
nohup ./unload &
While it’s running, it writes status to stdout. It also generates .sql files and .loadlog files that you can use to follow its progress. Stdout looks like this:
[2015-01-02 19.45.09.984] Database Product Name :DB2/LINUXX8664
[2015-01-02 19.45.09.984] Database Product Version :SQL09051
[2015-01-02 19.45.09.984] JDBC driver IBM Data Server Driver for JDBC and SQLJ Version = 3.69.24
[2015-01-02 19.45.09.984] Database Major Version :9
[2015-01-02 19.45.09.984] Database Minor Version :5
[2015-01-02 19.45.09.992] Version = 9.5
[2015-01-02 19.45.09.992] Starting Blades
[2015-01-02 19.45.09.994] Starting Blade_1 and building memory map.
[2015-01-02 19.45.09.996] Starting Blade_2 and building memory map.
[2015-01-02 19.45.10.000] Starting Blade_0 and building memory map.
[2015-01-02 19.45.14.049] Waiting for DB2 Load thread to connect to the pipe ./data/SCHEMA1_table1.pipe
[2015-01-02 19.45.14.049] DB2 Load  Connected to the pipe ./data/SCHEMA1_table1.pipe
[2015-01-02 19.45.14.057] Waiting for DB2 Load thread to connect to the pipe ./data/SCHEMA1_table2.pipe
[2015-01-02 19.45.14.057] DB2 Load  Connected to the pipe ./data/SCHEMA1_table2.pipe
[2015-01-02 19.45.14.073] Waiting for DB2 Load thread to connect to the pipe ./data/SCHEMA1_table3.pipe
[2015-01-02 19.45.14.073] DB2 Load  Connected to the pipe ./data/SCHEMA1_table3.pipe
[2015-01-02 22.214.171.1248] Blocking inner load thread for "SCHEMA1"."table1"
[2015-01-02 126.96.36.1994] Blocking inner load thread for "SCHEMA1"."table3"
[2015-01-02 188.8.131.523] Released inner load thread for "SCHEMA1"."table1"
[2015-01-02 184.108.40.2067] Blade_1 unloaded 42171 rows in 45.705 sec SCHEMA1.table1 [1/204:1]
[2015-01-02 220.127.116.110] Waiting for DB2 Load thread to connect to the pipe ./data/SCHEMA1_table4.pipe
[2015-01-02 18.104.22.1680] DB2 Load  Connected to the pipe ./data/SCHEMA1_table4.pipe
[2015-01-02 22.214.171.1248] Released inner load thread for "SCHEMA1"."table3"
[2015-01-02 126.96.36.1998] Blade_2 unloaded 58141 rows in 1 mins 0.626 sec SCHEMA1.table3 [2/204:1]
[2015-01-02 188.8.131.525] Waiting for DB2 Load thread to connect to the pipe ./data/SCHEMA1_table5.pipe
[2015-01-02 184.108.40.2065] DB2 Load  Connected to the pipe ./data/SCHEMA1_table5.pipe
[2015-01-02 220.127.116.116] Blocking inner load thread for "SCHEMA1"."table5"
[2015-01-02 18.104.22.168] Blocking inner load thread for "SCHEMA1"."table4"
[2015-01-02 22.214.171.1247] Blocking inner load thread for "SCHEMA1"."table2"
[2015-01-02 20.00.45.001] Released inner load thread for "SCHEMA1"."table5"
[2015-01-02 20.00.45.006] Blade_2 unloaded 642468 rows in 14 min 34.378 sec SCHEMA1.table5 [3/204:1]
[2015-01-02 20.00.49.009] Waiting for DB2 Load thread to connect to the pipe ./data/SCHEMA1_table6.pipe
[2015-01-02 20.00.49.010] DB2 Load  Connected to the pipe ./data/SCHEMA1_table6.pipe
[2015-01-02 20.00.49.576] Blocking inner load thread for "SCHEMA1"."table6"
[2015-01-02 20.00.50.152] Released inner load thread for "SCHEMA1"."table6"
[2015-01-02 20.00.50.155] Blade_2 unloaded 4183 rows in 5.149 sec SCHEMA1.table6 [4/204:1]
[2015-01-02 20.00.54.161] Waiting for DB2 Load thread to connect to the pipe ./data/SCHEMA1_table7.pipe
[2015-01-02 20.00.54.161] DB2 Load  Connected to the pipe ./data/SCHEMA1_table7.pipe
[2015-01-02 20.02.08.469] Released inner load thread for "SCHEMA1"."table2"
[2015-01-02 20.02.08.472] Blade_0 unloaded 1911852 rows in 16 min 58.466 sec SCHEMA1.table2 [5/204:1]
[2015-01-02 20.02.12.480] Waiting for DB2 Load thread to connect to the pipe ./data/SCHEMA1_table8.pipe
[2015-01-02 20.02.12.481] DB2 Load  Connected to the pipe ./data/SCHEMA1_table8.pipe
[2015-01-02 20.34.02.734] Blocking inner load thread for "SCHEMA1"."table8"
Now it says for each table how much time it took, but for pipes, the text is misleading. It says something like:
[2015-01-02 20.02.08.472] Blade_0 unloaded 1911852 rows in 16 min 58.466 sec SCHEMA1.table2 [5/204:1]
Though it says “unloaded”, if you’re using load from pipe it is also loading during that time, so this is the combined export and load time.
I also watched the load on my source database while running, and while it was heavy, it was nowhere near what the target database sees. My older server was about 10-30% taxed, with older, slower cores and other activity going on. Your mileage may vary on that, and it is certainly something to watch while you’re testing on your own environment.
Like many tools, this is far from perfect, and if you need something more robust, you may want to consider HPU (High Performance Unload). But if you’re looking for free, then this tool works perfectly well, and does more than a reasonable DBA can quickly script when it comes to loading from pipe.
Side note: for a variety of reasons, load from pipe is not the best choice for column organized tables. A blog post is coming on that soon.
Locks on Source DB
By default, locks are acquired on the source database – they’re only share locks, but locks nonetheless. If the utility is cancelled partway through, these locks may not be released; the connections must be forced off to release them.
A better solution is to alter the SAMPLE.tables file and add “WITH UR” – if it meets your needs – at the end of each SELECT statement.
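If you have a lot of tables, a quick sketch of doing that in bulk. This assumes the "SCHEMA"."TABLE":SELECT … format shown in the transcript above, with comment lines starting with # – verify against your own .tables file first:

```shell
# Back up the table list, then append WITH UR to every non-comment,
# non-blank line. The file name SAMPLE.tables is an example.
cp SAMPLE.tables SAMPLE.tables.bak
sed -i '/^#/!{/[^[:space:]]/s/$/ WITH UR/;}' SAMPLE.tables
```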
If the unload just seems to hang without progress, look at the .loadlog files it creates. These may list error messages, such as permissions issues, that are not passed up to the other output and error files.
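A quick way to triage those files. The glob and the SQLnnnnN message format here are assumptions – adjust the path to wherever your migr directory writes the loadlogs:

```shell
# List any .loadlog files containing a DB2 message code (SQLnnnnN/W/C).
grep -l 'SQL[0-9]\{4,5\}[NWC]' ./*.loadlog 2>/dev/null || echo "no errors found in loadlogs"
```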
If you cancel out in the middle of a load, the IBM Data Movement Tool should create a script called db2fixtables.db2 to get target tables out of load pending state. You have to execute this file manually. After canceling out, make sure that your tables are not in load pending.
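One way to check, sketched here on the assumption that the SYSIBMADM administrative views are enabled in your database:

```
-- List tables with a load still pending or in progress.
-- ADMINTABINFO can be expensive against a large catalog.
SELECT TABSCHEMA, TABNAME, LOAD_STATUS
  FROM SYSIBMADM.ADMINTABINFO
 WHERE LOAD_STATUS IS NOT NULL;
```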
Also, ensure that you force off any remaining connections on the source database. They are relatively easy to identify by their application name, which starts with ‘IDMT’.
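A CLP sketch for finding and forcing them – the database name and application handle below are placeholders; use the handles you actually see listed:

```
db2 list applications for database SAMPLE
# look for application names starting with IDMT, note their handles, then:
db2 "force application ( 12345 )"
```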
Comparing Load from Pipe with the IBM Data Movement Tool and Without
Something I noticed fairly consistently was that load from pipe was much slower when done via the IBM Data Movement Tool than the exact same load from pipe done without it. I thought I was just mistaken, so I ran the exact same load from pipe with and without the tool. Here it is using the IBM Data Movement Tool:
Database Member 0 -- Active -- Up 1 days 21:15:42 -- Date 2015-01-08-10.19.52.283825

Utilities:
Address            ID Type State Invoker Priority StartTime          DBName NumPhases CurPhase Description
0x000000020468EEC0 17 LOAD 0     0       0        Thu Jan 8 08:06:21 BCUDB  4         4        [LOADID: 642.2015-01-08-08.06.21.057529.0 (10;4)] [*LOCAL.db2bcup.NNNNNNNNNNNN] OFFLINE LOAD DEL AUTOMATIC INDEXING REPLACE NON-RECOVERABLE SCHEMA.TABLE1

Progress:
Address            ID PhaseNum CompletedWork TotalWork     StartTime          Description
0x000000020468F248 17 1        0 bytes       0 bytes       Thu Jan 8 08:06:21 SETUPo
0x000000020468F400 17 2        97110306 rows 97110306 rows Thu Jan 8 08:06:21 ANALYZEl
0x000000020468F588 17 3        97110306 rows 97110306 rows Thu Jan 8 10:10:02 LOADm
0x000000020468F710 17 4        3 indexes     3 indexes     Thu Jan 8 10:18:45 BUILD

Database Member 0 -- Active -- Up 1 days 21:15:47 -- Date 2015-01-08-10.19.57.298686
In this test, going by the phase start times, the ANALYZE phase took about 2 hours and 4 minutes (08:06:21 to 10:10:02), the LOAD phase took about 8 minutes, and the BUILD phase took about a minute.
And here it is done manually – same table, same network, same servers:
Utilities:
Address            ID Type State Invoker Priority StartTime          DBName NumPhases CurPhase Description
0x00000002023DF5E0 13 LOAD 0     0       0        Thu Jan 8 06:30:53 BCUDB  4         4        [LOADID: 482.2015-01-08-06.30.53.492253.0 (10;18)] [*LOCAL.db2bcup.NNNNNNNNNNNN] OFFLINE LOAD DEL AUTOMATIC INDEXING REPLACE NON-RECOVERABLE DBA .TEMP_TABLE1

Progress:
Address            ID PhaseNum CompletedWork TotalWork     StartTime          Description
0x00000002023DF968 13 1        0 bytes       0 bytes       Thu Jan 8 06:30:53 SETUPo
0x00000002023DFB20 13 2        97110306 rows 97110306 rows Thu Jan 8 06:30:53 ANALYZEl
0x00000002023DFCA8 13 3        97110306 rows 97110306 rows Thu Jan 8 06:58:13 LOADm
0x00000002023DFE30 13 4        3 indexes     3 indexes     Thu Jan 8 07:07:00 BUILD
Here the ANALYZE phase took about 28 minutes, the LOAD phase took 9 minutes, and the BUILD phase was about a minute. I used different target table names, but everything else was the same – same script used to create the table, everything. And yet the ANALYZE phase was dramatically faster when done manually than with the IBM Data Movement Tool. And no, it cannot be the bufferpool being populated on the source server, because I did the faster one first.
I cannot explain the difference, but I noticed it consistently.
Don’t use load from pipe – use load from cursor instead, or even load from files. And even if you do use load from pipe, don’t use the IBM Data Movement Tool to do it. If you have to handle a number of tables at once, though, the IBM Data Movement Tool might be something to test and try. If you choose instead to load from cursor, it is much easier to script for yourself, and you may not need such a tool at all. I imagine the benefits would be clearer when the source database is not DB2 LUW.
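For comparison, a minimal load-from-cursor sketch – the database alias, credentials, and table names are placeholders, and REPLACE/NONRECOVERABLE simply mirrors the style of LOAD the tool itself was generating in the output above:

```
-- Run while connected to the target database; the cursor fetches
-- from the (remote) source database alias.
DECLARE mycurs CURSOR DATABASE SRCDB USER dbuser USING dbpass
    FOR SELECT * FROM SCHEMA1.TABLE1;
LOAD FROM mycurs OF CURSOR REPLACE INTO SCHEMA1.TABLE1 NONRECOVERABLE;
```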
Just one note: when I tried to migrate an Oracle DB to DB2 last December, I found to my surprise that the Data Movement Tool had already been obsolete for two years.
IBM currently recommends installing Data Studio 4.1.1 with the Database Conversion Workbench (DCW) 3.1.1 as a plug-in, so you can migrate Oracle / SQL Server / DB2 to DB2. I have installed it but haven’t done any migration yet; I will start next week, hopefully. I don’t know what the result will be, because I was used to DMT.
Oh, one more thing: DCW works only if the target database is DB2 10.5. For older target DB2 versions, keep on using DMT.
I plan to carry out a database migration from Oracle to DB2 using the Database Conversion Workbench (DCW) utility. I was wondering whether the constraints (primary key, foreign key) are automatically checked for consistency (integrity) following the data loading step. In other words, does the DCW utility provide an automatic database consistency check? Or is it required to use the command SET INTEGRITY OFF before starting the data transfer and then the command SET INTEGRITY IMMEDIATE CHECKED after migration in order to check primary and foreign key constraint consistency?
Also, could you please elaborate on the script “db2checkRemoval.sh” that is generated after data loading? Does it have anything to do with automatic operation of the SET INTEGRITY command?
An early response will be highly appreciated.
Thanks in advance!
I haven’t used DCW much, so I don’t have answers. I would recommend testing with a test table or two. Generally, if anything uses LOAD statements, it will have to run SET INTEGRITY statements afterwards. If it instead uses IMPORT, SET INTEGRITY is not needed, since the integrity checks are done during the process. When LOADing, the SET INTEGRITY steps may take as long as or longer than the LOAD step. Setting integrity off has very limited uses and is not a normal part of most data transfer plans; it is more often used when adding a generated column or something like that.
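For what it’s worth, the post-LOAD step I mean looks something like this – schema and table names here are placeholders:

```
-- After a LOAD, bring dependent tables out of set integrity pending:
SET INTEGRITY FOR MYSCHEMA.CHILD_TABLE IMMEDIATE CHECKED;
-- Tables still pending show STATUS = 'C' in the catalog:
SELECT TABNAME, STATUS FROM SYSCAT.TABLES WHERE STATUS = 'C';
```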
JSVictor, have you tried DCW? How did you solve the tablespace/bufferpool issue and all the tables being created in the same (default) tablespace (due to the lack of IN and INDEX IN clauses during table creation)? I have tried to solve this but never succeeded with DCW.
IDMT created bufferpools and tablespaces fairly well (and every table had an IN clause, an INDEX IN clause, and sometimes a LONG IN clause), so I don’t quite understand how to achieve the same effect now.
Hi Ember Crooks,
I am getting the below error when I run the ./unload script.
[2016-03-07 03.02.46.820] Version 2.00-b2148
[2016-03-07 03.02.46.821] OS Type:Linux
[2016-03-07 03.02.46.821] Java Version:1.7.0_79: 24.79-b02: OpenJDK 64-Bit Server VM: Oracle Corporation: 64 bit
[2016-03-07 03.02.46.821] Default encoding UTF-8
[2016-03-07 03.02.46.821] TABLES_PROP_FILE:y/TSDB.tables
[2016-03-07 03.02.46.821] DATAMAP_PROP_FILE:datamap.properties
[2016-03-07 03.02.46.821] colsep:~
[2016-03-07 03.02.46.822] dbSourceName:oracle
[2016-03-07 03.02.46.822] dbTargetName:db2luw
[2016-03-07 03.02.46.822] threads:5
[2016-03-07 03.02.46.822] server:servercom
[2016-03-07 03.02.46.822] dbName:TSDB
[2016-03-07 03.02.46.822] port:1521
[2016-03-07 03.02.46.822] uid:system
[2016-03-07 03.02.46.822] ddlGen:true
[2016-03-07 03.02.46.822] dataUnload:true
[2016-03-07 03.02.46.822] extractObjects:true
[2016-03-07 03.02.46.823] usePipe:false
[2016-03-07 03.02.46.823] syncLoad:false
[2016-03-07 03.02.46.823] loadReplace:true
[2016-03-07 03.02.46.825] OUTPUT_DIR is : /home/db2inst1/idmt/y
[2016-03-07 03.02.46.826] Configuration file : 'y/IDMTConfig.properties' not found. You need to run 'IBMDataMovementTool' command to create proper setup.
But the IDMTConfig.properties file is there in the y directory… can you help me out with this issue?
Can we use the IBM Data Movement Tool for DDL and data movement from DB2 for z/OS to DB2 LUW?
I have a challenge to migrate everything from DB2 on z/OS to DB2 LUW.
I can manually create objects like database partition groups, storage groups, and bufferpools.
We have a great many tablespaces and tables on z/OS – does this tool help convert z/OS DDL (tablespaces and tables, with data) to DB2 LUW DDL and data? And are referential constraints taken care of?
I don’t believe it works for that. It does for LUW to LUW and for conversion from Oracle. It does not support SQL Server. I’d love to hear otherwise if you manage to get it to work.
Have you completed the migration challenge from z/OS to DB2 LUW, and did it turn out well? Please let me know if so, because I have the same task and really want to know how to approach it.
Hi Ember, I’m trying to install this on a UNIX server, but when I run ./IBMDataMovementTool.sh, I get the following error.
Exception in thread "main" java.lang.UnsupportedClassVersionError: ibm/RunTool : Unsupported major.minor version 51.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.security.AccessController.doPrivileged(Native Method)
The doco says that a prerequisite is Java 1.5 or higher, which I have. Any ideas on what that error is about?
$ java -version
java version "1.6.0_41"
OpenJDK Runtime Environment (IcedTea6 1.13.13) (rhel-126.96.36.199.el7_3-x86_64)
OpenJDK 64-Bit Server VM (build 23.41-b41, mixed mode)
What is the DB2_COMPATIBILITY_VECTOR for Netezza?
Unfortunately, Netezza compatibility is more complicated than just a compatibility vector. You’ll likely set the SQL_COMPAT variable to NPS.
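For reference, on Db2 11.1 and later that is a session-level setting rather than an instance registry variable – something like:

```
SET SQL_COMPAT='NPS';
```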
I am getting the below error:
[2019-12-20 188.8.131.522] Driver com.ibm.as400.access.AS400JDBCDriver loaded
java.io.FileNotFoundException: IDMTConfig.properties (A file or directory in the path name does not exist.)
Exception in thread "main" java.awt.HeadlessException:
No X11 DISPLAY variable was set, but this program performed an operation which requires it.