I do at least one restore a week – usually between environments or for setting/resetting HADR, but sometimes real-world recoveries too. Since I’m almost always restoring from online full backups, I’ve developed my restore method and stick to it. It works for the relatively homogeneous and standard environments that are the bulk of what I support. I’m pretty sure I can do (and possibly have done) a basic restore with extracting logs in my sleep at this point.
But when you’re first doing a restore with rollforward, or if you haven’t done one for a while, you don’t have the intuitive feel for where your transaction log files are and where they need to be.
When Transaction Log File Location Is An Issue
If you’re lucky enough to be able to take offline backups and restore from them without rolling forward through any log files, then you won’t need the content in this particular post. I live in the real world of e-commerce databases – even dev/qa/staging environments are expected to be up 24/7. So it’s a rare day when I get to actually take an offline backup. If you are so lucky, please just include the keywords “WITHOUT ROLLING FORWARD” on your restore command, and you won’t have to rollforward or deal with transaction log files. Obviously that only works if you have no specific time requirement for your restore.
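As a concrete sketch (the database name, path, and timestamp here are all hypothetical), such a restore from an offline backup looks something like this:

# restore from an offline backup image; no rollforward is needed (or allowed) afterward
db2 restore db sample from /db_bkup taken at <timestamp> without rolling forward without prompting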
If you’re restoring a database into itself and rolling forward and you have all of your transaction log files needed on disk or archived to TSM, then DB2 should be able to find the needed log files. But that’s my rarest type of restore. Much more often, I’m using restore to restore a database from one server to another, and that’s when you need to be most aware of where your log files are.
I also remember witnessing a small nightmare when I had been a DBA for less than 3 months. A colleague did a restore without rollforward into an existing database. She was trying restores to different points in time because she didn’t know exactly when an error occurred. But when she did the restore without rollforward, it deleted all the “future” transaction logs. So every time I do a restore into the same database the backup was taken from, I now copy off all transaction logs that I might need, in case I end up needing to rollforward to a later time than initially specified.
Extracting Transaction Log Files From a Database Backup Image
If you took an online backup (and included the “INCLUDE LOGS” keywords if before 9.5, where they became the default), then you have log files included with the backup image. BUT these log files only give you enough to rollforward your database to the end of the backup image. That may seem an obvious thing – that the backup image can’t include log files after it finished the backup – but I’ve had clients who were confused by that.
In order to extract the transaction log files you need from the backup image, you must include the LOGTARGET keyword (I believe you can also get the logs without restoring the database by using the LOGS keyword on the restore command). With LOGTARGET, you must specify a directory that exists and does not already contain log files with the same names. If you don’t, you will get error SQL2581N at the end of the restore – and having the whole restore fail when it is almost complete, just because the logs could not be extracted, is frustrating.
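As a minimal sketch (database name and paths are hypothetical, and this assumes the LOGS keyword behaves as I believe it does), pulling just the logs out of an online backup image without restoring the data would look something like:

# LOGS restores only the log files saved in the backup image, into the LOGTARGET directory
db2 restore db sample logs from /db_bkup taken at <timestamp> logtarget /db_bkup/logs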
My Normal Procedure
So my normal restore procedure looks like this:
- rm /db_bkup/logs/*.LOG
- db2 restore db sample from /db_bkup taken at <timestamp> into sample logtarget /db_bkup/logs replace existing without prompting
- db2 get db cfg for sample | grep -i LOG – note the path to the active log files
- cp /db_bkup/logs/*.LOG /PATH/TO/LOG/FILES
- db2 rollforward db sample to end of backup and stop – love the “to end of backup” syntax introduced in 9.5
Where Other Transaction Log Files Might Come From
If you’re not just rolling forward to the end of a backup image, but to a specific timestamp after the backup, you have to make sure you have the right log files. Many of my clients archive logs only to a different filesystem. If you’re retaining logs for recovery, you should be archiving them somewhere, even if it’s just a different directory on the same filesystem – that helps prevent someone compressing or deleting an active log file (yes, I’ve seen someone do that to a production database – it is NOT fun to recover from). If you have TSM or another tape library, you may be archiving them directly to TSM, though watch your TSM setup to make sure you have the appropriate redundancy there.
If you archive logs to disk and then back that disk up, and have to restore log files to complete a rollforward, then you may run into space issues. I’ve got more and more clients on virtual or cloud servers, and in these cases, you can actually have a filesystem created to hold the log files while you’re rolling forward, and then have that same filesystem blown away when you’re done to reduce costs. On the actual rollforward command, you can use the OVERFLOW LOG PATH keywords to give DB2 an additional place to look for log files.
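As a sketch (the timestamp and paths are hypothetical), pointing the rollforward at such a temporary filesystem looks like:

# the overflow log path gives DB2 an extra place to look for log files during rollforward
db2 "rollforward db sample to <timestamp> using local time and stop overflow log path (/db_overflow/logs)"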
Similar to the above, if you’re compressing log files at the OS level (using gzip or similar), and suddenly need them all uncompressed to support a rollforward, you can have a filesystem created just for holding the uncompressed log files, specify it using the OVERFLOW LOG PATH keywords on the rollforward command, and then blow it away when you’re done with the restore.
You can also specify the overflow log path either at the db cfg level (OVERFLOWLOGPATH) or in the rollforward command (OVERFLOW LOG PATH). If you’re using TSM, this will give DB2 more space to extract log files to and make sure you’re not waiting on TSM calls and disk deletes for every step of the rollforward. Consider using the NORETRIEVE rollforward keyword as part of this strategy.
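A minimal sketch of that combination (paths hypothetical, and assuming you have already staged the needed logs in the overflow location yourself):

# set a standing overflow location in the db cfg
db2 update db cfg for sample using OVERFLOWLOGPATH /db_overflow/logs
# NORETRIEVE tells DB2 not to retrieve archived logs itself during the rollforward
db2 "rollforward db sample to end of logs and stop noretrieve"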
Where DB2 Looks For Transaction Log Files During Rollforward
There really is diversity in how you’ve got transaction logs set up, so there are actually a lot of locations where DB2 will look for a log file before giving up and throwing an error. DB2 looks in the following places, in order:
- Path to log files in the db cfg
> db2 get db cfg for tempload | grep "Path to log files"
Path to log files = /db_logs/TEMPLOAD/NODE0000/
- MIRRORLOGPATH in the db cfg
> db2 get db cfg for tempload | grep MIRRORLOGPATH
Mirror log path (MIRRORLOGPATH) =
- OVERFLOWLOGPATH in the db cfg OR OVERFLOW LOG PATH on the rollforward command
> db2 get db cfg for tempload | grep OVERFLOWLOGPATH
Overflow log path (OVERFLOWLOGPATH) =
- LOGARCHMETH1 in the db cfg
> db2 get db cfg | grep LOGARCHMETH1
First log archive method (LOGARCHMETH1) = DISK:/db_arch_logs/SAMPLE/
No matter what type of location this is – DISK, TSM, a vendor routine, or even the old USEREXIT – DB2 retrieves log files from this path into the active log path OR the overflow log path, rolls forward through the data in them, and then deletes them from the active log path. That is a whole lot of I/O, especially when you’re rolling forward through a lot of log files.
- LOGARCHMETH2 in the db cfg
> db2 get db cfg | grep LOGARCHMETH2
Second log archive method (LOGARCHMETH2) = OFF
- FAILARCHPATH in the db cfg
> db2 get db cfg | grep FAILARCHPATH
Failover log archive path (FAILARCHPATH) =
While We’re Talking About Log Locations
One frustration that you may not have run into yet, but may someday, is restoring a database with an active log path that does not exist on the server you’re restoring to. In this case, if you try to set NEWLOGPATH between the restore and the rollforward, it won’t take. The reason is that NEWLOGPATH takes effect on database activation, and if you’re in a rollforward pending state, you cannot activate the database.
Because of this scenario, there’s an option to specify the new log path on the restore command using the NEWLOGPATH keyword. If you wait until the restore completes without using this keyword, it’s too late – you have to re-do the restore with the keyword to change the active log path.
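A sketch of what that looks like (names and paths hypothetical) – note that NEWLOGPATH rides along on the restore itself:

# set the active log path during the restore, before the rollforward pending state begins
db2 restore db sample from /db_bkup taken at <timestamp> into sample logtarget /db_bkup/logs newlogpath /db_logs/SAMPLE replace existing without prompting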
If You’re Archiving To TSM or Other Tape Library Directly
Archiving your log files directly to TSM is a nifty way of managing them (assuming your TSM setup maintains appropriate multiple copies). If so, you might actually dual-path during your restore. While the database restore itself is running, assuming your architecture supports it, you could be extracting log files to a path you will use as your OVERFLOW LOG PATH on rollforward (or extract in parallel with the rollforward, or at whatever point in the process you like), and then use the NORETRIEVE option on the rollforward to tell DB2 not to look for log files on TSM. You can also use NORETRIEVE on the rollforward to limit how many log files are applied when rolling forward “TO END OF LOGS”.
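A rough sketch of the dual-path idea, assuming logs are archived to TSM and a staging filesystem is available (the log extent range, database name, and paths are all hypothetical):

# while the database restore runs, extract archived logs from TSM into a staging directory
# (db2adutl extracts into the current working directory)
cd /db_stage/logs
db2adutl extract logs between S0000100 and S0000200 db sample
# on rollforward, point DB2 at the staged logs and skip its own TSM retrieval
db2 "rollforward db sample to end of logs and stop overflow log path (/db_stage/logs) noretrieve"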
I also wonder – when I’m restoring to the same server the backup was taken on (and assuming I happen to archive log files to disk), if I specified the overflow log path to be the same as the archive log path, would it speed up my rollforward by avoiding the retrieve-and-delete process that archive log files go through?
References
DB2 Info Center Entry on ROLLFORWARD: http://www.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.db2.luw.admin.cmd.doc/doc/r0001978.html
The brain of Melanie Stopfer – thanks, Melanie, for answering some technical questions on this for me, and for inspiring me to write a blog post on it.
DB2 Info Center Entry on RESTORE: http://www.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.db2.luw.admin.cmd.doc/doc/r0001976.html
Thank you very much for sharing this.
I have a similar requirement for restoring a database on one server from a backup of a database on another server. However, I will be taking tablespace level backups.
I think the process for this would be similar to restoring from an online database backup.
I will be performing the below steps for each tablespace restore:
1) rm /db_bkup/logs/*.LOG
2) db2 restore db sample tablespace (T1) from /db_bkup taken at <timestamp> logtarget /db_bkup/logs without prompting
3) cp /db_bkup/logs/*.LOG /PATH/TO/LOG/FILES
4) db2 rollforward db sample tablespace (T1) to end of backup and complete
Can there be a common log file in two tablespace backups? If I overwrite a log file in the active log path between two restores, will this lead to any issue in the state of the database?
Also, after rollforwarding to end of backups, does your database go into a backup pending state?
IBM mentions that rollforwarding to a point in time will cause backup pending state.
I haven’t done much with tablespace restores – the last time I tried one was back when you could only do it on the same place the backup was taken, and given the designs of the databases I’ve supported, that’s of limited usefulness. I would think you would want to restore all of the tablespaces first before doing the rollforward. I would also imagine that you would not want to overwrite the log files, and would instead want to make the logs available somewhere else – using the overflow log path option on the rollforward command might make the most sense here.
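Something like this untested sketch (tablespace names and paths hypothetical) is what I have in mind:

# restore each tablespace first, then do a single rollforward across all of them
db2 "rollforward db sample to end of logs and stop tablespace (T1, T2) overflow log path (/db_stage/logs)"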
You always want to take a backup after a restore so that you have an easy restore path going forward. I’ve not seen the database placed in actual backup pending state, though.
Have a couple of queries regarding your backup/restore process.
What is the size of your database backup image, and how much time do the backup and restore take?
Are you using TSM or any other backup/restore tool?
My (offline) restore takes around 5-6 hours for a backup image of about 150 GB. Is it normal for the restore process to take this long?
Thanks in advance.
There are a lot of different factors in backup/restore time. Generally the rule of thumb is that a restore without rollforward should take between 1 and 1.5 times as long as the backup took. Backups/restores to disk are nearly always faster than to TSM. If you’re using TSM, I would look at how you’re connecting to TSM – I’ve seen extreme backup times caused by slow connections to TSM. TSM itself can also be slow depending on configuration – it should have enough disk cache to write your backup to disk and then be able to move it off to tape asynchronously. Disk speed is also an issue, of course – both backup and restore are very heavy hitters for disk I/O. If it’s pure disk you’re dealing with, a friend of mine, Brian Fairchild, recently posted a couple of blog entries on speeding up backups on AIX: http://blogs.xtivia.com/home/-/blogs/speed-up-db2-database-restore-on-aix-part-2-?_33_redirect=null
The timing you describe does seem a bit too long. I take backups to disk (which are then backed up from disk to my clients’ various backup solutions), and don’t have a single backup at the moment that takes longer than 45 minutes. I also don’t currently provide primary support for a database larger than 100 GB in size. If I have no other information, I guess about 30 minutes or less for each 50 GB to disk (varies with hardware, of course, and TSM may be longer). That means my guess for restoring a 150 GB database would be about 2.25 hours – significantly shorter than what you’re describing.
Thanks for the reply. We use a product called Upstream for backup/restore. The backup on disk takes about 55-60 mins to complete. The backup taken using Upstream completes in about 20 mins.
The restore from disk and from Upstream takes about the same time – 5 hours. For the first 3 hours of the restore, DB2 acquires containers for all the tablespaces (reading only about 300 MB of the 150 GB backup image). Once DB2 completes acquiring all the containers, the restore completes in about 2 hours.
Definitely look at Brian’s posts – both what all he tried and what finally worked – it sounds like a similar scenario. http://blogs.xtivia.com/home/-/blogs/speed-up-db2-database-restore-on-aix-part-2-?_33_redirect=null
Even if you’re on a different OS, they might give you ideas on what to look at.
Excellent article! Thanks so much for this! This plus information center helped me with practicing disaster recovery scenarios with a soon-to-be production monitoring system (IBM Tivoli Monitoring) using DB2 LUW as the underlying database.
We were able to reduce the restore time from 5 hours to 2 hours. DB2 took the first 3 hours of the restore to acquire containers for all the tablespaces. This was happening because the filesystem that we were using – ext3 – did not allow fast container build. We changed our filesystem to ext4 and our restore time was reduced to 2 hours. The XFS filesystem also supports fast container build.
I’d like to share my experience as well:
We have SAP running on DB2 (DB size 2 TB) and we are able to restore it in around 2 hrs using the following:
TSM LAN-free config (this depends on your hardware infrastructure)
db2set DB2_DIRECT_IO=NO (this needs to be set before running restore cmd, and unset it before you open the database to users)
TSM sessions=4 (same as backup sessions)
Parallelism=16 (a multiple of TSM sessions)
num_iocleaners=16 (a multiple of TSM sessions)
util_heap_sz=524288 (2GB)
Additional tips:
Make sure to delete any *.LOG files from the LOGTARGET directory – if you forget to do it, the restore will fail at the end when it tries to extract the included logs. This is horrible, especially for long restore times.
If you increase the extent size of your biggest tablespaces (i.e. from 2 -> 32), it also helps the tablespaces restore more quickly, but it would be better to move your biggest tables into their own tablespace.
For num_iocleaners and util_heap_sz you need to create an empty/skeleton database before restoring (and just use ‘replace existing’ in your restore cmd) – otherwise the backup image overwrites the optimum values above.
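To make the skeleton-database trick concrete, here is a rough sketch (database name, path, and timestamp are hypothetical; the cfg values are the ones from the list above):

# create an empty database so the restore runs with tuned values rather than the backed-up ones
db2 create db sample
db2 update db cfg for sample using util_heap_sz 524288 num_iocleaners 16
# restore into the existing skeleton
db2 restore db sample from /db_bkup taken at <timestamp> replace existing without prompting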
Hi.
Is it possible to recover a single tablespace from a full backup?
I think you’re asking if a single tablespace can be recovered from a full database backup. I believe the answer is yes, though I have not tried it. There is a tablespace clause on the RESTORE DATABASE command.
Hi, Ember,
First of all, thanks for the post – very useful, as I’m just beginning to get started with DB2. Between various posts you’ve made, and some other online/electronic documentation, I think(!) I’m getting a better feel for things. Still trying to get a slightly better handle on the logs for purposes of backup/restore, though. Having come from a primarily MSSQL environment, things are just a bit different in the DB2 world! 🙂
If you don’t mind, though, I’d like to ask a question (or two) to ensure that my current perception is correct. (In advance: I apologize for the length of this post.) Basically, with a non-circular logging setting (LOGARCHMETH1 != OFF), log files are continuously written to (assuming no other parameters set). Some of these are Active, and some of them are Archived. (Archived appears to mean that no additional transactions are being written to the log file named Snnnnnnn.log.) Logs themselves appear to be manually managed within that directory. (although I thought I ran across some automated pruning functions somewhere…)
Ultimately, this comes to the statement you made above regarding a restore without roll-forward – i.e. to the end of the logs included in a full backup when using INCLUDE LOGS. (If I’m recalling correctly, perhaps this is a “db2 restore and complete”.) So, at this point, the system deletes all of the future logs – i.e., if S0000407 to S0000520 aren’t used as part of the roll-forward and are deleted, and (correct me if I’m wrong here) only the last logs used for this particular database restore point are available. Future transactions might begin building new log files at 407, and go from there, correct (hopefully)? (Essentially, this begins a new transaction timeline for the database.)
So, in the above, let’s assume that the full database backup that was restored above is 4 days ago. We can no longer roll forward because logs S0000482+ (which occurred post backup only 2 days ago) are no longer available. However, we would essentially be able to restore the 2 day-old backup to whatever point in time the INCLUDEd LOGS indicated as part of the backup, correct? (Although, there might be an odd gap in the directory, as several log files would have been deleted by the prior restore…)
Finally, a follow-up. Let’s assume that I do a restore and roll-forward to a specific point in time, and that point in time is somewhere in the middle of the transactions in a particular log file. Assuming that future log files are still deleted (in order to maintain a consistent transaction timeline), what happens to the log file? Are future transactions in that log file truncated?
Sorry if these seems like silly questions, but am I hideously off anywhere here?
Thanks in advance for your time and advice!!
Ok, replying in-line to keep things clear:
There are really essentially two kinds of logging – archive and circular. Within archive, you can either have logarchmeth1 and such set to automate the location for archive logs to go to, or you could just allow the archive logs to accumulate in the active log path (not recommended). In either case, DB2 uses write-ahead logging. Meaning that transactions are written to the log files synchronously and that actual writes to the tables and such happen asynchronously at some point after the commit. The log files are continuously written to no matter what logging mode is used. Archived means that there are no active, uncommitted transactions in the log, and db2 is no longer writing to it. You can manage the logs automatically if you desire using the auto_del_rec_obj database parameter, along with rec_his_retentn.
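For reference, a sketch of enabling that automatic management (the database name and retention value are hypothetical):

# allow DB2 to delete backup images and archive logs it no longer needs for recovery
db2 update db cfg for sample using AUTO_DEL_REC_OBJ ON
# keep recovery history (and associated recovery objects) for 90 days
db2 update db cfg for sample using REC_HIS_RETENTN 90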
Essentially, yes. Older transaction logs are not deleted, so you could re-do the restore/rollforward to a point in time older than your previous restore, but not newer than it. If you have space, most dbas copy off the logs before doing a rollforward to avoid having something they need deleted – in case they have to re-do the restore to a later point.
Right. Online backup images now always include log files needed to restore to the end of the backup. This was not always true if you have some really old version hanging around. To use them, you have to restore the more recent backup, extracting the logs using the logtarget keyword. You then would roll forward to “END OF BACKUP”, possibly specifying the location you extracted the logs to as the overflow log path.
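In command form, that sequence is roughly (a sketch – paths and timestamp hypothetical):

# restore, extracting the included logs to a directory
db2 restore db sample from /db_bkup taken at <timestamp> logtarget /db_bkup/logs replace existing without prompting
# roll forward through the extracted logs to the end of the backup
db2 "rollforward db sample to end of backup and stop overflow log path (/db_bkup/logs)"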
Rollforward will both rollforward through transactions and also rollback any transactions that were in progress but not committed at the time you specified. The log file will then be either used from that point forward without the future transactions or will be archived as a partial with the committed transactions only, and new log files allocated – I’m not sure which off the top of my head.
Does that answer your questions?
Wasn’t expecting this quick of a response, thank you. I think it does answer my questions – at the very least, it doesn’t sound like I’m too far off (which is a relief).
I think I was a little unclear in the “continuously writing logs” section – what I was intending to convey was the creation of additional, new log files rather than reusing old log files (circular logging). Sorry about that! I also need to look into rec_his_retentn a little more – found that one a bit confusing.
Your response does bring up a couple of other thoughts:
a) With respect to log target and overflow log path – is it a correct interpretation that, essentially, this alternate path is kind of a temporary location, and that the system will, after restore/roll forward/complete, begin writing necessary logs to the normal log path?
b) Assuming that the necessary logs exist already in normal log path, the system can roll forward just using those, and it is not necessary to extract the logs, correct?
Ultimately, I think the hardest thing is going to be keeping track of and identifying all of the log files that one might want/need at any given point, especially if restores have happened in the middle of daily operations. Hopefully I’ll be able to come up with a way to automate that to some degree…
Thanks again for sharing your knowledge!
Hi, Ember – I believe this does answer my questions (and thanks for the quick response) – and nice to know that I’m not too far off base with what I’ve learned so far. I still have a bit of reading to do with respect to rec_his_retentn, as well as some testing to do. At the very least, at this point, I can do full backups periodically and get to that point in time. Will work on improving that process…
Thanks for sharing your knowledge with me and everyone else!
Hi Ember ,
Query regarding the change of timestamps in the history file when a rollforward is done for a db.
Steps (note: logs will be deleted after the log backup):
1. Ran full online backup (data + logs).
2. Ran some transactions and ran a log-only backup.
3. Ran full online backup again (data + logs).
4. Again ran step 2.
5. Now dropped the db.
6. Restored it from the backup taken at 1, followed by a point-in-time rollforward to apply the logs from 2.
Now the history info is showing different timestamps for the log files that were applied from 2.
Is this expected behavior ?
It may change the transaction log files that were active as of the time of the backup, because when it rolls forward through those files, it may also have to roll transactions back that were not completed at the time of your roll forward, and it then starts using those files.
Before any complicated restores, many dbas take the transaction logs and copy them off elsewhere if they have space, because a number of restore and rollforward actions may change or delete log files.
“I believe you can also get the logs without restoring the database by using the LOGS keyword on the restore command”
I badly wanted this to happen, but the LOGS parameter restores the logs. 🙁 I am going to try using LOGTARGET.
Yes, you want to use the logtarget parameter if you wish to extract the logs from an online backup that has the logs included. That will extract the logs from the backup but not restore them or rollforward through them.
Hi, we have just had a problem regarding a restore of a production database as a pre-production database on another server with about half of the RAM of the production server. The dynamics are:
– We backup the production database (db2pro) at the production server.
– We scp copy the backup to the pre-production server
– We restore the production database into a preproduction database (db2pre)
In the pre-production server there’s another database consuming resources in a server that already has less than half of the memory of the production server.
To achieve this we set the registry variable DB2_OVERRIDE_BPF in a way that allows the restored database to fit in the pre-production server.
However, last Friday our restore procedure failed again and again due to lack of memory. We spent the whole weekend going crazy trying to figure out the cause, and after a million tests we found out the following:
In our restore script we wrote the line:
db2set DB2_OVERRIDE_BPF=2,200000;3,1000;5,35000;6,90000;7,1600
but bash interprets it as several commands using ; as a separator. We finally (!!) fixed it by simply quoting the sentence like this:
db2set ‘DB2_OVERRIDE_BPF=2,200000;3,1000;5,35000;6,90000;7,1600’
and it worked fine. So watch out your quotations !!!!!!!!!!
Now that we are done, I’m going home to have a peaceful heart attack.
Ah the peaceful post-issue heart attack. Thanks for sharing your experience.
Hi Ember,
I’m having an issue restoring a DB2 backup. I have only 1 backup and it was an online backup. While troubleshooting the issue I accidentally deleted the logs in DB2, leaving me 1 option: restore the online backup without logs.
I already executed the command “db2 rollforward db mydb to end of backup and complete” but it still throws an error about logs. My question is: is there a way that I can restore an online backup without the logs? Please help, I need to fix it ASAP!
SYSTEM: IDES EHP4
OS: Linux (Centos)
Database: DB2
Carlo
What version of DB2? If a late enough version, then the minimum logs needed for restore would have been included in the backup image itself (or if you use the INCLUDE LOGS keyword on backup). If they were included, you can use the LOGTARGET keyword on restore to specify a directory the logs should be extracted to. If they were not included, and you have deleted them, then your only hope is to call IBM DB2 support and see if they can help you – sometimes they can help you flip a bit and get a version of the data back.
Hello Ember,
I have 2 databases on different servers and filesystems – one on the prod server, the other on a test server. Both databases are configured for automatic storage, and I have one task: a db refresh.
My doubts are:
1. Do we need to perform a redirected restore or a normal restore?
2. I believe that for databases using automatic storage, DB2 will take care of creating the containers, right?
DB2 version is v9.7 fixpack 9 on both servers.
If all tablespaces either have their containers created on the default path, or are automatic storage, then you don’t have to redirect. If the filesystems were the same in name and permissions and size, you could simply issue the restore command. If they are different, then you likely need the REDIRECT clause of the restore command and possibly the SET TABLESPACE CONTAINERS command. You can generate a redirect script using the REDIRECT and GENERATE SCRIPT clauses. This does not actually restore the database, and gives a fairly accurate idea of whether you have any tablespaces that need a redirect.
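As a sketch (database name, path, timestamp, and script name are hypothetical):

# writes a redirected-restore script without actually restoring anything
db2 "restore db sample from /db_bkup taken at <timestamp> redirect generate script redirect_sample.clp"
# review/edit the SET TABLESPACE CONTAINERS statements, then run the script
db2 -tvf redirect_sample.clp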
Thanks for your quick response – really appreciated.
I have another doubt, regarding restore and rollforward.
Do we need to delete or move the target database’s existing active logs and archive logs before doing the restore and rollforward?
The restore and rollforward will remove any active logs for you. Archive logs will start another chain, so the old ones should not be deleted, but they should also not cause a conflict.
Hi Ember,
Could you please differentiate between
1) “end of logs and complete overflow log path ('')” and
2) “end of backup and complete” and
3) “end of backup and stop”
Thanks in advance !!
1) “end of logs and complete overflow log path ('')”
Not sure it works with an empty overflow log path. But this will rollforward the database to the last transaction log file that DB2 can find on the active, archive, or overflow log paths. After issuing a rollforward to end of logs, you cannot go back and issue a rollforward to some other point, like a specific time or end of backup.
2) “end of backup and complete”
This will rollforward to the last point in time covered by the online backup – right to the end of when the online backup was running. I very much like this syntax, as before it was available we had to pull out a timestamp and rollforward to it.
3) “end of backup and stop”
Exactly identical to the above. stop and complete are synonyms, with complete being the more recent one and stop still working for backwards compatibility.
Does that answer your question?
I have a source database running on Windows 2000 & DB2 version 8.2. Now the plan is to take a backup & restore it onto a target server having similar OS & DB2 versions & also exactly similar drive configurations.
Firstly, there is a plan to run a trial restore by using a full online backup image (taken with the include logs option) & restore it on the target with LOGTARGET to extract the logs included in the backup image. Once this restoration is successful, a few days later on the final migration date we will be restoring the database on the target with replace existing from a full offline backup image, using the following command:
db2 restore db <dbname> from <path> taken at <timestamp> dbpath on <path> newlogpath <path> without rolling forward without prompting
As the INCLUDE LOGS option is invalid for an offline backup, LOGTARGET definitely need not be defined during the offline restoration. But since LOGTARGET was already used as part of the online restoration, and on top of it we will restore the full offline image using replace existing, will it create any inconsistency, or is it the right approach to perform an offline restoration on a DB which already had LOGTARGET defined?
You cannot use LOGTARGET when restoring from an offline backup for exactly the reason you list.
Just for the reference,
When I did a restoration of a database on DB2 v9.7 on the same server (after dropping the existing db), without changing anything but just issuing the command “restore database sample taken at timestamp without rolling forward”,
I was confused why in my newly restored database the DBPATH was changed to DFTDBPATH, and also the active log directory was changed in one instance, while in the other instance only the DBPATH was changed but the active log directory didn’t change. Upon searching the Knowledge Center I came across this article which describes the implications of restore: https://www.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.db2.luw.admin.dbobj.doc/doc/r0024234.html. Hope this will be useful to others.
But still my doubt is:
db2inst1: SAMPLE restored with changes in DBPATH and active log directory, while in instance
db2inst2: SAMPLE2 restored with changes in DBPATH only, and the active log directory still pointing to the OLD path. Any idea on this?
I would have to see the db directory before and after and the exact restore commands to guess.
Hi Ember,
Thanks for your quick response
Here are the steps that I have taken
In the db2inst1 instance I have 4 databases: DBRD1, DBRD2, DBRD3, DBRD4. Each of these databases has its own database path defined with the same name: /DBRD1, /DBRD2, /DBRD3, /DBRD4 (the paths of the DMS tablespaces are /DBRD1_data1, /DBRD1_data2, and so on). When I took backups of these databases I was confident that the databases would be restored on exactly the same paths. The backup command issued was “db2 backup database DBRD1 to /offline_backups compress”, and so on for the other databases.
All these four databases were having their own active log path defined in the DB cfg as /DBRD1/Active_Logs , /DBRD2/Active_Logs , /DBRD3/Active_Logs and /DBRD4/Active_Logs
In the db2inst2 instance I have 1 database, DRNDS, which has its own database path /DRNDS. The backup command issued was “db2 backup database DRNDS to /offline_backups compress”. The active log path defined for this database was /DRNDS/Active_Logs.
During the activity when I had to restore the database from the above taken backups, I dropped the existing databases and issued following commands
db2 “restore db DBRD1 from /offline_backups taken at 20160523232436 without rolling forward”
db2 “restore db DBRD2 from /offline_backups taken at 20160523232446 without rolling forward”
db2 “restore db DBRD3 from /offline_backups taken at 20160523232416 without rolling forward”
db2 “restore db DBRD4 from /offline_backups taken at 20160523232406 without rolling forward”
When the databases were restored,
1) All DMS tablespaces were right on the path that they were before the restoration.
2) DBPATH which was now on /home/db2inst1/
3) Active log path was now on /home/db2inst1/NODE0000/NODE0000/SQL00001/SQLOGDIR/ , /home/db2inst1/NODE0000/NODE0000/SQL00002/SQLOGDIR/ , /home/db2inst1/NODE0000/NODE0000/SQL00003/SQLOGDIR/, /home/db2inst1/NODE0000/NODE0000/SQL00004/SQLOGDIR/
4) SMS tablespaces path changed to /home/db2inst1/
In the second instance, db2inst2, a similar restore command was used:
db2 “restore db DRNDS from /offline_backups taken at 20160523222416 without rolling forward”
When the restore command completed
1) DMS tablespaces were right on the path that they were before the restoration.
2) DB path was now on /home/db2inst2
3) Active log directory was still pointing to the previous path, /DRNDS/Active_Logs
4) SMS tablespaces path was changed to /home/db2inst2
Please let me know why the behavior was different in these two cases. And by default, do SMS tablespaces take the path of the DBPATH?
So the only difference between the two instances on restore was #3 – the active log path, correct?
Exactly as you’ve described it, I’m not sure why it would happen that way. were the DB2 versions for the two instances the same?
Did you also drop DRNDS before restoring it, or did you do a restore with replace existing?
My suspicion is that there is some difference somewhere in the configuration or processes. I would expect the dbpath to change to the default when the database is dropped before restoring into it. This would mean that tablespaces defined on the database path would also be changed (it just happens that these are only the SMS tablespaces). I’m fairly sure that the transaction log path would also change if it had not been altered away from the default. My suspicion therefore is that for the 4 databases in db2inst1, the active log path changes had not taken effect for some reason, but were going to the default location. But honestly I would have to test it to verify this. I’m in the habit of very explicitly specifying what I want on restore (using the generate script option to generate a base script to modify), so I’m not as familiar with the defaults as I used to be. I also nearly always restore into an existing database when I have that option.
So the only difference between the two instances on restore was #3 – the active log path, correct?
Ans: Yes the only difference for both the instances was active log path.
Exactly as you’ve described it, I’m not sure why it would happen that way. were the DB2 versions for the two instances the same?
Ans: Both the instances are running the same version (DB2 ESE V10.1 Fixpack 3). Sorry, by mistake I mentioned it in the first post as V9.7.
Did you also drop DRNDS before restoring it, or did you do a restore with replace existing?
Ans: Yes I dropped the existing database before proceeding with a restore.
I too have learned a lesson from this: never go with the default values – it’s always better to generate a redirect script whether you are doing a redirected restore or not. Also, I think I need to stick to “replace existing” if I have the option, so that all the paths will be similar to the ones defined in the existing database. Thanks a lot for your insights.
Ember Crooks – can you provide me a video for DB2 online backup & restore on an alternate server with a different database?
I don’t have that, but LUW Academy might. https://www.youtube.com/user/DB2LUWAcademy
Hi Ember Crooks
I am an SAP System Admin. We have an SAP application running on top of DB2 10.1. This is regarding the DB2 restore process for a DB2 database backed up solely at the filesystem level:
What we are currently backing up is as below:
1) All DB2 directories once a week (we issue db2 write suspend before taking the filesystem backup). Let’s say this happens every Sunday night.
2) Daily we take backup of archivelogs directory only.
This is done since our database is in terabytes and we have insufficient tapes to store all DB directories every day.
My question is: is it possible to do a point-in-time recovery using the current method? Suppose I want to restore the DB on a Wednesday – can I restore all the filesystems as of Sunday and apply archive logs from Sunday to Wednesday?
If we can do it, what would be the sequence of steps required? And if our assumption is wrong, then what do you suggest we back up on a weekly and daily basis for a successful recovery? We have a constraint from the client that the backup will happen at the filesystem level only and not via the DB2 native backup tool.
Request your expert advice on the above scenario.
Thanks
Javed
You may be able to do rollforward with the use of the db2inidb command to place the database in rollforward pending status. You’d want to thoroughly test this scenario, but logically, it should work.
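A rough sketch of that flow, assuming the Sunday filesystem backup has been laid back down and the archived logs through Wednesday are available (database name, paths, and timestamp are hypothetical):

# place the filesystem-level copy into rollforward pending as a mirror
db2inidb sample as mirror
# roll forward through the archived logs to the desired Wednesday point in time
db2 "rollforward db sample to <timestamp> using local time and stop overflow log path (/db_arch_logs)"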