DBClean – staglog

This is part of a series focusing on the details of each item that needs to be pruned in a Commerce database.


What is it?

STAGLOG is a table in the staging database that records changes in data. Staging triggers on all tables configured for staging propagation write a row to STAGLOG whenever one of those tables changes. The stagingprop utility reads from STAGLOG and updates the STGPROCESSED column as it processes changes. Data is not usually removed by any other process, so this table can get big if it is not pruned properly and frequently.
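To get a feel for how big the table is and how much of it stagingprop has already processed, a query along these lines can help. This is a sketch based on the columns discussed in this post; per the usual caution, run it "with ur":

```sql
-- Rough health check on staglog: row counts by processed status.
-- Per this post, stgprocessed = 1 marks rows already handled by
-- stagingprop; other values are rows still waiting to be propagated.
SELECT stgprocessed,
       COUNT(*) AS row_count,
       MIN(stgstmp) AS oldest_change,
       MAX(stgstmp) AS newest_change
  FROM staglog
  GROUP BY stgprocessed
  WITH UR;
```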

Potential for impact

This is most likely to impact how fast stagingprop runs and how fast you can query the table to troubleshoot stagingprop problems. Both are important, but the table would have to be really out of control before it affected normal operations.

Where this Applies

This is one of the few areas that only applies in a staging database. You may have a test staging database in your setup, and it would apply there as well.

Table (other tables may be involved in cascading deletes)

STAGLOG


Note: always test for yourself; don't take my word for it.

Note also that you should be cautious about running the SQL during peak loads, and always use "with ur".

Number of rows to be deleted:

select count(*) from staglog where stgprocessed = 1 and stgstmp < current timestamp - <days to retain> days with ur

Pruning method

You can use dbclean to prune this one. Object=staglog and Type=obsolete
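As a sketch, the invocation looks something like this. The flag names follow typical WebSphere Commerce dbclean usage, and the database name, user, and password are placeholders; check the dbclean documentation for your Commerce version for the exact syntax:

```shell
# Hedged sketch of a dbclean run for staglog; flags and placeholder
# values are assumptions, not taken from this post.
./dbclean.sh -object staglog -type obsolete -days <days to retain> \
             -db <database> -dbuser <user> -dbpwd <password>
```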

If you wanted to prune it manually, you could, keeping in mind all the usual gotchas for large deletes: you would probably want to break the delete up into smaller chunks rather than running it as one statement. The full statement would be:

delete from staglog where stgprocessed = 1 and stgstmp < current timestamp - <days to retain> days
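One common Db2 way to break a large delete like this into chunks is to delete from a fullselect with a row limit and commit between batches. This is a sketch, not taken from this post; the 10,000-row batch size is an illustrative assumption you should tune for your system:

```sql
-- Chunked version of the delete above: run repeatedly, committing
-- after each execution, until it deletes 0 rows.
DELETE FROM (
    SELECT 1
      FROM staglog
      WHERE stgprocessed = 1
        AND stgstmp < current timestamp - <days to retain> days
      FETCH FIRST 10000 ROWS ONLY
);
COMMIT;
```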

Determining Retention

Retention time for this depends primarily on how frequently you run stagingprop. This pruning will never delete rows that haven’t been propped, but generally, you want to keep the processed rows around at least until the next stagingprop so you have them for reference if need be. Frequency for running stagingprop varies greatly by client – some do it nightly, some do it weekly, some do it only on demand, and some do it on differing schedules for differing filters. So go with at least 2X your least frequent schedule, and you should be OK. This is something to be decided by the engineers/admins/developers and rarely has any visibility up to the business level.

Indexes to help deletion

You may want to add an index to the STAGLOG table on STGPROCESSED, STGSTMP, and (if you're using it to split up deletes) STGRFNBR.
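A sketch of such an index follows; the index name is my own invention, so adjust it to your naming standards:

```sql
-- Supports the pruning predicate on stgprocessed/stgstmp, with
-- stgrfnbr included for delete-splitting schemes that use it.
CREATE INDEX staglog_prune_ix
    ON staglog (stgprocessed, stgstmp, stgrfnbr)
    COLLECT STATISTICS;
```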

So, good information? More detail needed in any areas? Let me know what you think.

Ember Crooks

Ember is always curious and thrives on change. She has built internationally recognized expertise in IBM Db2, spent a year working with high-volume MySQL, and is now learning Snowflake. Ember shares both posts about her core skill sets and her journey learning Snowflake.

Ember lives in Denver and works from home.


