I frequently get e-mails with DB2 questions and I plan to start posting answers to some of the more common ones up here.
One issue that comes up a lot is locking. Usually the question is posed by someone who is experiencing timeouts in an online environment and wants to know how to minimize them. Here is some guidance.
When you experience a timeout, it means that another process holds a lock on the data that you are trying to access or modify. So it stands to reason that you should try to minimize the duration of the locks held in your system. There are several approaches you can take to achieve this, but for the most part they require programming changes.
First of all, make sure that all of your batch processes, especially any that run during the same timeframe, but really all batch processes, have a COMMIT strategy. This means that your programs should issue a COMMIT after processing a set number of inserts, updates, and deletes. A COMMIT tells DB2 to make the changes permanent and releases the locks.
One good approach is to keep a counter that is incremented after every modification. Check it after each change and, when it reaches a predefined threshold (say 25, 50, or 100 modifications), issue a COMMIT. You can make the threshold an input parameter so that you can change it as the workload in your system changes: for example, set it to 25 when concurrent activity is high, but ramp it up to 100 or higher when activity is low. Failure to issue COMMITs can result in timeouts, as well as deadlocks and lock escalation.
(Also, please note that these are just sample numbers and not necessarily the correct numbers to start with at your shop.)
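The counter-driven approach can be sketched in a few lines. This is illustrative pseudologic in Python, not a DB2 API; the names (CommitPacer, record_change) and the threshold values are assumptions, and the actual COMMIT call would be whatever your program's DB2 interface provides.

```python
class CommitPacer:
    """Tracks data modifications and signals when a COMMIT is due."""

    def __init__(self, threshold):
        # Threshold is an input parameter: e.g. 25 under heavy
        # concurrency, 100 or more when activity is light.
        self.threshold = threshold
        self.count = 0

    def record_change(self):
        """Call after every INSERT, UPDATE, or DELETE.

        Returns True when enough modifications have accumulated
        that the program should issue a COMMIT.
        """
        self.count += 1
        if self.count >= self.threshold:
            self.count = 0  # start a new unit of work
            return True
        return False
```

In the program's main loop this would look something like `if pacer.record_change(): connection.commit()`, where `connection` stands in for your environment's DB2 connection handle.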
An alternate approach is to issue a COMMIT after a predetermined amount of time has elapsed, regardless of the number of modifications that have been made. Either technique can work well, but again, consider a parameter-driven approach (in this case, with the time between COMMITs input as a number of seconds).
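The time-based variant can be sketched the same way. Again, this is an illustrative Python sketch, not DB2 code; the class name and the injectable clock are assumptions made so the logic can be shown (and tested) without a database.

```python
import time


class TimedCommitPacer:
    """Signals a COMMIT after a configurable number of seconds,
    regardless of how many modifications have been made."""

    def __init__(self, interval_seconds, clock=time.monotonic):
        self.interval = interval_seconds
        self.clock = clock  # injectable, so tests can use a fake clock
        self.last_commit = self.clock()

    def commit_due(self):
        """Check periodically (e.g. once per processed record);
        returns True when the interval has elapsed."""
        now = self.clock()
        if now - self.last_commit >= self.interval:
            self.last_commit = now
            return True
        return False
```

The interval, like the modification threshold, is best passed in as a parameter so it can be tuned as system activity changes.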
Next, look at all of your programs, batch and online, and try to restructure the code so that data modification statements are issued as close to the COMMIT as possible. By saving the data modifications until right before the COMMIT, you reduce the overall average lock duration. This can reduce contention and, therefore, the number of timeouts.
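One way to structure this is to buffer the modifications while the program does its reads and calculations, then execute them all immediately before the COMMIT, so the locks those statements acquire are held only for that short window. A minimal sketch, assuming hypothetical `execute` and `commit` callables standing in for your DB2 interface:

```python
class DeferredWriter:
    """Buffers INSERT/UPDATE/DELETE statements and runs them
    right before the COMMIT, minimizing lock duration."""

    def __init__(self, execute, commit):
        self.execute = execute  # e.g. cursor.execute in many DB APIs
        self.commit = commit    # e.g. connection.commit
        self.pending = []

    def queue(self, stmt, params=()):
        """Record a modification instead of executing it immediately."""
        self.pending.append((stmt, params))

    def flush(self):
        """Execute all buffered modifications, then COMMIT at once.
        Locks are acquired and released within this short window."""
        for stmt, params in self.pending:
            self.execute(stmt, params)
        self.commit()
        self.pending.clear()
```

The trade-off is that the program works from its earlier reads, so this pattern fits batch logic where the rows being modified are not expected to change underneath it mid-unit-of-work.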
If you want to investigate the timeout details, be sure to examine the statistics trace class 3 and IFCID 0196 for timeouts (IFCID 0172 is for deadlocks).
By paying attention to how you code your DB2 programs, and by issuing periodic COMMIT statements as you make changes to your data, you can minimize, or even eliminate, your lock contention problems.