One of the most important areas for tuning DB2 subsystem performance is memory usage. DB2 for z/OS uses memory for buffer pools, the EDM pool, the RID pool, and sort pools to cache data and structures. The more effectively memory is allocated to these structures, the better DB2 will perform.
When allocating DB2 buffer pools, keep these rules of thumb in mind:
- Don’t allocate everything to a single buffer pool (e.g., BP0); use a multiple buffer pool strategy.
- Explicitly specify a buffer pool for every table space and index (instead of allowing DB2 to default to a buffer pool selection).
- Isolate the DB2 Catalog into BP0, BP8K0, BP16K0, and BP32K (depending upon the table space size); put user and application DB2 objects into other buffer pools.
- Separate indexes from table spaces, with each in its own dedicated buffer pool.
- Consider isolating heavily hit data into its own buffer pool to better control performance.
- Consider isolating sorts into a single buffer pool (e.g., BP7) and tuning it for mostly sequential access.
- Consider separating DB2 objects into buffer pools that have been configured for sequential versus random access.
Forget about trying to follow a cookie-cutter approach to buffer pool management. Every shop must create and optimize a buffer pool strategy for its own data and application mix. You can use the ALTER BUFFERPOOL command with the VPSIZE parameter to control the size of each buffer pool. Additionally, you can specify VPSIZEMIN and VPSIZEMAX with the AUTOSIZE parameter to allow DB2 to work with WLM to automatically adjust the size of buffer pools based on actual workload.
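For example, commands along these lines resize a pool and enable autonomic sizing. The buffer pool names and page counts are purely illustrative, not recommendations; VPSIZE is specified in pages, and the VPSIZEMIN/VPSIZEMAX bounds assume a DB2 version that supports them with AUTOSIZE:

```
-ALTER BUFFERPOOL(BP1) VPSIZE(100000)
-ALTER BUFFERPOOL(BP2) VPSIZE(50000) VPSIZEMIN(40000) VPSIZEMAX(80000) AUTOSIZE(YES)
```

With AUTOSIZE(YES), DB2 works with WLM to grow or shrink the pool within the stated bounds as the workload changes.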
Furthermore, DB2 offers the following buffer pool tuning “knobs” that can be used to configure buffer pools to the type of processing they support:
- DWQT – this value is the deferred write threshold; it is expressed as a percentage of the virtual buffer pool that might be occupied by unavailable pages. When this threshold is reached, DB2 will start to schedule write I/Os to externalize data. The default is 50%, which is likely to be too high for most shops.
- VDWQT – this value is the vertical deferred write threshold; it is basically the same as DWQT, but applied at the level of individual data sets. The default is 10%, which once again is quite likely to be too high for many shops.
- VPSEQT – this value is the sequential steal threshold; it is expressed as a percentage of the virtual buffer pool that can be occupied by sequentially accessed pages. Tune buffer pools for sequential access, such as scans and sorting, by raising VPSEQT. The default is 80%.
- VPPSEQT – this value is the sequential steal threshold for parallel operations; the default value is 50%.
All of these parameters can be changed using the ALTER BUFFERPOOL command. Consider lowering the deferred write thresholds to enable trickle write: lower values cause changed data to be written to disk more frequently, instead of waiting for a system checkpoint to occur. Also consider adjusting the sequential steal thresholds to match the type of data being buffered: if the data is mostly sequentially accessed, increase these thresholds; if it is mostly randomly accessed, decrease them. Of course, these are basic, high-level guidelines that you will need to study before adjusting them at your shop.
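A sketch of what lowering the deferred write thresholds and raising the sequential steal threshold for a sequentially accessed pool might look like follows. The pool name and threshold values are illustrative only, and VDWQT takes both a percentage and an absolute page count (0 here means the percentage governs):

```
-ALTER BUFFERPOOL(BP3) DWQT(30) VDWQT(5,0) VPSEQT(90)
-DISPLAY BUFFERPOOL(BP3) DETAIL
```

The DISPLAY BUFFERPOOL command with DETAIL can then be used to verify the new attributes and to monitor how often the thresholds are actually being reached.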
The PGSTEAL parameter can also be adjusted to modify the manner in which the buffer pool steals pages when new data arrives and there is no space for it. There are three options: LRU, FIFO, and NONE. The typical option is LRU, or least recently used, which causes the oldest pages (in terms of when they were last accessed) to be stolen before newer pages. An alternate approach is FIFO, or first in/first out. With this approach there is no need for DB2 to monitor when the data was last accessed to determine which pages are least recently used. FIFO can reduce CPU usage (no LRU algorithm needed) and works well if the data is read once and never accessed again. The final option, NONE, is a special case to be used when a buffer pool is large enough to hold all of the data assigned to it, so no page stealing is needed. When NONE is specified, DB2 will preload the buffer pool when the objects are opened, basically creating an in-memory area for the data.
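For instance, a pool dedicated to lookup tables that fit entirely in memory might be set up as below (BP8 is a hypothetical pool, and this assumes a DB2 version that supports PGSTEAL(NONE)):

```
-ALTER BUFFERPOOL(BP8) PGSTEAL(NONE)
```

Only assign objects to such a pool if their total size truly fits within VPSIZE; otherwise DB2 must fall back to stealing pages anyway.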
You can also use the PGFIX parameter to fix buffer pool pages in real storage. Doing so avoids the processing time that DB2 needs to fix and free pages for every I/O operation, which can reduce CPU consumption for buffer pools involved in I/O-intensive applications.
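Page fixing is enabled per pool, for example (BP4 is illustrative; make sure sufficient real storage is available before fixing a large pool):

```
-ALTER BUFFERPOOL(BP4) PGFIX(YES)
```

Note that a PGFIX change generally does not take effect until the buffer pool is next allocated, so plan for that when making the change.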
In addition to buffer pools, DB2 uses memory for the EDM (Environmental Descriptor Manager) pools, which cache application-related elements used by DB2 programs. There are multiple EDM pools, and the internal structures cached include DBDs, SKCTs, CTs, SKPTs, and PTs. They also include the authorization cache for plans and packages, as well as the cache for dynamic SQL mini-plans. We will not go into any detail on the EDM pools in this blog post, other than to say that keeping these structures in memory can improve the performance of your DB2 applications.
Overall, remember that buffer pool and EDM pool tuning are in-depth subjects that cannot be covered completely in a high-level blog post such as this. So, study those IBM DB2 manuals – and learn by doing. Additionally, there is much more to proper DB2 system performance tuning than memory tuning. Other system elements requiring attention include allied agent setup (CICS, TSO, etc.), network configuration, locking, logging, and Parallel Sysplex configuration and management for DB2 data-sharing shops.
This blog was originally published on Craig Mullins’ blog: http://db2portal.blogspot.com/2009/04/basic-db2-buffering-and-memory.html
For additional resources for DBAs please download Craig Mullins’ latest white paper, “The Many Different Types of DBAs.”