As a DB2 for z/OS system grows and evolves, the values of certain ZPARM parameters should be reviewed to ensure that they are still appropriate in the context of the current environment. Neglecting to do this can negatively impact the operational efficiency (or some other characteristic) of the system. One ZPARM that should be periodically revisited is DSMAX.
DSMAX specifies the maximum number of data sets (those associated with table spaces and indexes) that can be open and allocated to DB2 at one time. For a long time, the maximum value you could specify for DSMAX was 10,000. That changed, about 15 years ago, with DB2 Version 6, which allowed the DSMAX specification to be as high as 32,767. The upper bound of DSMAX was further increased, to 65,041, with DB2 Version 8; it went to 100,000 with DB2 9, and to 200,000 with DB2 10. The default value of DSMAX has also steadily increased: it was 3,000 with DB2 Version 7, went to 9,960 with DB2 Version 8, and to 20,000 with DB2 10.
The rise in maximum and default DSMAX values was warranted by the increase in the number of data sets associated with a typical DB2 system, and THAT phenomenon owes to a number of factors: growth over time in the size of DB2 for z/OS-managed databases; more table spaces for a given number of tables (a universal table space can hold one and only one table); increasing use of DB2 for z/OS as the database management system for vendor-supplied applications (several of which are characterized by databases that contain a great many objects); more table space partitions (DB2 Version 8 introduced table-controlled partitioning, which allows a table to have up to 4096 partitions, versus a maximum of 254 partitions for an index-controlled partitioned table space); and more index partitions (table-controlled partitioning enabled partitioning of more than one index on a partitioned table).
Here's why you should periodically check your DSMAX value and its effect on a DB2 subsystem's operational efficiency: when the number of data sets open and allocated to a DB2 subsystem reaches 99% of that subsystem's DSMAX specification, DB2 will physically close some open data sets (300 of them, or 3% of DSMAX, whichever is smaller). Which data sets will be closed? DB2 will first look for data sets belonging to objects defined with CLOSE YES, and will select some of those for closing on a least-recently-used basis. If those close operations do not sufficiently reduce the number of open data sets, DB2 will close data sets belonging to objects defined with CLOSE NO, again making selections on a least-recently-used basis. Some level of this data set close activity is typically not a problem, but too much can negatively impact system performance. I've seen a number of situations in which an overly high level of DB2 data set close activity has been overlooked. Besides its direct cost, that close activity drives more data set open activity than you'd like, because data sets closed for DSMAX reasons have to be re-opened when they are next accessed.
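To make that threshold arithmetic concrete, here is a minimal sketch in Python (purely illustrative -- the function names are my own, not anything provided by DB2) of the trigger and close-count rules just described:

```python
def dsmax_close_trigger(dsmax: int, open_data_sets: int) -> bool:
    """DB2 begins physically closing data sets when the number of
    open data sets reaches 99% of the DSMAX specification."""
    return open_data_sets >= 0.99 * dsmax

def data_sets_to_close(dsmax: int) -> int:
    """DB2 closes 300 data sets or 3% of DSMAX, whichever is smaller."""
    return min(300, int(dsmax * 0.03))

# Example: with DSMAX = 20000 (the DB2 10 default), closing begins at
# 19,800 open data sets, and DB2 closes 300 of them (3% of 20,000 is
# 600, so the 300 cap applies).
print(dsmax_close_trigger(20000, 19800))  # True
print(data_sets_to_close(20000))          # 300
```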
Where can you go to get a handle on data set close activity in your DB2 environment? That's easy: just get your hands on a DB2-monitor-generated statistics long report (ideally, one that captures activity for a particularly busy hour of the day) or an online display of DB2 subsystem statistics, and locate the information pertaining to "open/close activity." There you will find a number of useful fields. Among these is the high-water mark for the number of open DB2 data sets. If that figure is at or very near the DSMAX value for the DB2 subsystem, that's telling you something. More important, take a look at the field labeled DS CLOSED-THRESH REACHED (or something similar -- different DB2 monitor products can have slightly different labels for the same field). That shows you the number of times that DB2 data sets were physically closed due to the DSMAX threshold being reached. If that figure indicates a data set close rate of more than 1 per second (e.g., if data sets were closed due to the DSMAX threshold being reached more than 3600 times in a one-hour elapsed time interval), my recommendation would be to increase the value of DSMAX.
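Turning that report counter into a rate is simple arithmetic. Here's a small illustrative Python sketch of the 1-per-second rule of thumb (the counter value in the example is hypothetical, not taken from any real report):

```python
def close_rate_per_second(ds_closed_thresh_reached: int,
                          interval_seconds: float) -> float:
    """Convert the DS CLOSED-THRESH REACHED counter from a statistics
    reporting interval into a per-second data set close rate."""
    return ds_closed_thresh_reached / interval_seconds

# Example: 5400 threshold-driven closes in a one-hour (3600-second)
# interval is a rate of 1.5 per second -- above the 1-per-second rule
# of thumb, so a DSMAX increase would be indicated.
rate = close_rate_per_second(5400, 3600)
print(f"{rate:.2f} closes per second")  # 1.50 closes per second
```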
How high should the DSMAX value be for a DB2 subsystem? A value high enough to get the rate of data set close operations to the single digits per minute (or even to zero) is nice, but there is such a thing as too high. The increases in DSMAX maximum and default values that I mentioned previously were enabled largely by z/OS enhancements that reduced the amount of below-the-line (referring to the 16 MB level) virtual storage required for each open data set. That below-the-line virtual storage requirement, while smaller than before, is not zero. The 200,000 maximum DSMAX value allowed in a DB2 11 system might best be thought of as a theoretical limit -- the actual maximum value that would work in your environment would likely be considerably less than 200,000. If your current DSMAX value is resulting in a higher-than-desired rate of data set close operations, consider bumping the value up by 1000 or 2000 to see if that gets the close rate down to where you want it to be. Generally speaking, I'd say that a DSMAX value of up to 30,000 should not be problematic. If you want to go beyond that figure, you probably first ought to get a good idea of the availability of virtual storage below the 16 MB line for the DB2 subsystem of interest. More information on calculating an appropriate DSMAX value can be found in the DB2 for z/OS Installation and Migration Guide.
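Putting that tuning guidance together, here's a rough Python sketch of the decision logic -- my own formulation of the rules of thumb above, not any official procedure:

```python
def suggest_dsmax(current_dsmax: int, close_rate_per_sec: float,
                  step: int = 2000) -> int:
    """Rough decision logic: if threshold-driven closes exceed about
    1 per second, bump DSMAX by a modest step (1000-2000) and re-check;
    beyond 30,000, below-the-line virtual storage availability should
    be verified before going higher."""
    if close_rate_per_sec <= 1.0:
        return current_dsmax  # close rate is acceptable; leave DSMAX alone
    proposed = current_dsmax + step
    if proposed > 30000:
        print("Caution: check below-the-16MB-line virtual storage "
              "availability before taking DSMAX above 30,000")
    return proposed

# Example: a close rate of 1.5 per second with DSMAX = 20000
print(suggest_dsmax(20000, 1.5))  # 22000
```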
To sum up the message of this blog entry: first check, using your DB2 monitor (I particularly like to use monitor-generated statistics long reports), the rate of data set close activity resulting from the DSMAX threshold being reached for the DB2 subsystems at your site. If that close rate exceeds 1 per second, bump the value of DSMAX up by something in the range of 1000 to 2000. If you feel a need to take DSMAX above 30,000, proceed with caution and, ideally, with good knowledge of the availability of below-the-line virtual storage in the target system.
Stay on top of this going forward. Don't let "data set creep" cause an overly high level of data set close activity for your DB2 subsystems.