Not long ago, a Db2 for z/OS DBA sent me a question via a colleague of mine. He described in an email the procedure his team regularly used to consolidate extents for a Db2 table space-related data set - taking the extent count back down to one whenever it got into double digits. He noted that this extent-consolidation procedure was more time-consuming and CPU-intensive than desired, and he wanted to know if I had any suggestions for making it more efficient. In fact, I did have a suggestion for improving the efficiency of the Db2 data set extent consolidation procedure used at this person's site. My suggestion: STOP DOING THAT.
It might have been the comedian Henny Youngman who'd get laughs with this joke: "I told my doctor, 'It hurts when I do this.' He told me, 'Stop doing that.'" In all seriousness, Henny (or whoever it was) had an important point there. When some procedure causes pain in the form of CPU consumption and/or labor intensity, the best way to take that pain down to zero is to dispense with said procedure. In the context of Db2 data set extent-reduction efforts, my "Stop doing that" suggestion might engender this response: "But, if we dispense with our Db2 data set extent consolidation procedure, we'll end up with Db2 data sets that have a lot of extents!" My response to that response: "So?"
Here's the deal, folks: extents matter WAY less for Db2 for z/OS data sets than they did a long time ago (like, back in the 1990s). Way back when, a real concern about a Db2 table space data set going into a lot of extents was the impact this could have on prefetch read performance. Such a negative extent effect could in fact occur because - again, way back when - Db2 prefetch read operations were satisfied from spinning disk. Disk controller cache memory sizes were so small back in the day that Db2 would bypass cache for prefetch reads, and those multi-page reads from spinning disk could suffer, performance-wise, if the data set holding the page set or partition being accessed (table space or index) had gone into a good number of extents.
Things are different now, in a number of ways:
- Db2 subsystem buffer pool configurations are MUCH larger than they were some years ago, owing largely to 1) mainframe memory getting less expensive all the time (on a per-gigabyte basis), leading organizations to load up on z/OS real storage (often to the tune of several hundred GB for a production z/OS LPAR); and 2) people realizing that if you give Db2 a lot of memory (e.g., for larger buffer pools), it generally performs really well. Much larger buffer pool configurations mean that a much higher percentage of Db2 page requests (synchronous and asynchronous) are satisfied from pages in memory, as opposed to requiring disk subsystem read I/O operations. Obviously, when page requests are satisfied from pages in memory, data set extents on disk are irrelevant.
- Disk controller cache memory sizes have been really big for a long time; and, that large disk controller cache memory resource is managed in a high-performing way by powerful microprocessors that are an integral part of modern enterprise disk subsystems. What these large and intelligently managed disk controller cache resources mean is that a read request (synchronous or asynchronous) that cannot be satisfied from the buffer pool configuration will often result in a read from disk controller cache, as opposed to requiring a read from spinning disk. As is true for a read request that is satisfied from data in memory (in a buffer pool), data set extents are not relevant for a read of data from disk controller cache.
- Even when a Db2 read request leads to accessing data all the way back on spinning disk, the architecture of modern enterprise disk subsystems - primarily RAID in nature - reduces the performance impact of data set extents from what it once was.
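By the way, if you want to see where your extent counts actually stand before deciding whether any action is warranted, the Db2 real-time statistics tables record an extent count for each table space and index space data set. Below is a minimal sketch of such a check - the threshold of 50 extents is an arbitrary illustrative value on my part, not a recommendation:

   -- List table space data sets with elevated extent counts, using
   -- the EXTENTS column of the real-time statistics tables
   -- (the threshold of 50 is an arbitrary example value)
   SELECT DBNAME, NAME, PARTITION, EXTENTS
     FROM SYSIBM.SYSTABLESPACESTATS
     WHERE EXTENTS > 50
     ORDER BY EXTENTS DESC;

   -- And a similar check for index space data sets
   SELECT DBNAME, NAME, PARTITION, EXTENTS
     FROM SYSIBM.SYSINDEXSPACESTATS
     WHERE EXTENTS > 50
     ORDER BY EXTENTS DESC;

Chances are you'll find that even the data sets with extent counts well into double digits are not the ones associated with any noticeable performance pain.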
Very interesting!
We can review our triggers for REORGs after this article.
Also, even when a disk read is needed, nowadays it is very common to have TB/PB of capacity in SSD disk subsystems - much faster than mechanical disks.
True - even more reason to be less concerned about Db2 data set extents.
Robert
Hi Robert
How about hitting the z/OS limit of 59 volumes for a data set? Recently we hit this limit when using SECQTY -1. What is the best way to prevent this, or to monitor whether we are approaching the 59-volume limit for a particular Db2 data set?
I'm not going to be able to help with the monitoring question - I'm not an expert on checking the number of volumes across which a data set has spread. I'd recommend checking with a mainframe storage systems administrator about that.
As for what you could do to make hitting the 59-volume limit less likely for a Db2 for z/OS data set, I have a couple of thoughts:
1) Don't over-utilize disk volumes. It might seem to make sense, from a cost-efficiency perspective, to use as much of the space on disk volumes as possible ("We paid for this disk space - we're going to use every bit of it"); however, that mind-set can lead to operational problems. One is the risk of causing a data set to spread over too many volumes - a result of having too little free space, on average, on the disk volumes. Another risk: you may have so little free space on disk volumes that online REORG becomes effectively impossible for larger database objects (I've seen that at a few Db2 for z/OS sites). Folks at one Db2 site told me that their mainframe storage management team aims for 60-80% utilization of disk space in the production environment. Is that "wasting" space? Not in the opinion of that team - it was a decision made to maximize operational flexibility and reduce risk from a mainframe disk storage perspective.
2) Be careful about DSSIZE for Db2 table spaces. For a partition-by-range table space with relative page numbering in effect, DSSIZE can be as large as 1024 GB. Keep in mind that the larger the DSSIZE value, the greater the number of volumes across which the associated data set is likely to spread. If disk volume utilization levels are really high at your site, you may need to go with a larger number of smaller data sets for a table space, versus a smaller number of larger data sets.
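To make that second point concrete, here is a hedged sketch of DDL for a partition-by-range table space with relative page numbering - the object names, NUMPARTS, DSSIZE value and stogroup are made-up illustrative values, not recommendations:

   -- Illustrative sketch only: names and sizing values are examples
   CREATE TABLESPACE TS_EXAMPLE IN DB_EXAMPLE
     NUMPARTS 32               -- more, smaller partitions...
     DSSIZE 64 G               -- ...rather than fewer, larger ones
     PAGENUM RELATIVE          -- relative page numbering
     USING STOGROUP SG_EXAMPLE
       PRIQTY -1               -- let Db2 take the default primary quantity
       SECQTY -1;              -- default secondary; with MGEXTSZ=YES this
                               -- follows Db2's sliding-scale allocation

For a given total table space size, a smaller DSSIZE simply means the data is spread over more data sets, each of which is less likely to bump into the 59-volume limit.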
Robert