Friday, March 29, 2024

Db2 for z/OS Data Sets: If You're Worrying About Extents, You Can Probably Stop Worrying

Not long ago, a Db2 for z/OS DBA sent me a question via a colleague of mine. In an email, he described the procedure his team regularly used to consolidate a Db2 table space-related data set's extents back down to one whenever the extent count got into double digits. He noted that this extent-consolidation procedure was more time-consuming and CPU-intensive than desired, and he wanted to know if I had any suggestions for making it more efficient. In fact, I did have a suggestion for improving the efficiency of the Db2 data set extent consolidation procedure used at this person's site. My suggestion: STOP DOING THAT.

It might have been the comedian Henny Youngman who'd get laughs with this joke: "I told my doctor, 'It hurts when I do this.' He told me, 'Stop doing that.'" In all seriousness, Henny (or whoever it was) had an important point there. When some procedure causes pain in the form of CPU consumption and/or labor intensity, the best way to take that pain down to zero is to dispense with said procedure. In the context of Db2 data set extent-reduction efforts, my "Stop doing that" suggestion might engender this response: "But, if we dispense with our Db2 data set extent consolidation procedure, we'll end up with Db2 data sets that have a lot of extents!" My response to that response: "So?"

Here's the deal, folks: extents matter WAY less for Db2 for z/OS data sets than they did a long time ago (like, back in the 1990s). Way back when, a real concern about a Db2 table space data set going into a lot of extents was the impact this could have on prefetch read performance. Such a negative extent effect could in fact occur because - again, way back when - Db2 prefetch read operations were satisfied from spinning disk. Disk controller cache memory sizes were so small back in the day that Db2 would bypass cache for prefetch reads, and those multi-page reads from spinning disk could suffer, performance-wise, if the data set holding the page set or partition being accessed (table space or index) had gone into a good number of extents.

Things are different now, in a number of ways:

  • Db2 subsystem buffer pool configurations are MUCH larger than they were some years ago, owing largely to 1) mainframe memory getting less expensive all the time (on a per-gigabyte basis), leading organizations to load up on z/OS real storage (often to the tune of several hundred GB for a production z/OS LPAR); and 2) people realizing that if you give Db2 a lot of memory (e.g., for larger buffer pools), it generally performs really well. Much larger buffer pool configurations mean that a much higher percentage of Db2 page requests (synchronous and asynchronous) are satisfied from pages in memory, as opposed to requiring disk subsystem read I/O operations. Obviously, when page requests are satisfied from pages in memory, data set extents on disk are irrelevant.
  • Disk controller cache memory sizes have been really big for a long time, and that large disk controller cache resource is managed in a high-performing way by powerful microprocessors that are an integral part of modern enterprise disk subsystems. What these large and intelligently managed disk controller cache resources mean is that a read request (synchronous or asynchronous) that cannot be satisfied from the buffer pool configuration will often be satisfied by a read from disk controller cache, as opposed to requiring a read from spinning disk. As is true for a read request that is satisfied from data in memory (in a buffer pool), data set extents are not relevant for a read of data from disk controller cache.
  • Even when a Db2 read request leads to accessing data all the way back on spinning disk, the architecture of modern enterprise disk subsystems - primarily RAID in nature - reduces the performance impact of data set extents from what it once was.

So, that's point #1: Db2 for z/OS data set extents just don't matter, from a performance perspective, as they once did. This point is underscored by the way in which Db2-managed secondary space allocation (the use of which I recommend) works. How do you get Db2 to manage secondary space allocation for data sets? You can do that by NOT including a SECQTY specification in a CREATE TABLESPACE or CREATE INDEX statement. For an existing table space or index, you can alter the object with a specification of SECQTY -1 to tell Db2 that you want it to manage secondary disk space allocation for the object. When Db2 manages secondary space allocation for a table space or index data set, it does so using what's called a "sliding scale" algorithm, which causes subsequent secondary space allocation quantities to be larger than those previously requested for the data set.

If you check out the description of the sliding scale algorithm in the Db2 for z/OS documentation, you'll see the following (underlining added by me for emphasis): "The first 127 extents are allocated in increasing size, and the remaining extents..." Question for you: if extents were problematic from a performance perspective, would Db2's own secondary space allocation algorithm take you to 127 extents and beyond, as needed? Answer: NO. If Db2 doesn't care about this, should you? Again, NO.
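
As a quick illustration of the SECQTY point above, here is a minimal DDL sketch (the database, table space, and index names are hypothetical):

  -- New object: just leave SECQTY out, and Db2 will manage
  -- secondary space allocation for the associated data set(s):
  CREATE TABLESPACE TS1 IN DB1;

  -- Existing objects: SECQTY -1 tells Db2 to take over
  -- secondary space allocation management:
  ALTER TABLESPACE DB1.TS1 SECQTY -1;
  ALTER INDEX IX1 SECQTY -1;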

"But wait," you might say, "Even if data set extents aren't a performance concern in a Db2 environment, there's a z/OS data set extent limit, right? If we hit that and Db2 can't extend a data set, application processes inserting rows into a table could fail, right?" True, but the data set extent limit is a lot bigger than it used to be. Back in the day, it was 251, and indeed that number might have me glancing in the rearview mirror with some of that "objects are closer than they may appear" anxiety. But quite some time ago - with z/OS 1.7 - the extent limit for a data set went to 7257 (when the Extent Constraint Removal option is set to YES in the SMS data class to which the data set belongs). When you let Db2 manage secondary space allocation for a table space or index data set, you are virtually assured that the data set will be able to reach its maximum size before it hits the extent limit.

Oh, and here's a fun fact: there is an EXTENTS column in the SYSIBM.SYSTABLESPACESTATS and SYSIBM.SYSINDEXSPACESTATS real-time statistics tables in the Db2 catalog. That column long had the SMALLINT data type, which can accommodate values of up to 32,767. When the Db2 catalog goes to the V13R1M501 level, the data type of the EXTENTS column changes to INTEGER - a type that can accommodate values of up to about 2.1 billion. I'd say this reflects an expectation that the z/OS data set extent limit is not going to stay at 7257 for the long haul.
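
Those real-time statistics tables also make it easy to see where your extent numbers stand. A query along these lines would do the trick (the 200-extent threshold is just an illustrative value):

  -- Table space data sets that have gone into more than 200 extents:
  SELECT DBNAME, NAME, PARTITION, EXTENTS
  FROM SYSIBM.SYSTABLESPACESTATS
  WHERE EXTENTS > 200
  ORDER BY EXTENTS DESC;

  -- And similarly for index spaces:
  SELECT DBNAME, NAME, PARTITION, EXTENTS
  FROM SYSIBM.SYSINDEXSPACESTATS
  WHERE EXTENTS > 200
  ORDER BY EXTENTS DESC;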

So, would I ever be concerned with the number of extents to which a Db2 for z/OS table space or index data set has gone? I'd say that an extent value that's below 200 for a data set would not concern me. Above 200? Maybe, though not in a "front-burner" kind of way. If I saw that a Db2 data set had reached a number of extents greater than 200, I might be inclined to reduce that number at least somewhat, probably by going to a larger PRIQTY value for the object and executing an online REORG to put the change into effect. Again, though, this would not be a "crisis response" action - more like a Db2 housekeeping task.
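
If you did decide to take that housekeeping action, it might look something like the sketch below (hypothetical object name and PRIQTY value; PRIQTY is expressed in kilobytes):

  -- Request a larger primary space allocation for the object:
  ALTER TABLESPACE DB1.TS1 PRIQTY 720000;

  -- Then materialize the change via online REORG (a utility
  -- control statement, shown here as a comment since it is not SQL):
  -- REORG TABLESPACE DB1.TS1 SHRLEVEL CHANGE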

Bottom line: if you've been spending your time, and mainframe CPU time, on aggressively keeping extent values low for Db2 data sets, my recommendation would be to ease up on that, because you can. Spend your time (and your mainframe's cycles) on more valuable tasks, like helping to get Db2-based applications designed (or enhanced) and deployed. That's where you'll make a bigger and more positive difference for your organization.

4 comments:

  1. Very interesting!
    We can review our REORG triggers after reading this article.
    Also, even if a disk read is needed, it is very common nowadays to have TB/PB SSD disk subsystems, which are much faster than mechanical disks.

    1. True - even more reason to be less concerned about Db2 data set extents.

      Robert

  2. Hi Robert
    How about hitting the z/OS limit of 59 volumes per data set? Recently we hit this limit when using SECQTY -1. What is the best way to prevent, or to monitor whether we are approaching, the 59-volume limit for a particular Db2 data set?

    1. I'm not going to be able to help with the monitoring question - I'm not an expert on checking the number of volumes across which a data set has spread. I'd recommend checking with a mainframe storage systems administrator about that.

      As for what you could do to make hitting the 59-volume limit less likely for a Db2 for z/OS data set, I have a couple of thoughts:

      1) Don't over-utilize disk volumes. It might seem to make sense, from a cost-efficiency perspective, to use as much of the space on disk volumes as possible ("We paid for this disk space - we're going to use every bit of it"); however, that mind-set can lead to operational problems. One is the risk of causing a data set to spread over too many volumes - a result of having too little free space, on average, on the disk volumes. Another risk: you may have so little free space on disk volumes that online REORG becomes effectively impossible for larger database objects (I've seen that at a few Db2 for z/OS sites). Folks at one Db2 site told me that their mainframe storage management team aims for 60-80% utilization of disk space in the production environment. Is that "wasting" space? Not in the opinion of that team - it was a decision made to maximize operational flexibility and reduce risk from a mainframe disk storage perspective.

      2) Be careful about DSSIZE for Db2 table spaces. For a partition-by-range table space with relative page numbering in effect, DSSIZE can be as large as 1024 GB. Keep in mind that the larger the DSSIZE value, the greater the number of volumes across which the associated data set is likely to spread. If disk volume utilization levels are really high at your site, you may need to go with a larger number of smaller data sets for a table space, versus a smaller number of larger data sets.
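
      To illustrate that DSSIZE point (hypothetical names and values; a reasonable DSSIZE depends on the object and the environment):

        -- A range-partitioned table space with relative page numbering,
        -- using a moderate DSSIZE rather than the 1024 GB maximum:
        CREATE TABLESPACE TS2 IN DB1
          NUMPARTS 64
          PAGENUM RELATIVE
          DSSIZE 64 G;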

      Robert
