Wednesday, June 26, 2024

Db2 for z/OS: Getting From Function Level 500 to 510 in a Db2 12 Environment

It's pretty well known that getting the activated function level of a Db2 12 for z/OS system to V12R1M510 is required prior to migrating that Db2 12 system to Db2 13 (I blogged about that a couple of years ago). At present (June, 2024), there are still a good number of sites running Db2 12 for z/OS with an activated function level of V12R1M500. I sometimes get, from people at such sites, a question like this: "How should we go from Db2 12 function level 500 to function level 510?" Generally speaking, what these folks are wanting is for me to opine on going straight from Db2 12 function level 500 to function level 510, versus getting from FL500 to FL510 by first going to some "in-between" function level and then going to 510. In this blog entry, I'll tell you what I tell people who ask me this "getting from A to B" question.

Right off the bat, I'll say that I'd be comfortable with either approach - in other words, there is not a "right" and a "wrong" way to go from Db2 12 FL500 to FL510. Let's consider first the "one fell swoop" approach of going straight from Db2 12 FL500 to FL510. What is required, in a technical sense, to do this? The code level of the Db2 12 system (which you can check using the output of the Db2 command -DISPLAY GROUP) has to be 121510. The catalog level (also check-able with -DISPLAY GROUP) has to be V12R1M509 (there is no 510 catalog level; and, you can use the CATMAINT utility to take the catalog level to V12R1M509 if it is not already there). Additionally, you can't have any packages that have been executed within the past 1.5 years that were last bound or rebound prior to Db2 11 (this "no pre-Db2 11 still-in-use packages" requirement is explained in the aforementioned blog entry about preparing to migrate from Db2 12 to Db2 13).
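If you want to see where a given Db2 12 system stands, and then make the move, those checks and steps boil down to just a few commands and one utility statement. Here's a minimal sketch - verify the exact syntax in the Db2 12 documentation for your environment before running anything:

  Check the code level, catalog level and activated function level:
    -DISPLAY GROUP DETAIL

  If the catalog level is not yet V12R1M509, get it there with the CATMAINT utility (typically run via a job like DSNTIJTC):
    CATMAINT UPDATE LEVEL(V12R1M509)

  Then do the actual activation:
    -ACTIVATE FUNCTION LEVEL (V12R1M510)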

Let's say those technical prerequisites are in place. Given that you could, in that case, go straight from Db2 12 FL500 to FL510, why wouldn't you? What would hold you back? In my experience, the chief concern is often something like this: "If we make the big jump from Db2 12 FL500 to FL510, I'm afraid that will mess up some of our Db2-accessing applications."

That concern, though understandable, is essentially unfounded if you don't change package APPLCOMPAT values when making the big jump in the activated function level of the Db2 12 system. I've written plenty about APPLCOMPAT, including a part 1 and part 2 entry in this blog. The important thing to remember about APPLCOMPAT, in relation to a change in the activated function level of a Db2 system, is this: a package's APPLCOMPAT value can serve to insulate an associated application program from SQL behavioral changes that could be introduced via a change to a higher activated function level in a Db2 for z/OS environment. Let's unpack that statement. A SQL behavioral change - referred to in the Db2 for z/OS documentation as a "SQL incompatibility" - can be succinctly described thusly: same SQL, same data, different result. That might sound kind of alarming. Here's the two-pronged good news: 1) SQL incompatibilities are quite few and far between (they are documented whenever they are introduced by a new Db2 for z/OS version or function level), and 2) they tend to affect either very few or none of an organization's Db2-accessing applications (they are usually associated with highly specialized, often unusual scenarios).

Now, while it's highly unlikely that any of an organization's Db2-accessing programs would be negatively impacted by a SQL incompatibility associated with a newer Db2 function level, I can't say that the risk of application impact related to a change in activated Db2 function level is zero if applications are exposed to SQL behavioral changes. That's where APPLCOMPAT comes in. See, if the Db2 package used by an application program has an APPLCOMPAT value of (for example) V12R1M100, SQL behavior for that program will be that of a Db2 12 system with function level 100 activated, even if the Db2 environment in which the program is executing has an activated function level of V12R1M510 (or V13R1M100 or some other Db2 13 function level, after the system has been migrated to Db2 13). That is how APPLCOMPAT insulates a Db2 application program from SQL behavioral changes that might be introduced when a higher function level is activated for the Db2 system in which the program's SQL statements execute. So, make a jump in activated function level from 500 to 510 in your Db2 environment, and leave package APPLCOMPAT values as they are, and your Db2 application programs will be (conceptually speaking) unaware of the environmental change.
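By the way, if you want a quick read on where your packages stand APPLCOMPAT-wise before (and after) such a change, a simple catalog query does the job. Here's a sketch, using the APPLCOMPAT and LASTUSED columns of SYSIBM.SYSPACKAGE (the 18-month window is just an example, echoing the "executed within the past 1.5 years" criterion mentioned above):

  -- Count packages by APPLCOMPAT level, limited to packages used in the past 18 months
  SELECT APPLCOMPAT, COUNT(*) AS PACKAGE_COUNT
    FROM SYSIBM.SYSPACKAGE
   WHERE LASTUSED >= CURRENT DATE - 18 MONTHS
   GROUP BY APPLCOMPAT
   ORDER BY APPLCOMPAT;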

OK, so I'd be plenty comfortable in taking a Db2 12 system's activated function level from 500 right to 510, and I hope you'd be similarly comfortable with such a move. You might find, however, that "selling" the powers that be on a plan for getting to Db2 12 FL510 from FL500 would be easier if that plan included a "rest stop" at a function level between 500 and 510. If that were the case, you could go that route and not lose any honor in my eyes; and, yours wouldn't be the first Db2-using organization to take the 2-step approach for getting from Db2 12 FL500 to FL510. I can tell you that a pretty popular intermediate stop on the way from Db2 12 FL500 to FL510 is FL505. This is true for a couple of reasons: stopping for a while (for however long makes people comfortable: a few weeks, a couple of months - whatever) right between FL500 and FL510 appeals to some folks. Additionally, Db2 12 FL505 makes available to you one of my favorite Db2 12 features: rebind phase-in.

Alright, to sum up: if your organization has been sitting at Db2 12 FL500 for a long time, and you want to get to FL510 so you can go from there to Db2 13, consider taking one of two routes from A to B:

  • Straight through - If your code level and catalog level and no-still-in-use-pre-Db2-11-packages ducks are all in a row, go from Db2 12 FL500 straight to FL510, keeping in mind that leaving package APPLCOMPAT values as they are is a way to insulate Db2-accessing programs from any impact related to the Db2 environmental change.
  • Go part way, sit a spell, then go the rest of the way. As noted, Db2 12 FL505 is a popular rest stop en route from FL500 to FL510.
The main point is, GET THIS DONE. Db2 12 goes out of service after December 31, 2025. You still have plenty of time to get to Db2 12 FL510 and then on to Db2 13, but if you sit back too long you could end up feeling a little anxious about the whole thing. Who needs that? I hope information in this blog entry will help you get your organization's Db2 12-to-13 show on the road.

Thursday, May 30, 2024

Db2 for z/OS: Really Big Buffer Pools are Great, but Don't Stop There

Back in 2018, I reviewed an organization's production Db2 for z/OS environment, and saw at that time the largest z/OS LPAR real storage size I'd ever seen: 1100 GB. The Db2 subsystem running in that LPAR had (not surprisingly) the largest buffer pool configuration I'd ever seen: 879 GB (referring to the aggregate size of all of the Db2 subsystem's buffer pools). Within that buffer pool configuration was (again, not surprisingly) the largest single Db2 buffer pool I'd ever seen: approximately 262 GB (66,500,000 4K buffers).

Those "biggest I've ever seen" figures sustained that status until just recently, when I reviewed another production Db2 for z/OS system. The new "biggest I've ever seen" numbers are way larger than the previous maximums:

  • A z/OS LPAR with 2.4 terabytes of real storage
  • A Db2 subsystem with 1.7 terabytes of buffer pool resources.
  • An individual buffer pool of approximately 560 GB in size (140,000,000 4K buffers)
What I saw is great, and it's the future, folks. That humongous buffer pool configuration is eliminating a humongous number of read I/Os (synchronous and asynchronous), and that, in turn, boosts CPU efficiency and throughput for the Db2-accessing application workload. z/OS LPAR real storage sizes are getting bigger and bigger, in part because mainframe memory keeps getting less expensive on a per-GB basis, and in part because z/OS can utilize ever-larger amounts of real storage (z/OS 3.1 supports up to 16 TB of memory for one LPAR - though with the way memory is currently packaged on an IBM Z server, you're advised to limit the size of one LPAR's real storage resource to 10 TB), and in part because loading up on real storage does not increase the cost of software that runs in a z/OS system (the cost of that software is based on mainframe general-purpose CPU utilization - not real storage size).

Organizations that run production Db2 subsystems in z/OS LPARs that have large amounts of real storage are increasingly going with really large Db2 buffer pool configurations. That's a smart thing to do. Db2 allows for up to 16 TB of buffer pool space for a single subsystem, and going big is a definite plus for Db2 workload performance and CPU efficiency (just be sure that you don't over-burden a z/OS LPAR's real storage resource: you want to keep the LPAR's demand paging rate - available via an RMF Summary report - below 1 per second).

Here's the point I want to make with this blog entry: while really big buffer pools are great for Db2 application performance, don't focus solely on the size of a Db2 subsystem's buffer pool configuration. In addition to going big, take other steps to maximize the positive performance benefits of a large Db2 buffer pool configuration:

  • Use large real storage page frames for the busier buffer pools - In my view, a buffer pool's GETPAGE rate (obtainable from a Db2 monitor-generated statistics long report) is the best indicator of a pool's busy-ness. My recommendation is to use large real storage page frames for every pool that has a GETPAGE rate (during a busy hour of the processing day) in excess of 1000 per second. Large real storage page frames enhance CPU efficiency for page access by reducing the CPU cost of translating virtual storage addresses to real storage addresses. Some related things to note:
    • Page-fixing of buffers in a pool is a prerequisite for using large real storage page frames - This is done for a pool via a specification (in an -ALTER BUFFERPOOL command) of PGFIX(YES). Note that actually changing from not-page-fixed to page-fixed requires deallocation and reallocation of a pool - this usually happens as a consequence of recycling (i.e., stopping and then restarting) the associated Db2 subsystem. That deallocation and reallocation is also required to go from the default real storage frame size of 4K to a larger frame size.
    • Use the appropriate real storage frame size - For a buffer pool that is at least 20 GB in size (a little over 5,000,000 buffers, if we're talking about a 4K pool), I recommend the use of 2 GB page frames (this is accomplished via a specification of FRAMESIZE(2G) in an -ALTER BUFFERPOOL command). For a pool that is smaller than 20 GB, I recommend the use of 1 MB page frames (FRAMESIZE(1M)). Note that if a pool is defined with PGSTEAL(NONE) (see below for information about such pools, which are known as "contiguous" pools), 2 GB frames cannot be used - you need to go with FRAMESIZE(1M) for a PGSTEAL(NONE) pool, unless objects assigned to the pool are quite small (e.g., less than 100 4K pages), in which case the default 4K frame size would be appropriate (for a PGSTEAL(NONE) pool, Db2 will use a given large real storage page frame for buffers that hold pages of one object assigned to the pool).
    • Set the value of the z/OS parameter LFAREA appropriately - LFAREA, a parameter in the IEASYSnn member of the z/OS data set named SYS1.PARMLIB, specifies the amount of a z/OS LPAR's real storage resource that is to be managed in 1 MB (and, possibly, 2 GB) frames. I generally like to see an LFAREA specification that provides enough 1 MB (and 2 GB, when appropriate) frames to fully back FRAMESIZE(1M) (and, if relevant, FRAMESIZE(2G)) pools, but not much more than that. Why not much more? Because a lot of processes in a z/OS system can only use 4 KB page frames. If you think you might want to later enlarge some Db2 buffer pools that have a FRAMESIZE(1M) or a FRAMESIZE(2G) specification, you can make the LFAREA value for the frame size in question larger than you need at present - just don't go overboard with that.
    • Use the output of the Db2 command -DISPLAY BUFFERPOOL(ACTIVE) DETAIL to verify that pools that have a preferred frame size of 1M or 2G are fully backed by frames of the desired size - If you see, in the output of that command, that a pool with a VPSIZE of 100,000 buffers and a preferred frame size of 1M has 70,000 buffers allocated in 1M frames and 30,000 buffers allocated in 4K frames, it means that there aren't enough 1 MB frames in the system to fully back the pool. Note that for a pool that has a preferred frame size of 2G, you might see some buffers allocated in 1M frames, even if the system has enough 2 GB frames to "fully" back the pool. Why might that be the case? Well, in the interest of not wasting a lot of space in a 2 GB page frame, Db2 won't use such a frame if it can't fill at least about 95% of the frame with buffers. Suppose you have a pool that is about 41 GB in size. Db2 will use twenty 2 GB frames for that pool, and the remaining 1 GB of buffers will be allocated in 1 MB frames. As far as I'm concerned, that is not at all a big deal. No problem. (A brief command sketch for the -ALTER BUFFERPOOL and -DISPLAY BUFFERPOOL steps follows this list.)

  • If you have any PGSTEAL(NONE) buffer pools, aim for zero activity in the overflow area of those pools - A PGSTEAL(NONE) buffer pool is also known as a "contiguous" buffer pool. Such a pool is intended to be used to completely cache in memory the objects assigned to the pool. A contiguous buffer pool can optimize efficiency for page access, but some of that advantage is lost if there is any activity in the pool's overflow area, which is where any buffer stealing would occur (if the pool were in fact not large enough to hold every page of every object assigned to the pool). Output of the Db2 command -DISPLAY BUFFERPOOL (ACTIVE) DETAIL will show if there is any activity in the overflow area of a PGSTEAL(NONE) pool. If you see that there is activity in the overflow area of a PGSTEAL(NONE) pool (unless it's a really small amount of activity), either take one or more objects out of the pool (by reassigning them to other pools) or make the pool larger (always keeping in mind that you want the demand paging rate of the z/OS LPAR to be less than 1 per second - don't over-burden the LPAR's real storage resource).
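To make the frame-size mechanics described above a little more concrete, here's a minimal command sketch for a hypothetical 4K pool named BP10 (the pool name and VPSIZE are made up; and remember that the PGFIX change takes effect when the pool is next deallocated and reallocated, and that the LFAREA value in IEASYSnn has to provide enough large frames to back the pool):

  Request page-fixed buffers and 2 GB frames for a pool of 6,000,000 4K buffers (roughly 23 GB, so 2 GB frames are appropriate):
    -ALTER BUFFERPOOL(BP10) VPSIZE(6000000) PGFIX(YES) FRAMESIZE(2G)

  Verify that the pool's buffers are in fact allocated in frames of the preferred size:
    -DISPLAY BUFFERPOOL(ACTIVE) DETAIL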
And there you have it. For a Db2 for z/OS buffer pool configuration, really big is really great, but take the time to go the extra mile by optimally configuring the pools in the configuration.

Monday, April 29, 2024

Db2 13 for z/OS: Utility Execution History

A few months ago, I got this question from a Db2 for z/OS DBA: "Is there a way we can check to see if the UNLOAD utility has been executed for certain of our tables?" The environment in question was a Db2 12 for z/OS system, and because of that I had to provide an answer that was less-than-great from an ease-of-use perspective: "Yes, you can do that in a Db2 12 environment, but the process is going to be a bit cumbersome. You could set up a Db2 audit policy that would record utility execution. What that will do is cause Db2 to write certain trace records when utilities are executed. You'd need to use your Db2 monitor to format that trace information into a human-readable report (using IBM's OMEGAMON monitor for Db2 for z/OS, that would be by way of a Record Trace report); and, you'd have trace records written for EVERY utility execution during the reporting time interval - potentially, a lot of data to sift through; and, there would be trace records generated for every utility executed for ANY database object - not just the tables in which you're interested (more data to sift through)."

"Alternatively, you could set up a different audit policy that would limit trace output to the tables of interest, but that policy would cause Db2 to generate trace records for ALL operations performed involving that set of tables, whether associated with a utility or with an application process - again, potentially a lot of trace data to sift through."

Not the ideal situation, but along comes Db2 13 for z/OS to make things better - in two steps.

Step 1: Once you've activated function level V13R1M501, you can take advantage of a new ZPARM parameter, UTILITY_HISTORY. The default value for that parameter is NONE - not because we (IBM) want to discourage you from using a new Db2 capability; rather, this is in keeping with the traditional approach for a new ZPARM: the initial default value preserves existing behavior. "Existing behavior" in this case is "no utility history," because utility history functionality was not available prior to Db2 13 function level 501. If you set UTILITY_HISTORY to UTILITY, something nice happens: every time a utility executes, Db2 will insert a row into the SYSIBM.SYSUTILITIES table (newly added when CATMAINT is executed to take the catalog to the V13R1M501 level). That row for a utility in SYSUTILITIES provides a lot of useful information, such as:

  • The name of the utility (e.g., UNLOAD, or COPY, or REORG, or ...)
  • The name of the job through which the utility was executed
  • The user ID that invoked the utility
  • The starting point of the utility, both timestamp-wise and logpoint-wise
  • The timestamp at which the utility completed processing
  • The elapsed time for the utility job
  • The general-purpose CPU time and zIIP CPU time for the utility job
  • The portion of CPU time (both general-purpose and zIIP) consumed with sort activity related to the utility job
  • The utility job's return code
  • Whether the utility job was terminated (and, if so, whether it was restarted)
Step 2: After you've activated function level V13R1M504, you can change the value of UTILITY_HISTORY in ZPARM to OBJECT. When you do that, execution of a utility will cause Db2 to insert into SYSUTILITIES a row with the useful informational items noted above, and in addition to that Db2 will insert into SYSIBM.SYSOBJEVENTS a row for each object (non-partitioned table space or index, or a partition of a table space or index) processed by the utility (SYSOBJEVENTS makes its debut when the catalog is taken to the V13R1M504 level).

[Note: prior to activation of function level V13R1M504, there is some capability to obtain information about the object(s) processed by a utility by matching SYSUTILITIES rows with SYSCOPY rows based on values in the EVENTID column which is found in both tables. I say, "some capability," because not every utility drives SYSCOPY insert activity - that's what makes SYSOBJEVENTS important.]

Back to the question, referenced at the start of this blog entry, about tracking UNLOAD activity for certain tables. In a Db2 13 environment with function level V13R1M504 (or higher) activated, and with UTILITY_HISTORY set to OBJECT in ZPARM, this question can be answered simply by querying SYSUTILITIES with a WHERE NAME = 'UNLOAD' predicate, and joining with SYSOBJEVENTS on the EVENTID column to get the related table space names (and of course if you want to track UNLOAD activity for particular tables you can have a WHERE clause for the SYSOBJEVENTS table that references the name of the table space or table spaces of interest).

[I recognize that UNLOAD can be executed at a table level, and that a table space could have multiple tables, but I'd hope that your sensitive-data tables are in single-table table spaces because 1) it's best to have tables in universal table spaces, and a universal table space is related to a single table; and 2) I think that data protection can be maximized when a sensitive-data table is isolated in its own table space.]
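For illustration, a query along these lines could answer the "who's been running UNLOAD against these tables?" question. The join on EVENTID and the NAME = 'UNLOAD' predicate come straight from the discussion above; the other column names used here (JOBNAME, STARTTS, DBNAME, SPACENAME) are just shorthand for this sketch, so check the SYSUTILITIES and SYSOBJEVENTS column descriptions in the Db2 13 documentation for the actual names:

  -- UNLOAD executions recorded in the utility history, with the objects they processed
  SELECT U.NAME, U.JOBNAME, U.STARTTS, O.DBNAME, O.SPACENAME
    FROM SYSIBM.SYSUTILITIES U
    INNER JOIN SYSIBM.SYSOBJEVENTS O
      ON U.EVENTID = O.EVENTID
   WHERE U.NAME = 'UNLOAD'
     AND O.DBNAME = 'MYDB'      -- hypothetical database holding the table space(s) of interest
   ORDER BY U.STARTTS DESC;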

Needless to say, tracking UNLOAD activity is just one of many ways in which Db2 13 utility history data can be beneficially utilized. In section 8.1.7 of the IBM "redbook" titled, IBM Db2 13 for z/OS and More, you'll find examples of queries of SYSUTILITIES that can be used to gather all kinds of helpful information, such as:
  • All the utilities that executed between midnight and 8:00 AM this morning
  • All the utilities that are currently active or stopped
  • All the utilities that finished with a return code of 8 or higher
  • The top general-purpose CPU-consuming LOAD jobs over the past 7 days
  • And more...
Those sample queries can help you get started, and soon you and your teammates will come up with your own queries that yield valuable info on utility execution at your site (and keep in mind that the queries in the above-referenced redbook don't include SYSOBJEVENTS because that catalog table was not yet present when the redbook was written - you can of course extend the queries with joins to SYSOBJEVENTS).

This is a Db2 13 enhancement that I like a lot. I think you'll like it, too. Check it out.

Friday, March 29, 2024

Db2 for z/OS Data Sets: If You're Worrying About Extents, You Can Probably Stop Worrying

Not long ago, a Db2 for z/OS DBA sent to me, via a colleague of mine, a question. He described in an email the procedure that his team regularly used to consolidate extents, when the number of these got into double digits for a Db2 table space-related data set, back down to one. He noted that this extent-consolidation procedure was more time-consuming and CPU-intensive than desired, and he wanted to know if I had any suggestions for making the procedure more efficient. In fact, I did have a suggestion for improving the efficiency of the Db2 data set extent consolidation procedure used at this person's site. My suggestion: STOP DOING THAT.

It might have been the comedian Henny Youngman who'd get laughs with this joke: "I told my doctor, 'It hurts when I do this.' He told me, 'Stop doing that.'" In all seriousness, Henny (or whoever it was) had an important point there. When some procedure causes pain in the form of CPU consumption and/or labor intensity, the best way to take that pain down to zero is to dispense with said procedure. In the context of Db2 data set extent-reduction efforts, my "Stop doing that" suggestion might engender this response: "But, if we dispense with our Db2 data set extent consolidation procedure, we'll end up with Db2 data sets that have a lot of extents!" My response to that response: "So?"

Here's the deal, folks: extents matter WAY less for Db2 for z/OS data sets than they did a long time ago (like, back in the 1990s). Way back when, a real concern about a Db2 table space data set going into a lot of extents was the impact this could have on prefetch read performance. Such a negative extent effect could in fact occur because - again, way back when - Db2 prefetch read operations were satisfied from spinning disk. Disk controller cache memory sizes were so small back in the day that Db2 would bypass cache for prefetch reads, and those multi-page reads from spinning disk could suffer, performance-wise, if the data set holding the page set or partition being accessed (table space or index) had gone into a good number of extents.

Things are different now, in a number of ways:

  • Db2 subsystem buffer pool configurations are MUCH larger than they were some years ago, owing largely to 1) mainframe memory getting less expensive all the time (on a per-gigabyte basis), leading organizations to load up on z/OS real storage (often to the tune of several hundred GB for a production z/OS LPAR); and 2) people realizing that if you give Db2 a lot of memory (e.g., for larger buffer pools), it generally performs really well. Much larger buffer pool configurations mean that a much higher percentage of Db2 page requests (synchronous and asynchronous) are satisfied from pages in memory, as opposed to requiring disk subsystem read I/O operations. Obviously, when page requests are satisfied from pages in memory, data set extents on disk are irrelevant.
  • Disk controller cache memory sizes have been really big for a long time; and, that large disk controller cache memory resource is managed in a high-performing way by powerful microprocessors that are an integral part of modern enterprise disk subsystems. What these large and intelligently managed disk controller cache resources mean is that a read request (synchronous or asynchronous) that cannot be satisfied from the buffer pool configuration will often result in a read from disk controller cache, as opposed to requiring a read from spinning disk. As is true for a read request that is satisfied from data in memory (in a buffer pool), data set extents are not relevant for a read of data from disk controller cache.
  • Even when a Db2 read request leads to accessing data all the way back on spinning disk, the architecture of modern enterprise disk subsystems - primarily RAID in nature - reduces the performance impact of data set extents from what it once was.
So, that's point #1: Db2 for z/OS data set extents just don't matter, from a performance perspective, as they once did. This point is underscored by the way in which Db2-managed secondary space allocation (the use of which I recommend) works. How do you get Db2 to manage secondary space allocation for data sets? You can do that by NOT including a SECQTY specification in a CREATE TABLESPACE or CREATE INDEX statement. For an existing table space or index, you can alter the object with a specification of SECQTY -1 to tell Db2 that you want it to manage secondary disk space allocation for the object. When Db2 manages secondary space allocation for a table space or index data set, it does so using what's called a "sliding scale" algorithm, which causes subsequent secondary space allocation quantities to be larger than those previously requested for the data set. If you check out the description of the sliding scale algorithm in the Db2 for z/OS documentation, you'll see the following (underlining added by me for emphasis): "The first 127 extents are allocated in increasing size, and the remaining extents..." Question for you: if extents were problematic from a performance perspective, would Db2's own secondary space allocation algorithm take you to 127 extents and beyond, as needed? Answer: NO. If Db2 doesn't care about this, should you? Again, NO.
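If you want to switch an existing object to Db2-managed secondary space allocation, the change is a one-liner per object (the database, table space and index names below are made up; the new setting applies to subsequent secondary allocations for the associated data sets):

  ALTER TABLESPACE MYDB.MYTS SECQTY -1;
  ALTER INDEX MYSCHEMA.MYIX SECQTY -1;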

"But wait," you might say, "Even if data set extents aren't a performance concern in a Db2 environment, there's a z/OS data set extent limit, right? If we hit that and Db2 can't extend a data set, application processes inserting rows into a table could fail, right?" True, but the data set extent limit is a lot bigger than it used to be. Back in the day, it was 251, and indeed that number might have me glancing in the rearview mirror with some of that "objects are closer than they may appear" anxiety. But quite some time ago - with z/OS 1.7 - the extent limit for a data set went to 7257 (when the Extent Constraint Removal option is set to YES in the SMS data class to which the data set belongs). When you let Db2 manage secondary space allocation for a table space or index data set, you are virtually assured that the data set will be able to reach its maximum size before it hits the extent limit.

Oh, and here's a fun fact: there is an EXTENTS column in the SYSIBM.SYSTABLESPACESTATS and SYSIBM.SYSINDEXSPACESTATS real-time statistics tables in the Db2 catalog. That column long had the SMALLINT data type, which can accommodate values of up to 32,767. When the Db2 catalog goes to the V13R1M501 level, the data type of the EXTENTS column changes to INTEGER - a type that can accommodate values of up to about 2.1 billion. I'd say this reflects an expectation that the z/OS data set extent limit is not going to stay at 7257 for the long haul.

So, would I ever be concerned with the number of extents to which a Db2 for z/OS table space or index data set has gone? I'd say that an extent value that's below 200 for a data set would not concern me. Above 200? Maybe, though not in a "front-burner" kind of way. If I saw that a Db2 data set had reached a number of extents greater than 200, I might be inclined to reduce that number at least somewhat, probably by going to a larger PRIQTY value for the object and executing an online REORG to put the change into effect. Again, though, this would not be a "crisis response" action - more like a Db2 housekeeping task.
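If you want a periodic, low-effort look at where things stand, extent counts are right there in the real-time statistics tables. Here's a sketch for table space data sets (a similar query against SYSIBM.SYSINDEXSPACESTATS covers index data sets):

  -- Table space data sets that have gone beyond 200 extents
  SELECT DBNAME, NAME, PARTITION, EXTENTS
    FROM SYSIBM.SYSTABLESPACESTATS
   WHERE EXTENTS > 200
   ORDER BY EXTENTS DESC;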

Bottom line: if you've been spending your time and mainframe CPU time in being aggressive in keeping extent values low for Db2 data sets, my recommendation would be to ease up on that, because you can. Spend your time (and your mainframe's cycles) on more valuable tasks, like helping to get Db2-based applications designed (or enhanced) and deployed. That's where you'll make a bigger and more positive difference for your organization.

Tuesday, February 27, 2024

Db2 13 for z/OS: Now You Can Dynamically Remove, as Well as Add, Active Log Data Sets

Db2 10 for z/OS (which came out back in 2010) provided a new capability related to management of a Db2 subsystem's active log data sets (known, along with the archive log data sets, as the subsystem's "log inventory"). The enhancement: the NEWLOG option of the Db2 command -SET LOG. With this added functionality, a Db2 for z/OS system administrator could add new active log data sets to a Db2 subsystem's log inventory, without having to bring the subsystem down (the system administrator would probably, in fact, add new pairs of active log data sets, as you always want to use dual logging to avoid a single point of failure for system and data recovery operations). Prior to this Db2 10 enhancement, adding active log data sets to a subsystem's log inventory could only be accomplished through execution of the DSNJU003 utility (also referred to as the "change log inventory" utility), and DSNJU003 can only be executed when the target Db2 subsystem is down.

The ability to dynamically add pairs of active log data sets to a Db2 subsystem's log inventory was welcomed by many Db2 people, and you can probably imagine why. A Db2 subsystem's active log data sets can be thought of, logically, as a ring of data sets around the Db2 subsystem. Suppose there are 20 pairs of active log data sets in this logical ring. Db2 writes information to pair #1, and when that pair of data sets is filled up then information is written to pair #2, and when that pair is filled up then information is written to pair #3, and so on around the ring. Meanwhile, not long after the filling up of active log data set pair #1, the information written to that pair of data sets will be copied to a pair of archive log data sets, and that action will make active log data set pair #1 reusable, so that new information can be written to that pair of data sets when Db2 comes back around the ring to them. The same archive operation is performed for other active log data set pairs after they have been filled up, making them reusable when their turn in the rotation comes up again to be the current active log data set pair.

All well and good - unless something goes wrong with the archive log write process. If filled-up active log data set pairs can't be archived, they can't be made reusable, and when Db2 has gone around the ring and comes back to the not-reusable active log data set pairs, logging will stop, and when logging stops just about everything stops. In a pre-Db2 10 environment, you could add active log data set pairs to a subsystem's log inventory to buy more time (by providing more space for logging) as you worked to fix whatever was impeding the log archiving process, but at the cost of stopping the Db2 subsystem in order to execute the DSNJU003 utility. Not good. Being able to buy extra fix-the-archiving-problem time by dynamically adding new pairs of active log data sets to a Db2 subsystem's log inventory, while the subsystem was still up and running, made for a much better situation.

Fast-forward to Db2 13 for z/OS, and now we get (once function level V13R1M500 has been activated) the ability to dynamically remove active log data set pairs, thanks to the new REMOVELOG option of the -SET LOG command. The value of dynamic (i.e., while the Db2 subsystem is up and running) removal of active log data set pairs is as a complement to the dynamic-add functionality we've had since Db2 10. Together, the NEWLOG and REMOVELOG options of the -SET LOG command provide a capability that can be very useful - namely, online replacement of a Db2 subsystem's active log data set pairs with better data set pairs.

"Better?" How so? Well, usually this will mean bigger and/or encrypted. Let's take the data set size case. Suppose you have a production Db2 subsystem that has 20 pairs of active log data sets, each data set being 2 GB in size. You're going through those active log data sets faster than you'd like - maybe filling up three or four (or more) pairs in an hour when the system is busy. You'd rather have active log data sets that are 8 GB apiece, versus 2 GB (Db2 12 for z/OS took the maximum size of an active log data set from 4 GB to 768 GB). Can you go from 2 GB active log data sets to 8 GB active log data sets without stopping the Db2 subsystem? With Db2 13, you can. Here's how that would work:

  1. You dynamically add 20 pairs of active log data sets that are sized at 8 GB apiece, using the NEWLOG option of the -SET LOG command (a Db2 subsystem can have up to 93 pairs of active log data sets).
  2. After the older and smaller active log data sets have been archived, dynamically remove them from the Db2 subsystem's log inventory via the new (with Db2 13) REMOVELOG option of the -SET LOG command.
Now you have 20 pairs of active log data sets, each sized at 8 GB, when before you had 20 pairs of active log data sets sized at 2 GB apiece, and in getting from A to B you never had to stop the Db2 subsystem.
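Expressed as commands, the swap might look like this for one pair of data sets (the data set names are placeholders; the new data sets would be allocated - and ideally preformatted with the DSNJLOGF utility - before being added, and you'd repeat the steps for each of the 20 pairs):

  Add a new, larger pair of active log data sets (copy 1 and copy 2):
    -SET LOG NEWLOG(DB2P.LOGCOPY1.DS21) COPY(1)
    -SET LOG NEWLOG(DB2P.LOGCOPY2.DS21) COPY(2)

  Check data set status - REUSABLE, REMOVAL PENDING, number of readers, and so on:
    -DISPLAY LOG DETAIL

  Once an old pair has been archived (i.e., is in REUSABLE status), remove it from the log inventory:
    -SET LOG REMOVELOG(DB2P.LOGCOPY1.DS01) COPY(1)
    -SET LOG REMOVELOG(DB2P.LOGCOPY2.DS01) COPY(2)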

The same approach could be used to go from 20 pairs (for example) of unencrypted active log data sets to 20 pairs of encrypted active log data sets in an online way (referring here to exploitation of the data set encryption feature of z/OS):

  1. Dynamically add 20 pairs of active log data sets with which an encryption key label was associated at data set creation time.
  2. When the older unencrypted data sets have been archived, dynamically remove them from the Db2 subsystem's log inventory.
In these example use cases, I've utilized the phrase, "when the older (smaller and/or unencrypted) active log data sets have been archived, dynamically remove them." That suggests that trying to dynamically remove a not-yet-archived active log data set could be problematic. Do you need to worry about this? No. Why not? Because Db2 won't let you accidentally shoot yourself in the foot when using the REMOVELOG option of -SET LOG. Specifically:

  • Db2 won't let you remove an active log data set to which it is currently writing information.
  • Db2 won't let you remove a log data set in the pair that is next in line for the writing of log information.
  • Db2 won't let you remove an active log data set that has not been archived (i.e., an active log data set that is not in REUSABLE status).
  • Db2 won't let you remove an active log data set that is currently in use (for example, an active log data set that is being read by a RECOVER utility job).
If you try to dynamically remove an active log data set to which Db2 is currently writing, or one that is next in line for writing, or one that has not been archived (i.e., is not in the REUSABLE state), the -SET LOG command will fail with the message DSNJ391I. If the active log data set you're trying to dynamically remove does not have one of these characteristics but is currently in use by some process, that data set will be marked as REMOVAL PENDING, and message DSNJ393I will be issued. In that case, you can remove the data set from the log inventory by issuing -SET LOG with REMOVELOG again when the data set is no longer in use. Alternatively, if the Db2 subsystem is standalone in nature (as opposed to being a member of a Db2 data sharing group), the data set will be removed from the log inventory automatically when the subsystem is next recycled (in a data sharing environment, subsequent re-issuance of -SET LOG with the REMOVELOG option is required to remove a REMOVAL PENDING data set from the log inventory). Note that if an active log data set has been marked as REMOVAL PENDING, it will not be used again by Db2 for read or write purposes. Note also that information about an active log data set that is in REMOVAL PENDING status can be checked via the output of the Db2 command -DISPLAY LOG DETAIL (the DETAIL option was added with function level 500 of Db2 13). When you see, in the output of -DISPLAY LOG DETAIL, that an active log data set in REMOVAL PENDING status has 0 readers, you know that it is no longer in use and can be physically removed from the log inventory with another issuance of -SET LOG with REMOVELOG.

One more thing: I have been referring to removal of an active log data set "from the log inventory" of a Db2 subsystem. In the Db2 documentation, you'll see references to removal of an active log data set "from the BSDS" of a Db2 subsystem. The documentation is saying the same thing I'm saying. The BSDS - short for bootstrap data set - contains information about a Db2 subsystem's active and archive log data sets.

OK, there you have it. If you want to upgrade your active log data sets in one or more ways - maybe bigger than they are now, maybe encrypted versus unencrypted - then the REMOVELOG option of -SET LOG (thanks, Db2 13), together with the NEWLOG option (thanks, Db2 10) is your ticket for getting that done without having to stop the Db2 subsystem in question. Just another way that Db2 for z/OS enables you to take high availability higher than ever before.

Wednesday, January 31, 2024

Db2 for z/OS: Stop Making APPLCOMPAT in ZPARM More Important Than It Is

The APPLCOMPAT option of the Db2 for z/OS BIND and REBIND PACKAGE commands is really important - that's why I posted part 1 and part 2 blog entries on the topic back in 2019. The APPLCOMPAT parameter in ZPARM, on the other hand (referring to DSNZPARM, the load module that contains a Db2 subsystem's configuration parameter settings), is less important. I pointed this out in part 1 of the aforementioned two-part blog entry on APPLCOMPAT, but I still find that plenty of Db2 for z/OS people ascribe significance to the ZPARM parameter APPLCOMPAT that just doesn't jibe with reality. That being the case, I am writing this blog entry in the hope that it will help to drive home the point that the ZPARM parameter called APPLCOMPAT should (generally speaking) not be the main focus of your APPLCOMPAT-related concerns.

To illustrate the point that plenty of people continue to over-inflate the importance of the APPLCOMPAT parameter in ZPARM, I'll share with you a question that a Db2 for z/OS person sent to me by way of one of my colleagues. The question was, basically, "We are getting ready to activate Db2 12 function level V12R1M510 (a prerequisite for migration from Db2 12 to Db2 13 for z/OS). Can we be pretty much assured that doing this will not cause SQL behavioral changes if we leave the value of APPLCOMPAT in ZPARM unchanged (they had this ZPARM parameter set to V10R1)?" In responding to this question, I explained that in advancing a Db2 system's active function level, one can indeed protect application programs from the risk of SQL behavioral changes (I'll explain what that means in a moment), but, I noted, this SQL behavioral change protection is provided by the APPLCOMPAT package bind specification, NOT by the APPLCOMPAT parameter in ZPARM. You can take a Db2 system's active function level as high as you want, and that will not lead to application-affecting SQL behavioral changes as long as you don't change the APPLCOMPAT value of your applications' Db2 packages. The value of the APPLCOMPAT parameter in ZPARM is only somewhat relevant to this discussion.

OK, what's all this about "SQL behavioral changes?" The term refers to this situation: same SQL statement, same data, different result. You might think, "How could that happen?" Well, every now and then, the Db2 for z/OS development team decides that the behavior of a given SQL statement should change, for one reason or another (and it's well-considered - these changes are not effected lightly). That change can be introduced with a new version or a new function level of Db2. My favorite example of a Db2 for z/OS SQL behavioral change is one that happened with Db2 11. In a Db2 10 environment, you could use a SQL statement to cast an eight-byte store clock value (a time value that a program can obtain from the z/OS operating system) as a Db2 timestamp value. In a Db2 11 system, that same SQL statement - cast an eight-byte store clock value as a Db2 timestamp - would fail with a -180 SQL error code. Same SQL statement, same data, different result in a Db2 11 versus a Db2 10 environment.

Here's one reason I really like this example: how many programs do you have that need to cast an eight-byte store clock value as a Db2 timestamp? Probably none - and this is typically the case for Db2 SQL behavioral changes - they usually affect either zero or very few of an organization's Db2-accessing programs. Alright, but what if you did have programs that needed the Db2 10 behavior of the Db2 TIMESTAMP function? Would you have been up a creek when your Db2 system went from Db2 10 to Db2 11? No - you would have been fine in that case, because you could just bind the Db2 packages used by those programs with APPLCOMPAT(V10R1), and that would mean that the programs would execute with Db2 10 SQL behavior, and that would mean that those programs could cast an eight-byte store clock value as a Db2 timestamp. See - it's the APPLCOMPAT package bind specification that provides protection (when needed) from Db2 SQL behavioral changes.
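To make that concrete, picture a table with a CHAR(8) FOR BIT DATA column that holds store clock values (the table and column names below are made up for illustration):

  -- With the package bound with APPLCOMPAT(V10R1), Db2 10 SQL behavior applies, and the
  -- 8-byte store clock value is accepted as a timestamp:
  SELECT TIMESTAMP(STCK_VALUE) FROM MYSCHEMA.EVENT_LOG;

  -- With the package bound with APPLCOMPAT(V11R1) or higher, that same statement, against
  -- the same data, fails with SQLCODE -180.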

[By the way, in the Db2 for z/OS documentation, what I have been calling "SQL behavioral changes" are referred to as "SQL incompatibilities." These are documented for each Db2 application compatibility level, going back to V10R1 (that's as far back as Db2 application compatibility goes).]

So, I said up front that the APPLCOMPAT parameter in ZPARM is not as important as the APPLCOMPAT specification for your Db2 packages. Does that mean that the ZPARM has no significance? No. What is the purpose of the APPLCOMPAT parameter in ZPARM? It's this: the ZPARM parameter provides the default value for a package's APPLCOMPAT setting when the BIND PACKAGE command is issued without an APPLCOMPAT specification. That's it. I tell people to think of APPLCOMPAT in ZPARM as being like a cubbyhole. A BIND PACKAGE command may be issued without an APPLCOMPAT specification. The package in question needs an APPLCOMPAT value. Where is Db2 going to get that value, when the value was not provided via the BIND PACKAGE command? Db2 in that case is going to look in the cubbyhole labeled APPLCOMPAT in ZPARM. In that cubbyhole is a piece of paper (figuratively speaking) on which (for example) V12R1M509 is written. OK, that will be the package's APPLCOMPAT value.
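A quick illustration of that "cubbyhole" behavior, using made-up collection, member and level names:

  No APPLCOMPAT specified, so the package gets the value of the APPLCOMPAT parameter in ZPARM (for example, V12R1M509):
    BIND PACKAGE(MYCOLL) MEMBER(MYPROG) ACTION(REPLACE)

  APPLCOMPAT specified explicitly, so the ZPARM default doesn't come into play:
    BIND PACKAGE(MYCOLL) MEMBER(MYPROG) ACTION(REPLACE) APPLCOMPAT(V12R1M510)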

[You might wonder: what if REBIND PACKAGE is issued without an APPLCOMPAT specification? Will the rebound package in that case get the APPLCOMPAT value to which the ZPARM parameter APPLCOMPAT has been set? Probably not. Why not? Because it is very likely that a package being rebound already has an APPLCOMPAT value, and in that case if the REBIND PACKAGE command is issued without an APPLCOMPAT specification then the package's current APPLCOMPAT value will be retained. For REBIND PACKAGE, then, the APPLCOMPAT parameter in ZPARM is relevant only when the REBIND PACKAGE command is issued without an APPLCOMPAT specification and the package in question does not already have an APPLCOMPAT value (again, unlikely, though not impossible - you can check on this via a query of the SYSIBM.SYSPACKAGE catalog table, which has a column named APPLCOMPAT).]

Given that APPLCOMPAT in ZPARM simply provides the default value for APPLCOMPAT when BIND PACKAGE is issued without an APPLCOMPAT specification, what should the value of this ZPARM parameter be? There isn't a right or wrong answer to this question - it's up to you. Personally, I'd lean towards making the value of APPLCOMPAT in ZPARM as high as it can be, which would be equal to the currently active function level in a Db2 system. Why would I want that? Because APPLCOMPAT, in addition to providing protection (when needed) from Db2 SQL incompatibilities, also enables use of newer SQL syntax and functionality. If I have APPLCOMPAT in ZPARM set to, for example, V10R1, and BIND PACKAGE at my site is typically issued without an APPLCOMPAT specification, I am limiting application programmers to SQL syntax and functionality only up to the Db2 10 level - can't use newer built-in functions such as LISTAGG and PERCENTILE_CONT, can't use Db2 global variables, can't use Db2 arrays, can't use newer special registers such as CURRENT LOCK TIMEOUT, etc. Is that what you want? Sure, if a program using one of those newer SQL capabilities fails at bind time because of the default V10R1 APPLCOMPAT value, you can fix that problem by issuing BIND PACKAGE a second time with an APPLCOMPAT specification that is current enough to support the desired functionality, but again, is that what you want?

At some Db2 for z/OS sites, APPLCOMPAT in ZPARM is indeed set at V10R1. Why so low? One reason may be the misunderstanding (to which I've referred) of the purpose of the ZPARM parameter. Alternatively, maybe APPLCOMPAT in ZPARM is set at V10R1 because of concern about BIND PACKAGE issued for programs that aren't net new but rather have had a bit of an SQL change (which would then require a new precompile and BIND in the case of a static SQL-issuing program, as opposed to a REBIND). A person might think, "What if there's an existing program with 20 static SQL statements, and a programmer changes just one of those statements? When there is a BIND PACKAGE (with ADD or REPLACE, as the case may be) for that program's Db2 package, and the BIND PACKAGE is issued without an APPLCOMPAT specification, I want a from-the-ZPARM-parameter default APPLCOMPAT value that will have the 19 SQL statements that weren't changed behaving as they always have." OK. I get that. Like I said, it's up to you. Just keep in mind that the risk of adverse impact on your programs from Db2 SQL incompatibilities is usually very low - these incompatibilities are relatively few and far between, and they tend to affect few if any of your Db2-accessing programs.

The main point I want to make is this: when you change the value of the APPLCOMPAT parameter in ZPARM, that action is not, in and of itself, going to cause Db2-accessing programs in your environment to suddenly start behaving differently. All you've done with the ZPARM parameter update is change the APPLCOMPAT value that a package will get if a BIND PACKAGE command is issued without an APPLCOMPAT specification. Consider the ZPARM value change in that light, and act accordingly.