Friday, August 29, 2014

DB2 10 for z/OS, 1 MB Page Frames, and the Number 6656

I recently received an interesting message from a mainframe DB2 person I've known for a long time. At his DB2 10 for z/OS site, 1 MB page frames had recently been made available on the System z servers, and my friend moved to leverage this resource via page-fixed DB2 buffer pools. Nothing unusual there. What's interesting is the information he saw in the output of -DISPLAY BUFFERPOOL commands subsequently entered for the page-fixed pools. I'll get to that strange-looking data in a moment, but first I'll provide a bit of background information for level-set purposes.

Fixing the buffers of a buffer pool in real storage so that they can't be paged out to auxiliary storage is an option that was introduced with DB2 V8 for z/OS -- I first blogged on the topic about six years ago, in an entry I posted to the blog that I maintained while working as an independent DB2 consultant (prior to rejoining IBM in 2010). In DB2 V8 and V9 environments, specifying PGFIX(YES) for a high-I/O buffer pool would improve a system's processing efficiency by reducing the CPU cost of I/O operations associated with the pool.

That same benefit -- fewer CPU cycles consumed in reading and writing DB2 table space and index pages from and to disk (and from and to group buffer pools in a data sharing system) -- is realized as well in DB2 10 (and 11) systems when buffer pools are page-fixed, but another opportunity for CPU savings is added: starting with DB2 10, a buffer pool defined with PGFIX(YES) will be backed by 1 MB real storage page frames, versus traditional 4 KB frames, if 1 MB frames are available in the z/OS LPAR. The 1 MB page frames enhance performance by making translation of virtual storage addresses to real storage addresses more efficient (this because the larger frames lead to more "hits" in the translation lookaside buffer).

How are 1 MB page frames made available in a z/OS LPAR? That's done by way of the LFAREA parameter in the IEASYSxx member of SYS1.PARMLIB (the ability to have some portion of a z/OS LPAR's memory resource managed in 1 MB frames was introduced with the IBM z10 mainframes and z/OS 1.10). With this exploitation by DB2 of 1 MB page frames, PGFIX(YES) can boost CPU efficiency even for buffer pools that have little in the way of I/O activity.
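
To make this concrete: page-fixing a pool is done with an -ALTER BUFFERPOOL command (the PGFIX change takes effect when the pool is next allocated), and the 1 MB frame area is specified via LFAREA in IEASYSxx. The LFAREA value below is a made-up example, not a recommendation -- it would set aside 2 GB of the LPAR's real storage to be managed in 1 MB frames:

-ALTER BUFFERPOOL(BP0) PGFIX(YES)

LFAREA=2G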

Back now to my friend the DB2 systems programmer. As I mentioned, he issued -DISPLAY BUFFERPOOL commands for his page-fixed pools to get some information on their allocation. Now, this wasn't the standard-issue -DISPLAY BUFFERPOOL command. What was issued was a particular (and not extensively documented) form of the command, shown below with BP0 used as an example:

-DISPLAY BUFFERPOOL(BP0) SERVICE(4)

When -DISPLAY BUFFERPOOL is used with that SERVICE(4) option in a DB2 10 environment (the story is different for DB2 11 -- I'll get to that later), the output will include DSNB999I messages showing the allocation of buffers in a pool with regard to the type of real storage frame backing the buffers (don't bother looking for DSNB999I in the DB2 10 Messages manual, because it's not there). For a 4K buffer pool with VPSIZE(9000) and PGFIX(YES), this is what my DB2 friend saw in the DSNB999I part of the -DISPLAY BUFFERPOOL output:

DSNB999I  -DBP1 DSNB1DBP  SERVICE( 4 ) OUTPUT
DSNB999I  -DBP1 4K PAGES 2344
DSNB999I  -DBP1 1M PAGES 6656


That 6656 number looked odd to the DB2 systems programmer, and to me, as well. Why would DB2 allocate 6656 of the pool's buffers with 1 MB page frames, and the other 2344 buffers (a little over 9 MB) with 4 KB frames? It wasn't because of a shortage of 1 MB page frames -- there were plenty of those available in the LPAR (if there are not enough 1 MB frames to fully back a PGFIX(YES) buffer pool, DB2 10 or 11 will use what 1 MB frames there are and then use 4 KB frames for the rest of the pool's buffers). My friend looked at -DISPLAY BUFFERPOOL output for his other page-fixed pools, and he saw, in each case, that the number of buffers allocated with 1 MB page frames was a multiple of 6656 (and this was true regardless of the size of a pool's buffers -- he saw the same thing for 4K and 32K pools). The buffers beyond the multiple of 6656 for a page-fixed pool were allocated with 4 KB frames. So, a 4K pool with VPSIZE(30000) and PGFIX(YES) was seen to have 26,624 buffers (4 x 6656) allocated with 1 MB frames and 3376 buffers allocated with 4 KB frames. A 32K pool with VPSIZE(10000) and PGFIX(YES) had 6656 buffers allocated with 1 MB frames and 3344 buffers allocated with 4 KB frames.

In researching this strange-looking situation, the DB2 systems programmer and I came across a relatively recent APAR, PI12512, and that (plus a little more digging on my part) gave us our answer. DB2 uses control blocks to track the buffers in a pool, and DB2 10 initially allocated those control blocks for a PGFIX(YES) pool in 1 MB chunks, backing with 1 MB page frames only the buffers covered by those 1 MB chunks of control blocks. It so happens that 1 MB of buffer pool control blocks covers 6656 buffers, and that's why buffers in a PGFIX(YES) pool were allocated with 1 MB page frames in multiples of 6656, with any buffers left over (i.e., any beyond the largest multiple of 6656 less than the pool's VPSIZE) backed by 4 KB frames.

When the fix for APAR PI12512 is applied to a DB2 10 subsystem, things work differently: buffers in a PGFIX(YES) pool are allocated in 1 MB chunks with 1 MB page frames, and any buffers left over, after as many 1 MB frames as possible have been filled, are allocated to 4 KB frames. The text of APAR PI12512 points up one advantage of this new arrangement: page-fixed buffer pools with a VPSIZE value of less than 6656 can be backed by 1 MB page frames. That is indeed a good thing, but there is a benefit for larger pools, as well. That benefit is best illustrated by example. Recall that my DB2 systems programmer friend saw, for a 4K pool with VPSIZE(9000) and PGFIX(YES), that 6656 buffers (about 74% of the total) were allocated with 1 MB page frames and 2344 buffers were allocated with 4 KB frames. If the fix for APAR PI12512 were applied to the DB2 system (and if the pool were deallocated and reallocated, which of course would be a consequence of the DB2 stop and restart required to activate the maintenance), for that same buffer pool one would expect to see 8960 buffers (99.5% of the total -- 35 chunks of 256 four-kilobyte buffers each) allocated with 1 MB frames, leaving only 40 buffers to be allocated with 4 KB frames. The greater the percentage of a page-fixed pool's buffers that can be allocated with 1 MB page frames, the better for performance, because that maximizes the efficiency gained through decreased translation lookaside buffer misses.
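
If you want to play with the arithmetic yourself, here is a minimal sketch, in Python, of the two allocation rules as I've described them above -- this models the behavior, not DB2's actual internals, and the function names are mine:

MB = 1024 * 1024

# Pre-PI12512 rule: 1 MB page frames back buffers in multiples of 6656,
# because 1 MB of control blocks covers 6656 buffers
def old_split(vpsize):
    in_1m = (vpsize // 6656) * 6656
    return in_1m, vpsize - in_1m   # (buffers in 1 MB frames, buffers in 4 KB frames)

# Post-PI12512 rule: fill as many whole 1 MB frames with buffers as possible
def new_split(vpsize, page_kb=4):
    per_frame = MB // (page_kb * 1024)   # e.g., 256 four-KB buffers per 1 MB frame
    in_1m = (vpsize // per_frame) * per_frame
    return in_1m, vpsize - in_1m

print(old_split(9000))    # (6656, 2344) -- matches the DSNB999I output above
print(old_split(30000))   # (26624, 3376)
print(new_split(9000))    # (8960, 40) -- 35 frames of 256 buffers each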

I should point out here that the change in the allocation of control blocks for page-fixed buffer pools -- the change that made possible the 1 MB page frame benefit maximization delivered by APAR PI12512 -- was itself accomplished through the fix for another APAR, PM85944 (and that fix was ultimately provided via APAR PM90486).

OK, now for the DB2 11 perspective. APAR PI12512 is a DB2 10-only APAR (and that's also the case for APARs PM85944 and PM90486). DB2 11 already has the functionality that these APARs provided for DB2 10; furthermore, with DB2 11 you don't have to issue -DISPLAY BUFFERPOOL with that secret-code-looking SERVICE(4) option to see the allocation of a pool's buffers with regard to different page frame sizes. And, with DB2 11 you won't see DSNB999I (the message you can't find in the DB2 for z/OS Messages manual) in -DISPLAY BUFFERPOOL output. Issue the simple and familiar command shown below in a DB2 11 environment (and here I'm using BP1 as an example):

-DISPLAY BUFFERPOOL(BP1)

and you'll see that the output includes something that looks like this:

DSNB546I  - PREFERRED FRAME SIZE 4K
        4000 BUFFERS USING 4K FRAME SIZE ALLOCATED


The "preferred frame size" indicates what was specified for the FRAMESIZE option of -ALTER BUFFERPOOL for the pool. That's a new option introduced with DB2 11 for z/OS. For a PGFIX(NO) pool, DB2 will automatically allocate the pool's buffers with 4 KB page frames. If there are 1 MB page frames available in a z/OS LPAR, and PGFIX(YES) is specified for a buffer pool, DB2 will automatically prefer 1 MB page frames for the pool (even so, in that case you might want to consider an explicit specification of FRAMESIZE(1M), just to be intentional about it). You can't use 2 GB frames for a buffer pool (new with DB2 11) unless the LFAREA specification provides some 2 GB frames in the z/OS LPAR, and the pool is defined with PGFIX(YES), and DB2 is running in a zEC12 server (you probably wouldn't see much, if any, performance benefit with a FRAMESIZE(2G) specification for a pool -- that configuration option is likely to be more advantageous when z/OS LPAR memory sizes are considerably larger than what is typically seen these days).

Below the "preferred frame size" line of the DSNB546I message (and that message can be found in the DB2 11 for z/OS Messages manual), you see information on the actual allocation of a pool's buffers with respect to different page frame sizes. There is one DSNB546I message for each different page frame size used in the allocation of a pool's buffers (in my example above, the buffer pool had a VPSIZE of 4000, and all of the pool's buffers were allocated with 4 KB page frames).

There you have it. That 6656 number (or a multiple thereof) will continue to show up in -DISPLAY BUFFERPOOL(BPn) SERVICE(4) output in DB2 10 for z/OS systems (for page-fixed pools in z/OS LPARs that have 1 MB page frames) until the fix for APAR PI12512 is applied. It won't show up in DB2 11 environments. Someday, we DB2 old-timers will reminisce about the strange days of buffers being allocated with 1 MB page frames in 6656-buffer chunks, just as we look back now on old and odd-seeming things like index subpages and hiperpools in expanded storage. And young whippersnappers will roll their eyes.

Monday, August 25, 2014

DB2 for z/OS: Can a Pooled DBAT Hold Locks?

The answer to the question in the title of this post is, "No, it can't -- unless it can." How's that for a hook?

OK, a little background. A DBAT is a database access thread. It's the kind of thread that a network-attached application process uses to interact with a DB2 for z/OS system (such a process, also referred to as a DRDA requester, issues SQL statements that get to DB2 by way of the distributed data facility, aka DDF). When a transaction using a "regular" DBAT completes, the DBAT goes into a pool so that it can be used in the execution of another transaction (a high-performance DBAT does not go into the DBAT pool upon transaction completion; instead, it remains dedicated to the connection through which it was instantiated until it has been reused 200 times). 
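
(A refresher on that last point: a high-performance DBAT comes into existence when a DBAT is used to execute a package bound with RELEASE(DEALLOCATE) and DDF is honoring that bind option. Whether DDF honors it is controlled with the command below: BNDOPT tells DDF to respect a package's RELEASE specification, while PKGREL(COMMIT) would have DDF treat all packages as though bound with RELEASE(COMMIT), effectively switching high-performance DBATs off.)

-MODIFY DDF PKGREL(BNDOPT)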

Last month, a DB2 for z/OS DBA asked me if a pooled DBAT can hold locks. Why did he ask that question? Because of some wording he saw in the DB2 for z/OS Installation and Migration Guide. The text to which he referred, in the description of the ZPARM parameter POOLINAC, is as follows (emphasis added by me):

"A database access thread in the pool counts as an active thread against MAX
REMOTE ACTIVE and can hold locks."


That "and can hold locks" bit looked funny to me. How could a pooled DBAT hold locks when a prerequisite for going into the pool is a "clean commit" of the transaction using the DBAT -- something that implies, among other things, a release of all locks owned by the thread? [Again, we're talking about "regular" -- not high-performance -- DBATs here, and that means RELEASE(COMMIT) packages, and that means all locks are released by a clean commit.]

I ran this by a colleague in the DB2 for z/OS development organization, and he said that the "and can hold locks" wording in the Installation and Migration Guide is probably outdated -- a leftover from the days, relatively early in the life of DB2 for z/OS V6, when RELEASE(DEALLOCATE) was honored for packages executed via DBATs. Of course, with that behavior in effect, a transaction using a DBAT could end with a "clean commit" (i.e., with no "hanging resources" such as WITH HOLD cursors, not-dropped declared global temporary tables, and held LOB locators) -- thereby allowing the DBAT to be pooled -- and still the thread could hold table space locks because those are retained until thread deallocation when RELEASE(DEALLOCATE) is in effect for a package. In fact, this aspect of RELEASE(DEALLOCATE) is a key reason why a DB2 V6 APAR was created to make DB2 stop honoring that bind option for packages executed via DBATs: DBATs can last a long time, and having table space locks retained for such long periods of time could lead to contention problems in the system. With the fix to this DB2 V6 APAR applied, DB2 treated a RELEASE(DEALLOCATE) package executed via a DBAT as though it had been bound with RELEASE(COMMIT).

So, I was right: a pooled DBAT can't hold locks, because only non-high-performance DBATs can be pooled, and non-high-performance DBATs are associated only with RELEASE(COMMIT) packages, and with a RELEASE(COMMIT) package you get a release of all locks when a transaction completes with a clean commit (something that has to happen before a DBAT can be pooled).

Here's the deal, though: I was right only briefly. Just a short time ago, a fix for DB2 APAR PI20352 came out, and with that fix applied in a DB2 10 or 11 for z/OS system, a pooled DBAT can hold locks. How so? Pretty simple: PI20352 allows a high-performance DBAT to be pooled, and a high-performance DBAT can hold locks (specifically, table space locks) after a transaction using the thread completes with a clean commit, because that's what happens for a RELEASE(DEALLOCATE) package, and a high-performance DBAT will be associated with at least one RELEASE(DEALLOCATE) package -- instantiation of a high-performance DBAT depends on a DBAT being used in the execution of such a package.

But why would you want a high-performance DBAT to be pooled? What's wrong with the original high-performance DBAT behavior, which eschews pooling in favor of staying dedicated to the instantiating connection for 200 units of work? Well, that behavior is fine when DDF-using applications stay connected to a DB2 for z/OS system for long periods of time, but it can be sub-optimal for environments in which applications stay connected to DB2 for shorter periods of time, because shorter connection durations mean more disconnect and reconnect activity. Prior to APAR PI20352, if an application connected to DB2 and executed a RELEASE(DEALLOCATE) package -- thereby instantiating a high-performance DBAT -- and then disconnected before the high-performance DBAT had been reused 200 times, the high-performance DBAT would have to be terminated because it couldn't be pooled. That would end up causing a good bit of DBAT create and terminate activity, and that is CPU overhead you'd rather avoid.

With the fix for APAR PI20352 applied, when an application using a high-performance DBAT disconnects from DB2 before the thread has been used for 200 units of work, the high-performance DBAT will be pooled rather than terminated. The pooled high-performance DBAT can then be used by any DDF connection needing a thread to process a request. Once it's used to service a request from a connection, the high-performance DBAT that had been pooled will do as high-performance DBATs do: it will stay dedicated to that connection so that it can be reused (a CPU efficiency booster when paired with RELEASE(DEALLOCATE) packages). Note that a particular high-performance DBAT can be used by a maximum of 200 units of work. That re-use limit is not reset when a thread is pooled, so if a high-performance DBAT has been used 50 times and the associated application disconnects from DB2, resulting in the pooling of the DBAT, it can be reused 150 more times before being terminated (that termination, which is normal for high-performance DBATs, is done to free up resources allocated to the DBAT).

This is good stuff, but you might be thinking of a potential fly in the ointment: what if POOLINAC (the ZPARM that specifies the amount of time a pooled DBAT can sit around waiting to be used -- exceeding that time results in DBAT termination) has been set to zero at a DB2 site, thereby disabling termination of pooled DBATs due to inactivity? Couldn't that lead to pooled high-performance DBATs sitting around indefinitely, holding table space locks (and holding RELEASE(DEALLOCATE) packages)? As it turns out, this is not something about which you need to be concerned. The DB2 developers anticipated that situation, and when the fix for APAR PI20352 is applied, a POOLINAC setting of 0 will be changed to the default POOLINAC value of 120 seconds.
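
(If you want to check your own subsystem's setting: POOLINAC is one of the DDF-related parameters set via the DSN6FAC macro in the ZPARM assembly job, DSNTIJUZ. A specification using the 120-second default would look like the illustrative line below -- check your own DSNTIJUZ member for the actual value:)

POOLINAC=120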

So, to return to the question in this post's title: can a pooled DBAT hold locks? The answer depends on whether or not you've applied the fix for APAR PI20352 to your system. If you haven't applied that fix, the answer to the question is, "no." If you have applied that fix, the answer is, "yes," but it's a qualified yes. Only a pooled high-performance DBAT can hold locks, and those will be only table space-level locks (which are almost always of the non-exclusive "intent" variety), versus page or row locks. Keep in mind that it's good that a high-performance DBAT can be pooled if the application using it disconnects from DB2 before the thread has been used by 200 transactions (this cuts down on DBAT termination and creation activity that would otherwise occur). And remember the POOLINAC reset that will change a specification of 0 to the default of 120 seconds -- a nice enhancement that will keep people from shooting themselves in the foot.

Pooled DBATs have long been a mainstay of DB2 client-server efficiency and scalability. With APAR PI20352, high-performance DBATs get their invitation to the DBAT pool party. Seems appropriate that this was a summertime APAR (speaking from a northern hemisphere perspective).