Not long ago, I was reviewing an organization's production DB2 for z/OS environment, and I saw something I very much like to see: a REALLY BIG buffer pool configuration. In fact, it was the biggest buffer pool configuration I'd ever seen for a single DB2 subsystem: 162 GB (that's the combined size of all the buffer pools allocated for the subsystem). Is that irresponsibly large -- so large as to negatively impact other work in the system by putting undue pressure on the z/OS LPAR's central storage resource? No. A great big buffer pool configuration is fine if the associated z/OS LPAR has a lot of memory, and the LPAR in question here was plenty big in that regard, having 290 GB of memory. The 128 GB of memory beyond the DB2 buffer pool configuration size easily accommodated other application and subsystem memory needs within the LPAR, as evidenced by the fact that the LPAR's demand paging rate was seen, in a z/OS monitor report, to be zero throughout the day and night (I'll point out that the DB2 subsystem with the great big buffer pool configuration is the only one of any size running in its LPAR -- if multiple DB2 subsystems in the LPAR had very large buffer pool configurations, real storage could be considerably stressed).
A couple of details pertaining to this very large buffer pool configuration were particularly interesting to me: 1) the total read I/O rate for each individual buffer pool (total synchronous reads plus total asynchronous reads, per second) was really low (below 100 per second for all pools, and below 10 per second for all but one of the pools), and 2) every one of the buffer pools was defined with PGFIX(YES), indicating that the buffers were fixed in real storage (i.e., not subject to being paged out by z/OS). And here's the deal: BECAUSE the buffer pools all had very low total read I/O rates, page-fixing the buffers in memory was doing little to improve the CPU efficiency of the DB2 subsystem's application workload. Why? Because all of the pools were exclusively using 4K page frames.
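To make the "low total read I/O rate" figure concrete, here is a minimal sketch of how that rate is derived from monitor counts over a statistics interval. The counts below are made-up illustrative numbers, not measurements from the system described above; in practice they would come from a DB2 monitor's statistics report.

```python
# Hypothetical buffer pool read counts for a one-hour statistics interval.
sync_reads = 90_000       # synchronous read I/Os in the interval
async_reads = 150_000     # prefetch (asynchronous) read I/Os in the interval
interval_seconds = 3600   # one-hour interval

# Total read I/O rate = (sync + async reads) per second.
read_io_rate = (sync_reads + async_reads) / interval_seconds
print(f"{read_io_rate:.1f} read I/Os per second")
```

A pool with a rate like this (well under 100 per second) is in the "low read I/O" category discussed above.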
Consider how it is that page-fixing buffer pools reduces the CPU cost of DB2 data access. When the PGFIX(YES) option of -ALTER BUFFERPOOL was introduced with DB2 Version 8 for z/OS, the ONLY CPU efficiency gain it offered was cheaper I/O operations. Reads and writes, whether involving disk volumes or -- in the case of a DB2 data sharing configuration on a Parallel Sysplex -- coupling facilities, previously had to be bracketed by page-fix and page-release actions, performed by z/OS, so that the buffer (or buffers) involved would not be paged out in the midst of the I/O operation. With PGFIX(YES) in effect for a buffer pool, those I/O-bracketing page-fix and page-release requests are not required (because the buffers are already fixed in memory), and that means reduced instruction pathlength for DB2 reads and writes (whether synchronous or asynchronous).
DB2 10 extended the CPU efficiency benefits of page-fixed buffer pools via support for 1 MB page frames. By default, in a DB2 10 (or 11) environment, a PGFIX(YES) buffer pool will be backed by 1 MB page frames if these large frames are available in the LPAR in which the DB2 subsystem runs. How does the use of 1 MB page frames save CPU cycles? By improving the hit ratio in the translation lookaside buffer, leading to more cost-effective translation of virtual storage addresses to corresponding real storage addresses for buffer pool-accessing operations. DB2 11 super-sized this concept by allowing one to request, via the new FRAMESIZE option for the -ALTER BUFFERPOOL command, that a page-fixed pool be backed by 2 GB page frames (note that 2 GB page frames may not save much more CPU than 1 MB frames, unless the size of the buffer pool with which they are used is 20 GB or more).
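For reference, both page-fixing and the frame-size preference are requested through -ALTER BUFFERPOOL. A sketch of the syntax (BP1 is a hypothetical buffer pool name, and note that a PGFIX change takes effect the next time the pool is allocated):

```
-ALTER BUFFERPOOL(BP1) PGFIX(YES) FRAMESIZE(1M)
```

For a very large pool in a DB2 11 environment, FRAMESIZE(2G) could be specified instead to request 2 GB frames.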
Having described the two potential CPU-saving benefits of page-fixed buffer pools, I can make the central point of this blog entry: if you have a PGFIX(YES) buffer pool that has a low total read I/O rate, and that pool is backed by 4 KB page frames, the PGFIX(YES) specification is not doing you much good because the low read I/O rate makes cheaper I/Os less important, and the 4 KB page frames preclude savings from more-efficient virtual-to-real address translation.
This being the case, I hope you'll agree that it's important to know whether a page-fixed buffer pool with a low read I/O rate is backed by large page frames. In a DB2 11 environment, that is very easy to check: just issue the command -DISPLAY BUFFERPOOL, for an individual pool or for all of a subsystem's buffer pools (in the latter case, I generally recommend issuing the command in the form -DISPLAY BUFFERPOOL(ACTIVE)). You'll see in the output for a given pool one or more instances of message DSNB546I. That message information might look like this:
DSNB546I - PREFERRED FRAME SIZE 1M
0 BUFFERS USING 1M FRAME SIZE ALLOCATED
DSNB546I - PREFERRED FRAME SIZE 1M
10000 BUFFERS USING 4K FRAME SIZE ALLOCATED
What would this information tell you? It would tell you that DB2 wanted this pool to be backed with 1 MB page frames (the default preference for a PGFIX(YES) pool), but the pool ended up using only 4 KB frames. Why? Because there weren't 1 MB frames available to back the pool (more on this momentarily). What you'd rather see, for a PGFIX(YES) pool that is smaller than 2 GB (or a pool larger than 2 GB for which 2 GB page frames have not been requested), is something like this:
DSNB546I - PREFERRED FRAME SIZE 1M
43000 BUFFERS USING 1M FRAME SIZE ALLOCATED
(This information is also available in a DB2 10 environment, though in a somewhat convoluted way as described in an entry I posted to this blog a couple of years ago.)
So, what if you saw that a PGFIX(YES) pool is backed only by 4 KB page frames, and not by the preferred larger frames (which, as noted above, are VERY much preferred for a pool that has a low total read I/O rate)? Time then for a chat with your friendly z/OS systems programmer. That person could tell you whether the LPAR has been set up to have some portion of the real storage resource managed in 1 MB (and maybe also 2 GB) page frames. Large frames are made available by way of the LFAREA parameter of the IEASYSxx member of the z/OS data set SYS1.PARMLIB. Ideally, the LFAREA specification for a z/OS LPAR should provide 1 MB page frame-managed space sufficient to allow PGFIX(YES) buffer pools to be backed to the fullest extent possible by 1 MB frames (and/or by 2 GB frames as desired). It may be that DB2 is the one major user of large real storage page frames in a z/OS LPAR, and if that is the case then the amount of 1 MB (and maybe 2 GB) page frame-managed space could reasonably be set at just the amount needed to back page-fixed DB2 buffer pools (in the case of 1 MB frames, I'd determine the amount needed to back PGFIX(YES) buffer pools and increase that by about 5% to cover some smaller-scale uses of these frames in a z/OS environment). If WebSphere Application Server (WAS) is running in the same z/OS LPAR as DB2, keep in mind that WAS can use 1 MB page frames for Java heap memory -- your z/OS systems programmer should take that into account when determining the LFAREA specification for the system.
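The "amount needed plus about 5%" sizing rule can be sketched as a small calculation. The pool names and buffer counts below are hypothetical, and the sketch assumes 4 KB buffer pools (pools with larger page sizes would use their own page size in the same arithmetic):

```python
# Hypothetical PGFIX(YES) pools: pool name -> number of buffers.
pgfix_pools = {"BP1": 262144, "BP2": 524288}

POOL_PAGE_SIZE = 4096  # assuming 4 KB buffer pools
MB = 1024 * 1024

# Space needed to back the page-fixed pools with 1 MB frames.
total_bytes = sum(n * POOL_PAGE_SIZE for n in pgfix_pools.values())

# Add ~5% headroom for other small-scale users of 1 MB frames.
lfarea_bytes = int(total_bytes * 1.05)

# Round up to a whole number of megabytes for the IEASYSxx specification.
lfarea_mb = -(-lfarea_bytes // MB)  # ceiling division
print(f"LFAREA={lfarea_mb}M")
```

This is only a sizing aid; the actual LFAREA value belongs to the z/OS systems programmer, who may need to account for other large-frame users such as WAS.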
There you have it. To maximize the CPU efficiency advantages of page-fixed buffer pools, make sure they are backed by large page frames. This is particularly true for pools with a low total read I/O rate. The more active a buffer pool is (and the GETPAGE rate is a good measure of activity -- it can be thousands per second for a buffer pool), the greater the CPU cost reduction effect delivered by large page frames.
And don't go crazy with this. Don't have a buffer pool configuration that's 80% of an LPAR's memory resource, and all page-fixed. That would likely lead to a high level of demand paging, and that would be bad for overall system performance. Know your system's demand paging rate, and strive to keep it in the low single digits per second or less, even during times of peak application activity. Leveraging z Systems memory for better performance is a good thing, but like many good things, it can be overdone.
Hi Robert. What about SCM/Flash memory?
Can I define a pageable buffer pool of insane size, and get benefits from SCM I/O response time?
I apologize for taking so long to respond.
I assume by "SCM" that you're referring to Storage Class Memory, an advanced form of solid-state memory technology that promises to be very fast, persistent, and cost-competitive.
At this time, my thinking is that I would like to avoid paging in a z/OS production environment (or at least to keep a z/OS LPAR's demand paging rate from increasing beyond the low single digits per second). This would be my preference even if my auxiliary storage tier (i.e., my page data sets) utilized solid-state technology with very rapid service time for requests. I still think that you'd be better off with a zero (or close to it) demand paging rate; so, I'd size DB2 buffer pools accordingly (not so large as to drive the z/OS LPAR's demand paging rate above the low single digits per second). Might I change my mind on this matter someday? Perhaps, but I'm not changing it now.
We are looking at altering our bigger buffer pools to PGFIX(YES) (plenty of real storage available and system demand paging minimal) and debating frame size 1 MB or 2 GB. BP sizes vary from 2 GB to 8 GB and I/O rates are low (maxing out at several hundred per second). Reading your blog I think that frame size 1 MB is more appropriate (still need to make the required LFAREA config changes), but grateful for any opinion.
-DISPLAY BUFFERPOOL however shows;
DSNB546I -DB2P PREFERRED FRAME SIZE 4K
2000000 BUFFERS USING 4K FRAME SIZE ALLOCATED
Why would DB2 'prefer' a 4Kb frame size under these circumstances? Does it depend on the system availability of larger frames for DB2 to consider, and hopefully express a preference for, a larger frame size?
Thanks and regards.
Your statement that "We are looking at altering our bigger bufferpools to PGFIX(YES)" indicates that they are now PGFIX(NO). Large page frames will NEVER be the preferred frame size for a PGFIX(NO) buffer pool, because large page frames CANNOT be used for a PGFIX(NO) pool.
For a PGFIX(YES) pool, 1 MB will be the preferred frame size by default. If you explicitly request 2 GB page frames, DB2 will use those for a PGFIX(YES) pool that is larger than 2 GB in size.
If you want to keep things simple, go with 1 MB frames for all of your PGFIX(YES) pools. Using 2 GB frames for PGFIX(YES) pools that are 2 GB or larger in size won't hurt anything, though performance will likely be similar to what you see for 1 MB frames, until you get up to a size of 20 GB or more for an individual buffer pool, at which point 2 GB frames should start to show significant CPU savings versus 1 MB frames.
If 2 GB frames deliver performance very similar to that provided by 1 MB frames for pools smaller than 20 GB in size, is there any reason to use 2 GB frames for pools between 2 GB and 20 GB in size? Yes. I've seen 2 GB frames used for such pools, for this reason: the DB2 team has decided that for their really big buffer pools, size will be adjusted upwards in increments of 2 GB (or multiples of 2 GB), and using 2 GB frames for those pools gets them into that "big chunk" mind-set. So buffer pool BP4, for example, sized at 524288 buffers (the number, I believe, that exactly fits in one 2 GB frame) and using one 2 GB frame, will be increased (if increased) by 524288 buffers (one 2 GB frame's worth of 4 KB buffers), or by a multiple of 524288 buffers, if it is enlarged at all.
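The 524288 figure mentioned above can be checked directly -- it is simply the number of 4 KB buffers that exactly fill one 2 GB frame:

```python
# 524288 buffers of 4 KB each exactly fill one 2 GB page frame.
BUFFER_SIZE = 4 * 1024    # 4 KB buffer
FRAME_2G = 2 * 1024**3    # 2 GB frame
buffers_per_2g_frame = FRAME_2G // BUFFER_SIZE
print(buffers_per_2g_frame)  # 524288
```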
Many thanks for that Robert.
Hi Robert. We just implemented LFAREA=2048M with an IPL over the weekend, and I defined two new buffer pools set to PGFIX(YES) and FRAMESIZE(1M) prior to the IPL, so these buffer pools are ready to go. From your blog and other articles I've read, I'm confused as to whether high-I/O table spaces/indexes should be put into these pools, or low-I/O-rate table spaces/indexes. I/Os and getpages aren't necessarily the same thing. I thought high-I/O objects would be best suited to use large page frames to get the biggest CPU reduction, but your blog says low I/O rate. Can you clarify please? I just want to know what would best benefit from a large page frame buffer pool. Thanks
High-I/O table spaces and/or indexes are always a good match for PGFIX(YES) buffer pools, because page-fixing buffers in memory makes I/Os cheaper, CPU-wise. When a PGFIX(YES) buffer pool is ALSO a large-frame buffer pool (and it doesn't have to be - a pool backed by 4-KB page frames could be page-fixed), it is a good pool for high-GETPAGE table spaces and/or indexes, whether I/O activity for the objects is high or low, because large page frames reduce the CPU cost of accessing pages cached in the pool. For a PGFIX(YES) pool defined with FRAMESIZE(1M) or FRAMESIZE(2G), the greatest performance benefit is likely to be seen for objects that have high levels of BOTH I/O and GETPAGE activity. I just don't want people to think that the CPU savings stop there. If you assign objects with high levels of both I/O and GETPAGE activity to a page-fixed, large-frame pool, and the pool is large enough to accommodate still more objects, assign high-GETPAGE objects to the pool, even if they get little in the way of I/O activity.
Hope that makes sense.
We are going to implement PGFIX(YES) and FRAMESIZE(1M) for most of our buffer pools, and FRAMESIZE(2G) for the buffer pools bigger than 20 GB (right now we have PGFIX(NO)).
How can we evaluate the benefit of this change?
What can we look in the DB2PM Statistics Long Report? Or in the PDB Tables?
Anything else that we can do to show the gain of the change?
Thanks in advance
I recommend checking "before" and "after" figures for average class 2 CPU time, as described in the blog entry at this URL: http://robertsdb2blog.blogspot.com/2019/01/a-case-study-measuring-impact-of-db2.html
In addition to the accounting report: from the statistics report, can we extract from the buffer pool part any data to compare the "before" and "after"?
Not really from the buffer pool part of the statistics report. Going with PGFIX(YES) and large page frames is something you do to achieve CPU savings, and most of that will be seen in an accounting report. In a statistics report, you might see somewhat reduced CPU consumption for the Db2 database services address space, owing to PGFIX(YES) making prefetch reads and database writes less costly (in checking on this you may want to normalize for workload activity differences - something you could do by adjusting based on differences in commit count in "before" and "after" statistics reports that both have the same duration with respect to "from" and "to" times).
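The commit-count normalization mentioned above can be sketched as follows. All the figures here are made-up illustrative numbers (not real measurements); the CPU seconds and commit counts would come from equal-duration "before" and "after" statistics reports.

```python
# Normalize address space CPU consumption by commit count so that "before"
# and "after" statistics intervals with different workload volumes compare fairly.

def cpu_per_commit(cpu_seconds: float, commits: int) -> float:
    """CPU seconds consumed per commit in a statistics interval."""
    return cpu_seconds / commits

# Hypothetical figures from two equal-duration statistics intervals.
before = cpu_per_commit(cpu_seconds=120.0, commits=400_000)
after = cpu_per_commit(cpu_seconds=105.0, commits=420_000)

pct_change = 100 * (after - before) / before
print(f"CPU per commit changed by {pct_change:.1f}%")
```

A negative result indicates a CPU efficiency gain per unit of work, even if raw CPU consumption rose because the workload grew.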
Thank you very much, Robert.