Db2 9 for z/OS became generally available back in 2007. In the years since, most of the features introduced with Db2 9 have been broadly put to use. An interesting exception is the auto-sizing of buffer pools, implemented via the AUTOSIZE(YES) option of the -ALTER BUFFERPOOL command. Based on my experience, it seems that few Db2 for z/OS sites have gone with buffer pool auto-sizing. I find that a bit perplexing. In the Db2 for Linux/UNIX/Windows user community, a feature called self-tuning memory (which applies to buffer pools and more) appears to be quite popular. Why the seeming reluctance to leverage buffer pool auto-sizing in Db2 for z/OS systems?
I'm thinking that we may be seeing a situation not unlike the one that existed back in the 90s and early 2000s vis-a-vis SMS-management of Db2 for z/OS data sets (SMS is short for System-Managed Storage, a capability provided by the z/OS operating system). Db2 management of table space and index data sets, enabled through a specification of USING STOGROUP (instead of USING VCAT) in a CREATE TABLESPACE or CREATE INDEX statement (or in an ALTER of an existing object), had become commonplace, but many Db2 DBAs and systems programmers were reluctant to take the additional step of letting SMS take care of data set placement within a disk subsystem. Folks were fine with Db2 issuing the Access Method Services commands that were needed to create data sets (what you get with USING STOGROUP), but letting the system handle data set placement was another matter. The thought of having VOLUMES('*') in a CREATE STOGROUP statement, which would hand data set placement responsibility over to SMS, made lots of Db2 systems people uneasy - they just didn't trust z/OS to get that done right; so, folks would create STOGROUPs that were associated with particular disk volumes (e.g., CREATE STOGROUP SGPRD01 VOLUMES(ABC123), CREATE STOGROUP SGPRD02 VOLUMES(XYZ139), etc.), and they would then assign individual Db2 data sets to this STOGROUP or that one - in effect, hand-placing the data sets within a disk subsystem - so as to ensure that the data sets in question went on different disk volumes. "I want to make sure that partition 1 of partitioned table space TS1 doesn't go on the same volume as partition 2 of that table space," they'd say; or, "I want to make sure that index IX1 goes on a volume other than the one holding the table on which IX1 is defined."
This very hands-on approach to Db2 data set placement started to become a major burden on DBAs and systems programmers as Db2 databases mushroomed in size and the number of data sets associated with Db2 databases shot up. Something had to give, and so some Db2 systems people - not without trepidation - decided to let SMS do a job that had become too big to handle via the old hand-placement approach. Back in 2000 (between my two IBM stints), I had recently started work in the IT department of a Db2 for z/OS-using organization when our lead z/OS systems programmer told me that she was going to go with SMS management of Db2 data sets in production. She just hoped that the performance penalty wouldn't be too severe. Guess what? At our site, and at other sites, SMS management of Db2 data sets worked just fine. Performance didn't suffer, and tons of Db2 administrator time got freed up for use in higher-value activities. Within a short time, SMS management of Db2 data sets became the norm. Key to that development: the old approach (hand-placement of Db2 data sets within disk subsystems) became increasingly untenable, while technology advances made old worries concerning the new approach (SMS management of Db2 data sets) largely moot - z/OS, IBM Z, and disk storage improvements made contention at the spinning-disk level much less of a concern than it had been in the past.
Today, a Db2 administration practice that ought to be reexamined is micro-management of buffer pools. Historically, people have sliced and diced buffer pool memory usage in a very fine-grained way. With ongoing growth in the size and number of Db2 subsystems that individual organizations use, and with Db2 for z/OS administration teams typically being small in terms of headcount, the time needed to effectively manage Db2 buffer pool sizing is, more and more, just not there. Turning that task over to Db2 (and the z/OS Workload Manager) is an increasingly attractive alternative. At the same time, technology changes have made buffer pool micro-management less worthy of a major time commitment: IBM Z memory resources have gotten huge, and as z/OS LPAR real storage sizes soar, super-fine tuning of the use of that space for Db2 buffer pools becomes less and less of an imperative. Not only that, but many a Db2 administrator's mind-set regarding appropriate buffer pool sizing has fallen behind the reality of the current hardware situation. What I mean by that: a 10-GB buffer pool configuration (aggregate size of all pools allocated for a subsystem) may seem large in a Db2 administrator's eyes, given historical standards, but in an LPAR with 100+ GB of memory (pretty commonplace these days), that 10 GB looks dinky. It's as though z/OS is saying to these people, "Hey, there are a lot of gigabytes in this system that are just lying around, doing nothing. I can see them. Give me and Db2 the keys to the buffer pool sizing car, and we'll put those sleeping gigabytes to work, and application performance will be the better for it. And don't worry about my overloading my own memory resource - I'm not going to do that. I know how to manage what I've got. Give me a chance."
In what I think might have been conceived as an enticement aimed at Db2 people who look askance at system-managed buffer pool sizing, Db2 11 provided "floor" (VPSIZEMIN) and "ceiling" (VPSIZEMAX) options for the -ALTER BUFFERPOOL command: "OK, Db2, I'll turn on auto-sizing for BP10, but I'm not going to let you make the pool smaller than X or larger than Y." I'd say, give that a shot. You can do that for several of your buffer pools (even all of them), or just one. Say you have a z/OS LPAR with lots and lots of memory (when organizations upgrade a mainframe, they often load it with buckets of additional real storage, due in no small part to attractive memory pricing), and a demand paging rate that doesn't vary from zero, even during the busiest processing times of the day (the demand paging rate for an LPAR is available, among other places, in an RMF CPU Summary Report). Say you have in this system a Db2 for z/OS buffer pool (I'll stick with BP10) that is sized at 100,000 buffers. Consider issuing a command of this nature:
-ALTER BUFFERPOOL(BP10) VPSIZE(100000) VPSIZEMIN(75000) VPSIZEMAX(150000) AUTOSIZE(YES)
Let the workload go for a while, and periodically issue -DISPLAY BUFFERPOOL(BP10) DETAIL to see what's happening with the pool. Even better, have "before" and "after" activity data for the pool, for the same hour of the same day (e.g., 10-11 AM on a Thursday), from either a Db2 monitor-generated statistics long report or the output of -DISPLAY BUFFERPOOL(BP10) DETAIL. If you go with the command for activity data and you want data for 10-11 AM, issue the command at 10 AM and again at 11 AM; because the DETAIL output shows activity accumulated since the command was last issued, the output of the second issuance will cover that one-hour period (see the sketch below). My expectation would be that things will be fine, and some performance metrics might move in a positive direction. And the system's demand paging rate will likely still be zero. Next, maybe open up auto-sizing for other pools in your system, again with floor and ceiling values with which you're comfortable. And go back to a pool for which you previously activated auto-sizing and, if the system-adjusted pool size is larger than its initial size, try another -ALTER BUFFERPOOL with a larger VPSIZE and larger VPSIZEMIN and VPSIZEMAX values, to let the system take the pool size further in the direction that it has determined is best for overall performance. Over time, you may well become not only comfortable, but happy, with Db2 and z/OS handling sizing for all of a subsystem's buffer pools.
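To make those steps concrete, here's a minimal sketch, continuing the BP10 example (the times and all of the size values are purely illustrative). First, the before-and-after activity capture for a 10-11 AM window:

-DISPLAY BUFFERPOOL(BP10) DETAIL   (issued at 10 AM)
-DISPLAY BUFFERPOOL(BP10) DETAIL   (issued at 11 AM - this output covers the 10-11 AM hour)

Later on, if auto-sizing has taken BP10 well above its initial 100,000 buffers, a follow-up command that raises the size and shifts the floor and ceiling upward could look like this:

-ALTER BUFFERPOOL(BP10) VPSIZE(150000) VPSIZEMIN(120000) VPSIZEMAX(225000) AUTOSIZE(YES)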
And, just in case you hear a little voice in your head, saying, "Careful, buddy. You spend a pretty good bit of time on buffer pool sizing. If you let the system handle that, you'll be sitting around twiddling your thumbs, and that's not good," tell that voice to shut up. Did you run out of things to do when you let SMS manage Db2 data set placement? Or when you no longer had to manage index page locking because Db2 stopped doing that kind of locking? Or when you could quit managing the package table part of the EDM pool because Db2 quit using EDM pool space for allocating packages to threads? Or when you let Db2 manage secondary disk space allocation for table space and index data sets with MGEXTSZ=YES in ZPARM and Db2's "sliding scale" algorithm? Of course not. In each and every case in which Db2 and/or z/OS took on tasks you used to do (or Db2 stopped doing things that you used to have to work to manage), you ended up staying plenty busy, with a different mix - a higher-value mix - of work. System-managed buffer pool sizing offers another opportunity to move your work mix up the value chain. I think the time's right for more people to take that step. Maybe it's your time.
Hi Robert,
What about group buffer pool tuning? How do you suggest taking this into account while letting Db2 manage local pool storage?
I would specify ALLOWAUTOALT(YES) for the group buffer pools in the CFRM policy for the Sysplex. That would enable the system to dynamically adjust the size of GBPs in response to a situation such as a system-managed increase in the size of a local buffer pool defined with AUTOSIZE(YES). Make sure that the SIZE value of a GBP in the CFRM policy is larger than the GBP's INITSIZE.
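For illustration, here's a sketch of the relevant part of a GBP's STRUCTURE definition in a CFRM policy (the structure name DSNDB0G_GBP10, the coupling facility names in the PREFLIST, and the size values are all hypothetical - substitute your own; note that SIZE is larger than INITSIZE, and that MINSIZE gives the structure a floor in much the same way that VPSIZEMIN does for a local pool):

STRUCTURE NAME(DSNDB0G_GBP10)
          INITSIZE(1200M)
          SIZE(1600M)
          MINSIZE(1200M)
          ALLOWAUTOALT(YES)
          PREFLIST(CF01,CF02)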
For the initial size of a GBP, based on a local pool's VPSIZE, you could go with the approach described in this blog entry: http://robertsdb2blog.blogspot.com/2013/07/db2-for-zos-data-sharing-evolution-of.html.
Robert
Hi Robert, thanks for this report. I did some testing with AUTOSIZE in the past. Every time Db2 decided to expand the buffer pool, a WLM-established procedure started and executed the ALTER. It was not clear to me how Db2 computes the additional storage for the ALTER command. After a while the buffer pool got bigger and bigger because I did not use the VPSIZEMAX parameter. So I should test again using the VPSIZEMIN/VPSIZEMAX bounds. In recent weeks we resized a local buffer pool from fewer than 100,000 pages up to more than 300,000 pages. The buffer pool simulation was helpful, telling me that there would be improvement.
When Db2 doesn't use storage that has been added via AUTOSIZE, is it released again, down to the VPSIZEMIN value?
best regards
Sven
The precise algorithm for dynamic adjustment of a buffer pool's size is not externalized in the Db2 for z/OS documentation. A good description of how Db2 and WLM work together to adjust buffer pool sizes can be found in sections 3.2 and 13.13 of the IBM redbook titled "DB2 9 for z/OS Technical Overview" (downloadable from http://www.redbooks.ibm.com/abstracts/sg247330.html?Open).
After the size of Db2 buffer pool X has been automatically increased via AUTOSIZE(YES), that buffer pool could subsequently be automatically downsized if a) it is determined that another pool should be enlarged to enhance system performance, b) unused system memory is insufficient to accommodate the increase in the size of that other pool, and c) it is determined that the size of buffer pool X could be decreased without adversely impacting overall system performance. If a buffer pool's size is automatically decreased via AUTOSIZE(YES), I would not necessarily expect the pool's size to immediately go down to the VPSIZEMIN value. In any case, the size of the pool will not go below VPSIZEMIN.
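By the way, if you want to keep an eye on where an auto-sized pool currently stands, a quick check (sticking with the BP10 example from the blog entry) is to issue:

-DISPLAY BUFFERPOOL(BP10)

The output shows the pool's current size along with its attributes, including AUTOSIZE, and you can compare what you see there against the VPSIZEMIN and VPSIZEMAX values you specified.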
Robert
The "IBM z/OS 2.2 Performance" redbook also has an interesting section on how the function is handled now.