Wednesday, August 29, 2018

How Big is Big? (2018 Update - Db2 for z/OS Buffer Pools and DDF Activity)

Almost 5 years ago, I posted to this blog an entry on the question, "How big is big?" in a Db2 for z/OS context. Two areas that I covered in that blog entry are buffer pool configuration size and DDF transaction volume. Quite a lot has changed since October 2013, and it's time for a Db2 "How big is big?" update. In particular, I want to pass on some more-current information regarding buffer pool sizing and DDF activity.

How big is big? (buffer pools)

Back in 2013, when I posted the aforementioned blog entry, the largest buffer pool configuration I'd seen (i.e., the aggregate size of all buffer pools allocated for a single Db2 for z/OS subsystem) was 46 GB. That Db2 subsystem ran in a z/OS LPAR with 180 GB of real storage. Fast-forward to August 2018, and my, how z/OS LPAR memory resources - and exploitation of same via large buffer pools - have grown. The biggest buffer pool configuration for a single Db2 subsystem that I've seen to date? How about 879 GB, in a z/OS LPAR that has 1104 GB (almost 1.1 TB) of central storage. A single pool in that configuration has 66,500,000 buffers of 4 KB each - that's over 253 GB of space in one pool. Does that humongous amount of buffer pool space - over 600 GB of which is page-fixed in memory - put undue pressure on the z/OS LPAR's real storage? No. The demand paging rate for that system (a busy data server, processing about 14,000 SQL statements per second during peak times) is a big fat zero. That's because the 225 GB of memory not used for Db2 buffer pools is plenty for the other real storage requirements in the LPAR.
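
For those who want to check that "over 253 GB" figure, the arithmetic (using 1 GB = 1,073,741,824 bytes) works out as follows:

   66,500,000 buffers x 4,096 bytes per buffer = 272,384,000,000 bytes
   272,384,000,000 bytes / 1,073,741,824 bytes per GB = about 253.7 GB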

What does the organization with the great big Db2 buffer pool configuration get in return for using lots and lots of memory for data and index page caching? It gets tremendous suppression of disk subsystem read I/Os: during an hour in which the volume of data access activity on the system was really high, the highest total read I/O rate for any of the buffer pools was 48 per second (very low). During that peak-busy time, two of the other buffer pools had total read I/O rates of 15 and 13 per second (very, very low), five pools had total read I/O rates between 2 and 5 per second (super-low), and the other nine active pools had total read I/O rates of less than 1 per second, or even 0 (super-duper low). And what are the payoffs from tremendous suppression of disk read I/Os? CPU savings (every I/O - synchronous or asynchronous - consumes CPU time) and improved transaction and batch job elapsed times (in Db2 monitor accounting-long reports, wait time related to synchronous and asynchronous database reads - the latter is labeled "wait for other read" - becomes a very small percentage of in-Db2 elapsed time).
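
By the way, if you want to check the total read I/O rate for one of your own buffer pools, the calculation is straightforward (the exact field labels vary from one Db2 monitor product to another, so treat the names below as illustrative):

   total read I/O rate for a pool =
      (synchronous read I/Os
       + sequential prefetch read I/Os
       + list prefetch read I/Os
       + dynamic prefetch read I/Os)
      / number of seconds in the reporting interval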

z/OS LPAR memory sizes are getting larger all the time. If you've got it, use it - and using it for big (maybe really, really big) buffer pools can be a great move on your part. In doing that, don't forget to leverage page-fixed buffer pools (PGFIX(YES)), large page frames, and maybe "pinning" pools (the latter are used to cache associated database objects in memory in their entirety, and should be defined with PGSTEAL(NONE)).
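
To illustrate (the pool names and sizes below are made up for the example - they are not recommendations), the relevant Db2 commands might look like this:

   -ALTER BUFFERPOOL(BP10) VPSIZE(2000000) PGFIX(YES) FRAMESIZE(1M)
   -ALTER BUFFERPOOL(BP15) VPSIZE(50000) PGFIX(YES) FRAMESIZE(1M) PGSTEAL(NONE)

Keep in mind that a change to a pool's PGFIX specification takes effect the next time the pool is allocated (i.e., after the pool has been deallocated and reallocated), and that FRAMESIZE(1M) - an option introduced with Db2 11 - is honored for page-fixed pools when the LPAR has 1 MB frames available (made available via the LFAREA parameter in the IEASYSxx member of PARMLIB).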


How big is big? (DDF transaction volume)

In the above-cited "How big is big" blog entry, I noted that the highest DDF transaction rate I'd seen for a single Db2 subsystem (average over a 1-hour period) was 786 per second. A few months ago, I saw data from a Db2 subsystem that was processing 3057 DDF transactions per second (again, that's the average over a 1-hour period) - almost 4 times the highest DDF transaction rate I'd seen back in 2013. [It's easy to calculate a DDF transaction rate: in a Db2 monitor accounting-long report with data grouped by connection type, in the section on the DRDA connection type, divide the commit count by the number of seconds in the reporting interval, and there's your transaction rate.]
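
To make that calculation concrete with made-up numbers: if the DRDA section of such a report showed 11,005,200 commits for a one-hour (3,600-second) reporting interval, the DDF transaction rate would be 11,005,200 / 3,600 = 3,057 per second.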

I have seen that an ever-growing percentage of overall Db2 for z/OS workloads - and a really big percentage of new Db2-accessing application workloads - involve Db2 access via the distributed data facility. This, combined with the increasing processing capacity delivered by new generations of IBM Z servers, plus DDF-related performance features such as high-performance DBATs and the SMT2 mode in which zIIP engines can operate, adds up to substantial growth in DDF transaction volumes at many Db2 for z/OS sites (the organization with the DDF transaction rate in excess of 3000 per second for a Db2 subsystem runs the zIIP engines in the associated z/OS LPAR in SMT2 mode). DDF transaction rates are likely to get a further boost as companies take advantage of Db2's built-in REST interface, since that interface provides a second path to the Db2 distributed data facility (the other path being the SQL path, which could also be called the DRDA path - more on that in the next entry I'll post to this blog).
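
As a point of reference on the SMT2 item (these are z/OS settings, not Db2 settings, and your z/OS systems programmers would be the ones to make the change): running zIIP engines in SMT2 mode requires that the LPAR be operating with core-level processor views in effect, with the zIIP multithreading level then set in IEAOPTxx:

   In LOADxx:   PROCVIEW CORE
   In IEAOPTxx: MT_ZIIP_MODE=2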


Big is likely to get bigger

These upward trends in Db2 buffer pool configuration sizes and DDF transaction volumes are two that I like to see. The former reflects growing recognition that large IBM Z server real storage resources can be very effectively leveraged to turbocharge Db2 application performance, and the latter shows that Db2 for z/OS is an increasingly popular choice as the data server for modern, multi-tiered, client-server applications that access data through standard interfaces (e.g., JDBC, ODBC, ADO.NET, REST). How big will Db2 buffer pool configurations get in the years to come? How high will DDF transaction rates go? We'll see. My take is that big - even really big - is going to get a lot bigger still.