For several decades I've enjoyed reading the comic strip, B.C., about a bunch of cavemen. In one strip published long ago (I may be assigning quotes to the wrong characters, but you'll get the point), B.C. asks one of his compadres, "Thor, how far can you see?" Thor responds, "Well, I can see that cave over there, and it's about a mile away, so I guess I can see about a mile." B.C. then happens upon Peter, and asks the same question. Peter's reply: "I can see the sun, so the answer to your question is 93 million miles." Thus, B.C. learns something: how far is far? It depends on who you ask.
That truism applies to matters of size as well as distance. How big is big? More specifically, how big is big in a DB2 for z/OS context? What is a big table? What is a big buffer pool configuration? What is a big data sharing group? Ask different people, and you'll get different answers. In this blog entry I'll give you my answers to these and some other questions in the same vein. Most of the information you'll find below is based on my observations of actual, real-world DB2 for z/OS workloads. Some of the data was conveyed to me by DB2 users and by IBM colleagues. Keep in mind, then, that what I'm reporting is not necessarily "biggest in the world." It's "biggest that I've seen," or "biggest that I've heard of." I haven't been to every DB2 for z/OS site around the world, nor have I spoken with everyone in the DB2 community, so I imagine that other people could provide numbers that top the ones I've provided in this blog post. Feel free to communicate "I can beat that" information via a comment.
Without further ado, here is my take on "big" as it pertains to DB2 for z/OS.
Buffer pool configuration size. This is an area of much interest to me (it was the focus of part 2 of my 3-part "memory for MIPS" blog entry, posted in the spring of last year). The biggest buffer pool configuration I've seen (combined size of all pools allocated for a single DB2 subsystem) is 46 GB. And get this: every one of the buffer pools comprising this configuration is defined with PGFIX(YES). And get THIS: the demand paging rate for the associated z/OS LPAR is ZERO. How can you use so much page-fixed memory for DB2 buffer pools and still not have any paging? Easy: just have a lot of real storage. The z/OS LPAR on which this DB2 subsystem runs has about 180 GB of memory.
I'll tell you, my definition of "big" as it pertains to a DB2 buffer pool configuration has changed considerably over the past few years. These days, I consider a buffer pool configuration of less than 5 GB for a production DB2 subsystem to be on the small side. Between 5 and 20 GB? I'd call that medium-sized. Big is over 20 GB.
Total disk read I/O rate for one buffer pool. I don't pay much attention to hit ratios when I look at buffer pool activity. I'm much more concerned with a buffer pool's total disk read I/O rate. That rate is the sum of synchronous and asynchronous read I/Os per second for a pool. If you get the numbers from a DB2 monitor, you'll probably see one figure for synchronous read I/Os per second, and three fields showing asynchronous read I/Os per second: sequential prefetch reads, list prefetch reads, and dynamic prefetch reads. Add all these together to get the total rate. You can also get the data from the output of the DB2 command -DISPLAY BUFFERPOOL(ACTIVE) DETAIL. If you go that route, issue the command once, then wait an hour and issue it again. The output of the SECOND issuance of the command will show one hour of activity (verify that by checking the value of the timestamp in the "INCREMENTAL STATISTICS SINCE" message in the command's output). Add up the synchronous reads (random and sequential) and the asynchronous reads (sequential, list, and dynamic), and divide the total by 3600 to get a per-second rate. Whether looking at DB2 monitor-generated information or -DISPLAY BUFFERPOOL output, do NOT make the mistake of using prefetch REQUEST numbers. You want the prefetch READ numbers (if all of the pages included in one prefetch request are already in the buffer pool, there will be no read associated with that request).
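The arithmetic described above can be sketched in a few lines of code. The counts below are made-up numbers standing in for what you'd pull from the second issuance of -DISPLAY BUFFERPOOL(ACTIVE) DETAIL (read counts, not prefetch request counts), assuming one hour between the two issuances of the command:

```python
# Hypothetical counts from the second issuance of
# -DISPLAY BUFFERPOOL(ACTIVE) DETAIL (all values invented for illustration).
sync_reads_random = 2_400_000      # synchronous reads, random
sync_reads_sequential = 150_000    # synchronous reads, sequential
seq_prefetch_reads = 800_000       # sequential prefetch READS (not requests)
list_prefetch_reads = 90_000       # list prefetch READS
dyn_prefetch_reads = 560_000       # dynamic prefetch READS

interval_seconds = 3600  # one hour between the two command issuances

total_reads = (sync_reads_random + sync_reads_sequential
               + seq_prefetch_reads + list_prefetch_reads
               + dyn_prefetch_reads)

read_io_rate = total_reads / interval_seconds
print(f"Total read I/O rate: {read_io_rate:.0f} per second")
```

With these invented counts, the pool comes out at roughly 1100 read I/Os per second, which by the guideline below would suggest giving that pool more buffers if memory allows.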
The highest total read I/O rate I've seen for one buffer pool is 9000 per second. My preference is to see a rate that's below 1000 per second for each pool. That may or may not be possible at your site, depending on the amount of server memory available for backing buffer pools.
Sort pool size. I wrote about the sort pool (and other DBM1 pools besides the buffer pools) in part 3 of the aforementioned 3-part "memory for MIPS" blog entry. The largest sort pool setting I've seen in a production DB2 environment is 48,876 KB (the default value is 10,000 KB in a DB2 10 system). On this particular subsystem, there was very little activity in the buffer pools dedicated to work file table spaces. That might be due at least in part to lots of sorts getting done in the large in-memory sort work area. Keep in mind that the sort pool value as specified in ZPARM is the maximum sort work area for an individual SQL-related sort. With lots of concurrent sorts and a large SRTPOOL value, you could see quite a bit of DBM1 virtual storage utilized for sort work space. That could be OK if you have a lot of memory on the z/OS LPAR on which the DB2 subsystem is running.
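To make the "lots of concurrent sorts" point concrete, here's a back-of-envelope sketch using the 48,876 KB SRTPOOL value mentioned above. The concurrency figure is purely hypothetical; SRTPOOL caps the work area for one SQL-related sort, so peak concurrent sorts times SRTPOOL gives a rough upper bound on DBM1 virtual storage that could go to sort work:

```python
# Back-of-envelope upper bound on DBM1 virtual storage for sort work.
# SRTPOOL caps ONE sort's in-memory work area; each concurrent sort
# can claim up to that amount. Concurrency figure is made up.
srtpool_kb = 48_876          # the large SRTPOOL setting cited in the text
concurrent_sorts = 100       # hypothetical peak number of concurrent sorts

peak_sort_storage_gb = srtpool_kb * concurrent_sorts / (1024 * 1024)
print(f"Potential peak sort work storage: {peak_sort_storage_gb:.1f} GB")
```

Under those assumptions you'd be looking at several GB of virtual storage at peak, which is why a big SRTPOOL value really wants a correspondingly big real storage resource behind it.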
DDF transaction rate. Back in the 1990s, when DDF (the distributed data facility, through which network-attached application servers access DB2 for z/OS data) was still relatively young, it wasn't thought of as a conduit for high-volume transaction processing. Various technology enhancements -- including block fetch, static SQL support (via DRDA), stored procedures, and dynamic statement caching -- had a positive effect on DDF application throughput, and nowadays lots of organizations have high-volume DB2 for z/OS client-server workloads. The highest DDF transaction rate I've seen for a single DB2 subsystem is 786 per second. That's the average rate over a one-hour period, not a burst of one or two minutes' duration (the figure came from a DB2 monitor accounting long report, with data ordered by connection type -- in the DRDA section of the report, I found the number of commits and divided that by 3600, the number of seconds in the report time period). The highest DDF transaction rate I've seen for a multi-member DB2 data sharing group is 1110 per second -- also an average over a one-hour time period.
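The commits-divided-by-seconds calculation is simple enough to show directly. The commit count below is invented so that the result reproduces the 786-per-second figure; in practice you'd take the commit count from the DRDA section of a one-hour accounting long report:

```python
# Hypothetical commit count from the DRDA section of a DB2 monitor
# accounting long report (data ordered by connection type).
# The count is made up to illustrate the calculation.
drda_commits = 2_829_600
report_interval_seconds = 3600   # a one-hour report period

ddf_tx_rate = drda_commits / report_interval_seconds
print(f"DDF transaction rate: {ddf_tx_rate:.0f} per second")
```

One commit is counted per transaction here, so if an application commits multiple times per unit of work, the "transaction" rate derived this way will be correspondingly inflated.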
DDF share of overall DB2 workload. If you think of an overall DB2 for z/OS workload in pie chart terms, there are various ways to calculate the size of the slice that represents a component of the workload. Indeed, there are different ways to define "workload component." Personally, I like to divvy up an overall DB2 workload by connection type: there's the DRDA piece (the DDF workload), the CICS-DB2 piece, the call attach piece (one type of batch interface to DB2), the TSO piece (another mostly batch interface), etc. If you look at things from this perspective, one way to size the pie slices is to measure the aggregate in-DB2 cost of SQL statement execution for each connection type. This is easily done if you have a DB2 monitor accounting long report, with data in the report ordered by connection type: you just go to a section of the report (e.g., the CICS section), find the average class 2 CPU time (and make sure that you add "specialty engine" CPU time, if any -- this would typically be zIIP engine CPU time -- to "central processor" CPU time) and multiply that by the "number of occurrences" (this will typically be a field in the upper right of the first page of the report section for a connection type, under a heading of "Highlights" or something similar). You can decide whether you want the report to cover a particularly busy one- or two-hour time period for your system, or a 24-hour period.
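The slice-sizing method described above -- average class 2 CPU time (general-purpose plus specialty engine) multiplied by the number of occurrences, per connection type -- can be sketched as follows. All of the per-connection-type figures are invented for illustration:

```python
# Hypothetical figures from a DB2 monitor accounting long report with
# data ordered by connection type. All values are made up.
# Per connection type: (avg class 2 CP CPU sec, avg class 2 zIIP CPU sec,
#                       number of occurrences from the "Highlights" area)
workload = {
    "DRDA":        (0.012, 0.008, 3_000_000),
    "CICS":        (0.015, 0.000, 2_000_000),
    "Call attach": (0.250, 0.000,    40_000),
    "TSO":         (0.180, 0.000,    25_000),
}

# Aggregate in-DB2 CPU cost per connection type:
# (CP CPU + zIIP CPU) per occurrence, times occurrences.
totals = {conn: (cp + ziip) * n for conn, (cp, ziip, n) in workload.items()}
grand_total = sum(totals.values())

for conn, cpu in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{conn:12s} {cpu:10.0f} CPU sec  ({cpu / grand_total:6.1%} of total)")
```

With these invented inputs, the DRDA slice works out to well over half of the aggregate in-DB2 CPU cost, which mirrors the trend described in the next paragraph.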
I've observed over the years that the DRDA slice of the overall DB2 workload pie is getting larger and larger at many sites, and for some subsystems it's already the largest workload component -- bigger than the CICS-DB2 slice, bigger than the batch DB2 slices (call attach and TSO). Recently, a DB2 for z/OS DBA told me that his organization has a production subsystem for which 95% of the work flows through the DDF address space.
Number of rows in one table. Considering just the DB2 for z/OS-using organizations with which I've directly worked, the largest table of which I'm aware has about 18.9 billion rows. I recall hearing of a company that has about 90 billion rows of data in one DB2 table. The theoretical maximum number of rows in one DB2 for z/OS table is currently 1 trillion.
Members in a DB2 data sharing group. The largest DB2 data sharing group of which I'm aware has 22 members. DB2 for z/OS data sharing and Parallel Sysplex technology (the latter being the mainframe cluster configuration that provides the infrastructure on which DB2 data sharing runs) are architected to support up to 32 DB2 subsystems in one data sharing group.
That's all for now. I hope that this blog entry has provided you with information of interest. Maybe some ammo that you can use in "Can we do that?" discussions at your site. Maybe validation of plans you've made or actions you've taken for a DB2 environment that you support. Maybe something that gives you a sense of superiority ("You call THAT big?"). In any case, I'll probably return to this topic at some future date, because one thing I know about "big" is that it keeps getting bigger.