For several decades I've enjoyed reading the comic strip, B.C., about a bunch of cavemen. In one strip published long ago (I may be assigning quotes to the wrong characters, but you'll get the point), B.C. asks one of his compadres, "Thor, how far can you see?" Thor responds, "Well, I can see that cave over there, and it's about a mile away, so I guess I can see about a mile." B.C. then happens upon Peter, and asks the same question. Peter's reply: "I can see the sun, so the answer to your question is 93 million miles." Thus, B.C. learns something: how far is far? It depends on who you ask.
That truism applies to matters of size as well as distance. How big is big? More specifically, how big is big in a DB2 for z/OS context? What is a big table? What is a big buffer pool configuration? What is a big data sharing group? Ask different people, and you'll get different answers. In this blog entry I'll give you my answers to these and some other questions in the same vein. Most of the information you'll find below is based on my observations of actual, real-world DB2 for z/OS workloads. Some of the data was conveyed to me by DB2 users and by IBM colleagues. Keep in mind, then, that what I'm reporting is not necessarily "biggest in the world." It's "biggest that I've seen," or "biggest that I've heard of." I haven't been to every DB2 for z/OS site around the world, nor have I spoken with everyone in the DB2 community, so I imagine that other people could provide numbers that top the ones I've provided in this blog post. Feel free to communicate "I can beat that" information via a comment.
Without further ado, here is my take on "big" as it pertains to DB2 for z/OS.
Buffer pool configuration size. This is an area of much interest to me (it was the focus of part 2 of my 3-part "memory for MIPS" blog entry, posted in the spring of last year). The biggest buffer pool configuration I've seen (combined size of all pools allocated for a single DB2 subsystem) is 46 GB. And get this: every one of the buffer pools comprising this configuration is defined with PGFIX(YES). And get THIS: the demand paging rate for the associated z/OS LPAR is ZERO. How can you use so much page-fixed memory for DB2 buffer pools and still not have any paging? Easy: just have a lot of real storage. The z/OS LPAR on which this DB2 subsystem runs has about 180 GB of memory.
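By the way, for anyone wondering how page-fixing is turned on in the first place: it's specified at the individual pool level via the -ALTER BUFFERPOOL command. Here's a hypothetical example (the pool name and size are made up for illustration):

-ALTER BUFFERPOOL(BP1) VPSIZE(400000) PGFIX(YES)

Keep in mind that a change to a pool's PGFIX attribute takes effect the next time the pool is allocated -- typically at the next DB2 restart.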
I'll tell you, my definition of "big" as it pertains to a DB2 buffer pool configuration has changed considerably over the past few years. These days, I consider a buffer pool configuration of less than 5 GB for a production DB2 subsystem to be on the small side. Between 5 and 20 GB? I'd call that medium-sized. Big is over 20 GB.
Total disk read I/O rate for one buffer pool. I don't pay much attention to hit ratios when I look at buffer pool activity. I'm much more concerned with a buffer pool's total disk read I/O rate. That rate is the sum of synchronous and asynchronous read I/Os per second for a pool. If you get the numbers from a DB2 monitor, you'll probably see one figure for synchronous read I/Os per second, and three fields showing asynchronous read I/Os per second: sequential prefetch reads, list prefetch reads, and dynamic prefetch reads. Add all these together to get the total rate. You can also get the data from the output of the DB2 command -DISPLAY BUFFERPOOL(ACTIVE) DETAIL. If you go that route, issue the command once, then wait an hour and issue it again. The output of the SECOND issuance of the command will show one hour of activity (verify that by checking the value of the timestamp in the "INCREMENTAL STATISTICS SINCE" message in the command's output). Add up the synchronous reads (random and sequential) and the asynchronous reads (sequential, list, and dynamic), and divide the total by 3600 to get a per-second rate. Whether looking at DB2 monitor-generated information or -DISPLAY BUFFERPOOL output, do NOT make the mistake of using prefetch REQUEST numbers. You want the prefetch READ numbers (if all of the pages included in one prefetch request are already in the buffer pool, there will be no read associated with that request).
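To make the arithmetic concrete, here's the kind of calculation I'm talking about, with completely made-up one-hour read counts plugged in (and, if you like, you can let DB2 do the math for you via a query such as this):

SELECT (1250000 + 150000    -- synchronous reads (random + sequential)
      + 800000              -- sequential prefetch reads
      + 200000              -- list prefetch reads
      + 1100000)            -- dynamic prefetch reads
      / 3600 AS READ_IO_PER_SEC
FROM SYSIBM.SYSDUMMY1;

With those hypothetical numbers, the total read I/O rate is about 972 per second (3,500,000 reads divided by 3600 seconds).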
The highest total read I/O rate I've seen for one buffer pool is 9000 per second. My preference is to see a rate that's below 1000 per second for each pool. That may or may not be possible at your site, depending on the amount of server memory available for backing buffer pools.
Sort pool size. I wrote about the sort pool (and other DBM1 pools besides the buffer pools) in part 3 of the aforementioned 3-part "memory for MIPS" blog entry. The largest sort pool setting I've seen in a production DB2 environment is 48,876 KB (the default value is 10,000 KB in a DB2 10 system). On this particular subsystem, there was very little activity in the buffer pools dedicated to work file table spaces. That might be due at least in part to lots of sorts getting done in the large in-memory sort work area. Keep in mind that the sort pool value as specified in ZPARM is the maximum sort work area for an individual SQL-related sort. With lots of concurrent sorts and a large SRTPOOL value, you could see quite a bit of DBM1 virtual storage utilized for sort work space. That could be OK if you have a lot of memory on the z/OS LPAR on which the DB2 subsystem is running.
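To put a rough number on "quite a bit": in a purely hypothetical scenario with 20 large concurrent sorts, each using the full 48,876 KB maximum, you'd be looking at 48,876 KB x 20 = 977,520 KB -- getting on toward 1 GB of DBM1 virtual storage for sort work space alone.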
DDF transaction rate. Back in the 1990s, when DDF (the distributed data facility, through which network-attached application servers access DB2 for z/OS data) was still relatively young, it wasn't thought of as a conduit for high-volume transaction processing. Various technology enhancements -- including block fetch, static SQL support (via DRDA), stored procedures, and dynamic statement caching -- had a positive effect on DDF application throughput, and nowadays lots of organizations have high-volume DB2 for z/OS client-server workloads. The highest DDF transaction rate I've seen for a single DB2 subsystem is 786 per second. That's the average rate over a one-hour period, not a burst of one or two minutes' duration (the figure came from a DB2 monitor accounting long report, with data ordered by connection type -- in the DRDA section of the report, I found the number of commits and divided that by 3600, the number of seconds in the report time period). The highest DDF transaction rate I've seen for a multi-member DB2 data sharing group is 1110 per second -- also an average over a one-hour time period.
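To illustrate the arithmetic with made-up numbers: a DRDA section showing 2,700,000 commits over a one-hour report period would work out to 2,700,000 / 3600 = 750 DDF transactions per second.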
DDF share of overall DB2 workload. If you think of an overall DB2 for z/OS workload in pie chart terms, there are various ways to calculate the size of the slice that represents a component of the workload. Indeed, there are different ways to define "workload component." Personally, I like to divvy up an overall DB2 workload by connection type: there's the DRDA piece (the DDF workload), the CICS-DB2 piece, the call attach piece (one type of batch interface to DB2), the TSO piece (another mostly batch interface), etc. If you look at things from this perspective, one way to size the pie slices is to measure the aggregate in-DB2 cost of SQL statement execution for each connection type. This is easily done if you have a DB2 monitor accounting long report, with data in the report ordered by connection type: you just go to a section of the report (e.g., the CICS section), find the average class 2 CPU time (and make sure that you add "specialty engine" CPU time, if any -- this would typically be zIIP engine CPU time -- to "central processor" CPU time) and multiply that by the "number of occurrences" (this will typically be a field in the upper right of the first page of the report section for a connection type, under a heading of "Highlights" or something similar). You can decide whether you want the report to cover a particularly busy one- or two-hour time period for your system, or a 24-hour period.
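For example (and these are made-up numbers): if the CICS section of the report shows an average class 2 CP time of 0.015 seconds, an average class 2 zIIP time of zero, and 3,000,000 occurrences, the CICS slice represents 0.015 x 3,000,000 = 45,000 seconds of in-DB2 CPU time for the report period. Do the same for each connection type, and you have the relative sizes of your pie slices.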
I've observed over the years that the DRDA slice of the overall DB2 workload pie is getting larger and larger at many sites, and for some subsystems it's already the largest workload component -- bigger than the CICS-DB2 slice, bigger than the batch DB2 slices (call attach and TSO). Recently, a DB2 for z/OS DBA told me that his organization has a production subsystem for which 95% of the work flows through the DDF address space.
Number of rows in one table. Considering just the DB2 for z/OS-using organizations with which I've directly worked, the largest table of which I'm aware has about 18.9 billion rows. I recall hearing of a company that has about 90 billion rows of data in one DB2 table. The theoretical maximum number of rows in one DB2 for z/OS table is currently 1 trillion.
Members in a DB2 data sharing group. The largest DB2 data sharing group of which I'm aware has 22 members. DB2 for z/OS data sharing and Parallel Sysplex technology (the latter being the mainframe cluster configuration that provides the infrastructure on which DB2 data sharing runs) are architected to support up to 32 DB2 subsystems in one data sharing group.
That's all for now. I hope that this blog entry has provided you with information of interest. Maybe some ammo that you can use in "Can we do that?" discussions at your site. Maybe validation of plans you've made or actions you've taken for a DB2 environment that you support. Maybe something that gives you a sense of superiority ("You call THAT big?"). In any case, I'll probably return to this topic at some future date, because one thing I know about "big" is that it keeps getting bigger.
Wednesday, October 9, 2013
DB2 for z/OS: Something You Might Not Know About Character String Literal Values
Here's something I imagine a lot of you (or programs written by you) do all the time: issue DB2 for z/OS-accessing SQL statements that contain character string literal values. Below is a query that I issued from Data Studio earlier today, targeting a table that I'd created on one of our test DB2 subsystems:
SELECT PRODUCT_ID, UNIT_SALES
FROM PRODUCT_SALES
WHERE PRODUCT_ID = 'A12345';
Now, a question for you: is that character string literal value, 'A12345', fixed-length or varying-length? Considering this from your point of view, you might answer, "Fixed length." After all, it's six characters, right? No more, no less. And if you look at it again a few minutes from now, it'll still be six characters. Guess what? From DB2's point of view, the value 'A12345' in the above statement is a varying-length character string. I'm not revealing any state secrets here -- check the DB2 10 for z/OS SQL Reference and you'll see, in the first sentence of the section on character string constants, the following:
"A character string constant specifies a varying-length character string."
Of course, even a seasoned mainframe DB2 professional is not likely to have committed the entire SQL Reference to memory, and in fact plenty of people are unaware of this little nugget of information. That's generally not a big deal, because it almost never matters. Why does it hardly ever matter? Because fixed-length and varying-length character strings are compatible for purposes of assignment and comparison in a DB2 for z/OS system. Take the previously referenced SELECT statement. The 'A12345' character string constant is, as noted, treated by DB2 as a varying-length value. The PRODUCT_ID column of my PRODUCT_SALES table is defined as CHAR(6) -- a fixed-length character column. Does that matter? No -- the query runs just fine. What if you have a stored procedure with an input parameter defined as CHAR(10), and you call the stored procedure with a 10-character (or shorter) literal value provided for that input parameter? Does that matter? No -- the CALL will work just fine. Same goes for a user-defined function invocation.
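If you'd like to see the stored procedure case for yourself, here's a quick illustration (the procedure, table, and parameter names below are made up, and the body is about as simple as it gets):

CREATE PROCEDURE TEST.LOG_PRODUCT
  (IN P_PRODUCT_ID CHAR(10))
  LANGUAGE SQL
-- the single-statement body just records the value of the fixed-length parameter
  INSERT INTO TEST.PRODUCT_LOG (PRODUCT_ID) VALUES (P_PRODUCT_ID);

CALL TEST.LOG_PRODUCT('A12345');

The six-character varying-length literal 'A12345' is happily assigned to the fixed-length CHAR(10) input parameter (it's padded on the right with blanks).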
Alright, so I said that the fact that DB2 treats a character string constant as a varying-length value almost never matters. "Almost never" is not the same thing as "never." So, when does it matter? I'm aware of one case, which was recently brought to my attention by a DB2 for z/OS DBA who I've known for years. This individual was trying to create a column mask in a DB2 10 for z/OS new-function mode environment. His CREATE MASK statement looked something like this (I'm using table and column names other than the real ones):
CREATE MASK ABC.COL2_MASK
ON ABC.MY_TABLE
FOR COLUMN COL2 RETURN
CASE
WHEN (1 = 1)
THEN '999999'
ELSE '000000'
END
ENABLE;
When he issued that SQL statement, it failed with a -20474 SQL code, reason code 33, indicating that "The data type of the return expression is not the same as the data type of the column on which the column mask is defined." This didn't make sense to my DBA friend, because COL2 was defined as CHAR(6), and he'd provided 6-character literal values in the RETURN clause of the CREATE MASK statement. In this situation, the problem was related to the fact that the character string literals in the SQL statement were treated as varying-length values by DB2. Why did that matter? Because, per the DB2 10 SQL Reference, one of the rules of column mask creation is as follows:
"The result data type, null attribute, data length, subtype, encoding scheme, and CCSID of the CASE expression must be identical to those attributes of the column that is specified by column-name."
Fixed-length character strings and varying-length character strings are two different DB2 data types. Thus, the -20474 error, with reason code 33. When the DBA changed the statement as shown below, it worked:
CREATE MASK ABC.COL2_MASK
ON ABC.MY_TABLE
FOR COLUMN COL2 RETURN
CASE
WHEN (1 = 1)
THEN CAST('999999' AS CHAR(6))
ELSE CAST('000000' AS CHAR(6))
END
ENABLE;
The statement also worked when coded as follows, because the CHAR scalar function returns a FIXED-LENGTH character string representation of the argument (this from the DB2 10 SQL Reference, with emphasis added by me):
CREATE MASK ABC.COL2_MASK
ON ABC.MY_TABLE
FOR COLUMN COL2 RETURN
CASE
WHEN (1 = 1)
THEN CHAR('999999')
ELSE CHAR('000000')
END
ENABLE;
Recap time. 1) Character string literals in SQL statements are treated by DB2 for z/OS as varying-length values. 2) That almost never matters. 3) It matters when you want to create a mask for a fixed-length character column.