Monday, January 26, 2015

A Java Program, a COBOL Program, and a DB2 Update - An Interesting Case Study

Not long ago, an IBM colleague of mine who is a WebSphere Application Server (WAS) for z/OS specialist sent to me and to a few other IBMers (including some of our CICS and DB2 for z/OS developers) some information about an application scenario, and asked for our ideas and feedback. The scenario involved a Java program that accessed a local DB2 for z/OS subsystem (i.e., a subsystem in the same z/OS LPAR as the WAS instance) and selected a row to be updated, and then invoked a COBOL program (also in the same z/OS LPAR) that performed the actual row update operation. The IBM WAS for z/OS specialist had some particular questions about DB2 locking in this situation, and was looking for an approach that would be favorable from a concurrency perspective. Everyone chimed in with their thoughts on the matter, and in this blog entry I'll synthesize and summarize that information. I think it makes for an interesting case study.

The client with which my WAS for z/OS specialist colleague (named Edward) was working had an application requirement whereby, as I mentioned, a Java program was to select a DB2 row for update and then was to invoke a COBOL program which would perform the prescribed update operation. The COBOL program existed already in the form of a CICS transaction, and as CICS has advanced capabilities regarding interaction with Java applications, Edward wanted to assess the options available for invoking the COBOL program in its present, CICS transactional form. Edward's chief concerns were locking and data integrity (locking, of course, is a mechanism for preserving data integrity). Edward was thinking that he wanted the Java program to acquire and retain a lock on the DB2 row to be updated, so as to prevent some other program from altering the record between its selection by the Java program and its subsequent update via the Java-invoked CICS transaction; he was concerned, however, that the Java-acquired lock on the row would end up blocking the update action of the called CICS transaction program.

The impact of the concurrency issue raised by Edward hinged on whether or not the selecting Java program and the updating CICS program would be seen by DB2 for z/OS as being associated with the same "global" transaction: if "no" then a lock (row- or page-level, depending on the target table space's LOCKSIZE attribute) acquired by the Java program would indeed block the update attempt by the invoked CICS transaction; if "yes" then a Java-acquired lock would be owned by the global transaction and would therefore be inherited by the CICS program. In that latter case, then, even an X-lock acquired on the row (or the row's page) by the Java program would not impede the CICS program's subsequent update operation. [Sometimes, the choice is indeed to X-lock a selected row, to prevent it from even being read by another program prior to its "downstream" modification by a called process -- such a "hands-off" lock is sometimes acquired by way of an UPDATE statement that sets the value of a column in a row equal to its current value, thereby X-locking the row without actually changing its contents.]
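
To make that "hands-off" lock technique concrete, here's a minimal sketch of such a same-value UPDATE (the table and column names -- ACCOUNTS, ACCT_STATUS, ACCT_NO -- are made up for illustration):

-- X-lock the target row without actually changing its contents
UPDATE ACCOUNTS
  SET ACCT_STATUS = ACCT_STATUS
  WHERE ACCT_NO = ?;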

To this question our CICS and DB2 for z/OS developers responded with some enlightening information. First, if the row-selecting Java program were to run in WAS for z/OS and then link into CICS to invoke the row-updating program, the two processes (row-select and row-update) would be seen by DB2 as two different units of work, and the result would be a lock conflict. This conflict could be avoided by having the Java program execute in the CICS Liberty environment (the Liberty Profile is a lightweight Java application development and runtime environment, one form of which is available within CICS); however, CICS Liberty itself is only one part of the solution -- there is also the matter of the JDBC driver used to access DB2 for z/OS from the Java program. If the type 4 JDBC driver is utilized, the global nature of the overall transaction will not be recognized by DB2, and the lock conflict challenge will remain. If the type 2 JDBC driver is used (and that is an option when the target DB2 subsystem is in the same z/OS LPAR as the CICS region), and if the link to the COBOL program is local (i.e., not a distributed program link to a program in another CICS region), access to DB2 from the Java program running in the CICS Liberty environment and from the COBOL program will both be associated with the same DB2 thread; therefore, any lock acquired on DB2 data by way of the row-selecting Java program will belong as well to the row-updating COBOL program.

So, a CICS-based solution satisfying the application requirement was definitely available to the organization with which Edward was working. As an alternative approach -- a plan B, if you will -- I suggested to Edward the following: have the Java program invoke the row-updating COBOL program as a DB2 stored procedure. In that case, the same DB2 thread should be used for the Java-based SELECT and the COBOL-based UPDATE. Going the stored procedure route could in some cases be quite straightforward: generally speaking, if a COBOL program that executes in a CICS region issues SQL statements but no CICS commands, it can be invoked in the form of a stored procedure with little, if any, modification.
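
As a sketch of what that route might look like (all names here -- UPDATE_ACCT, UPDACCT, WLMENV1 -- are hypothetical, and I'm assuming an already-defined WLM application environment for the stored procedure address space):

CREATE PROCEDURE UPDATE_ACCT
  (IN P_ACCT_NO CHAR(10), IN P_NEW_STATUS CHAR(1))
  LANGUAGE COBOL
  EXTERNAL NAME UPDACCT
  PARAMETER STYLE GENERAL
  WLM ENVIRONMENT WLMENV1
  PROGRAM TYPE MAIN;

-- invoked from the Java program via:
CALL UPDATE_ACCT(?, ?);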

Yet another approach would not depend on the Java and COBOL programs sharing locks. Suppose that the Java program did NOT acquire and retain a lock on the target DB2 row prior to invoking the updating COBOL program (assuming that the typical cursor stability isolation level is in effect, an S lock acquired on a target row will be released following completion of a SELECT statement). If a lock on the target row is not acquired and retained by the Java program, how can one avoid the situation in which some other program comes in and changes the row before the Java-invoked COBOL program gets to it, potentially leading to an undesired "double update" of the target row? The answer, in a nutshell, is that you don't avoid that situation. Instead, you expect that it won't happen, and you detect it if it does. This is what so-called optimistic locking is about.

Optimistic locking was introduced with DB2 9 for z/OS, and there is a good description of this feature in section 7.3 of the IBM "redbook" titled DB2 9 for z/OS Technical Overview. Basically, it works like this: you alter the target table to add what is called a row change timestamp. That timestamp value is maintained by DB2 and it does what you'd expect: it records the time at which a row is changed. With that done (and continuing with the application requirement of which I've written), the Java program would select a row (without retaining a lock on the data) and then invoke the COBOL program to update the row. In doing that, the Java program would pass to the COBOL program the value of the row's last-changed timestamp. The COBOL program would then issue an UPDATE that would search for the target row using the appropriate qualifying criteria (e.g., an account number) AND the value of the row-change timestamp retrieved by the Java program's SELECT. If the target row has already been changed by an intervening statement (i.e., a data-changing SQL statement that gets to the row between the Java SELECT and the COBOL UPDATE), the row-change timestamp value will have changed (again, DB2 maintains that value), and the COBOL UPDATE won't find the target row -- that's how you avoid the aforementioned "double-update" scenario. If you want, you could have the COBOL program in that situation select the target row using its unique key but NOT with a predicate referencing the row-change timestamp -- that would show you that the row is there (assuming the intervening data-change statement wasn't a DELETE), but that it was updated between the Java SELECT and the COBOL UPDATE. The extra SELECT to check on this occurrence might strike you as being a little expensive, but keep in mind that it's something your program will rarely have to do -- the term "optimistic locking" suggests an expectation that the target row will NOT be changed between the Java SELECT and the COBOL UPDATE. Something that's a little expensive but seldom done is usually not a big deal.
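
In code form, the optimistic locking approach might look something like the following sketch (ACCOUNTS, ROW_CHG_TS, and ACCT_NO are made-up names):

-- one-time setup: add a row change timestamp to the table
ALTER TABLE ACCOUNTS
  ADD ROW_CHG_TS TIMESTAMP NOT NULL
    GENERATED ALWAYS FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP;

-- Java program: select the row, remembering the timestamp value
SELECT ACCT_STATUS, ROW_CHG_TS
  FROM ACCOUNTS
  WHERE ACCT_NO = ?;

-- COBOL program: update the row only if it is unchanged since the SELECT
UPDATE ACCOUNTS
  SET ACCT_STATUS = ?
  WHERE ACCT_NO = ?
    AND ROW_CHG_TS = ?;

If that UPDATE gets a "row not found" result, the row was changed (or deleted) between the SELECT and the UPDATE, and the application reacts accordingly.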

There you have it: at least three ways to address the stated application requirement:
  • Use the CICS Liberty environment for the Java program, together with the type 2 JDBC driver and a local link to the COBOL program in its CICS transaction form, or
  • Invoke the COBOL program as a DB2 stored procedure, or
  • Use the optimistic locking approach.

The first two of these options involve locking the target row via the selecting Java program (as mentioned, that could be done by updating the row to set a column equal to itself), and then invoking the updating COBOL program which will not be blocked by the Java-acquired lock because it will share ownership of that lock. The third approach does not depend on such shared lock ownership.

Faced with a similar application requirement, the approach you should take is the one that best suits your needs and your environment. Always nice to have options, I say.

Monday, December 22, 2014

DB2 for z/OS: Time to go Bold and Big with System z Memory

With 2015 right around the corner, many people are probably starting to ponder their new year's resolutions -- the things they aim to do with a fresh set of twelve months. To the usual line-up of action items ("I will get in shape," "I will read more," "I will eat more fruits and vegetables," etc.), I would suggest that you add the following: "I will be bold in using memory for the production DB2 for z/OS systems I support." If that doesn't initially grab you like the idea of learning to play a musical instrument (though that could be a good resolution), give me a chance to explain where I'm coming from.

First, I need to give credit to Martin Packer, my UK-based colleague and one of IBM's leading System z experts, for the "be bold" directive. Martin is one of a growing number of people in the worldwide DB2 for z/OS community who have seen that, with the possible exception of WebSphere Application Server for z/OS, no subsystem responds as positively to increased memory resources as does DB2: the more memory you provide to a DB2 for z/OS subsystem, the better your DB2-accessing applications will perform, and the more CPU-efficient your DB2 workload will be. Martin likes to tell his clients to "be bold" in boosting the real storage resources with which a DB2 subsystem can work because he knows (and he's right) that incremental increases aren't the way to move the needle significantly with regard to DB2 performance and CPU efficiency. Big impact comes from big moves. Don't be thinking 5% or 10% more memory. Think 100% or more.

Right here, some folks might be thinking, "Maybe Catterall gets a commission on IBM sales of mainframe memory." Not so. I'm paid to help organizations succeed with DB2 for z/OS, and Big Memory is a big part of that. In multiple ways over the past decade or so, IBM has plowed the field, removing barriers that formerly would have gotten in the way of going big with respect to provisioning mainframe memory:
  • Software -- About a dozen years ago, z/OS V1.2 delivered 64-bit storage addressing, allowing for 16 exabytes (that's 16 million terabytes) of virtual storage addressability (versus the 2 gigabytes we had with the old 31-bit addressing scheme). DB2 Version 8 exploited that technology by moving most DB2 storage pools above the 2 GB "bar" in the DB2 database services address space (aka DBM1).
  • Hardware -- The old zSeries line of servers (z990, z900, z800, and z890) enabled byte-addressable use of more than 2 GB of real storage back in the early 2000s. Today, an EC12 mainframe can be configured with up to 3 terabytes of real storage.
  • Economics -- System z memory costs much less now than it did just a few years ago.
Of course, being able to provision a large amount of System z memory is of value to the extent that you can exploit that big real storage resource. DB2 has provided all kinds of help here, as pointed out below.

Buffer pools

The poster child of Big Memory exploiters, DB2 buffer pools have been above the 2 GB bar in DBM1 since Version 8. Growing your DB2 buffer pool configuration can boost performance in two ways:
  • Response time and throughput -- Take a look at an accounting long report generated by your DB2 monitor. Chances are, the largest components of in-DB2 elapsed time for your applications are read I/O-related: wait for synchronous read (on-demand reads of individual table space or index pages) and wait for "other" read (this has to do with prefetch reads: if a program requests a page that is currently scheduled for read-in via dynamic, list, or sequential prefetch, the program will wait for the prefetch read to complete); thus, these wait-for-read occurrences tend to be your primary leverage point for improving response time and throughput for DB2-accessing applications. The bigger your buffer pools, the lower the incidence of wait-for-read situations.
  • CPU efficiency -- Some System z people are under the mistaken impression that a mainframe's I/O assist processors take on all of the CPU load associated with I/O operations. Not true. The I/O assist processors certainly help, and they are one reason for System z's preeminence when it comes to handling I/O-heavy workloads, but make no mistake: every I/O operation consumes cycles of a general-purpose mainframe engine (or zIIP engine cycles, if we're talking about a zIIP-eligible workload). Synchronous read I/Os eliminated via bigger buffer pools lead to reduced in-DB2 CPU time (also known as "class 2" CPU time) for DB2-accessing applications, and avoided prefetch reads and database writes reduce the CPU time consumed by the DB2 database services address space (with DB2 10, prefetch reads and database writes became 100% zIIP-eligible, so savings here can help to avoid zIIP contention issues).
DB2 11 for z/OS anticipated configuration of extra-large buffer pools by providing support for backing page-fixed buffer pools (i.e., those defined with the PGFIX(YES) option) with 2 GB real storage page frames (larger real storage page frames -- 4 KB was once the only frame size available -- improve CPU efficiency by reducing the cost of virtual to real storage address translation). Now, you might think that 2 GB page frames would be way better for address translation efficiency than 1 MB page frames (another page frame size choice, first supported for page-fixed buffer pools with DB2 10), and they do offer an advantage here, but you're not likely to see a significant CPU efficiency gain versus 1 MB page frames unless the buffer pool in question is REALLY big. How big is really big, in this context? I don't know that we have a single hard and fast number, but a DBA at one site running DB2 11 for z/OS told me that he started to see a nice boost in CPU efficiency when he got a page-fixed buffer pool allocated across ten 2 GB page frames. That's right: a single buffer pool sized at 20 GB.
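
To give that some tangibility, here's what defining such a pool might look like (BP8 is an arbitrary pool choice; VPSIZE is expressed in buffers, and 5,242,880 4K buffers works out to 20 GB; FRAMESIZE is the DB2 11 option that requests a particular real storage frame size for a page-fixed pool):

-ALTER BUFFERPOOL(BP8) VPSIZE(5242880) PGFIX(YES) FRAMESIZE(2G)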

Does the thought of a single 20 GB buffer pool seem really "out there" to you? It might, given that most DB2 shops I've seen have entire buffer pool configurations (by which I mean the aggregate size of all buffer pools associated with one DB2 subsystem) that are well under 20 GB in size. Here's my message for you: it's time to start wrapping your brain around the idea of really big buffer pool configurations, and getting the ball rolling in that direction at your site. Let me give you a data point for your consideration: at an organization with which I've worked for a number of years (a large supplier to many firms in the manufacturing industry), the size of the buffer pool configuration for the production DB2 subsystem is 90 GB. And get this: every one of the buffer pools in this configuration is defined with PGFIX(YES). Are these people crazy? Are they thrashing horribly? No. Their demand paging rate (a z/OS monitor-reported metric, and my preferred indicator of pressure on an LPAR's memory resource) is ZERO. It's zero because the LPAR in question is configured with 212 GB of real storage -- there's plenty of memory for use beyond DB2 page caching. Oh, and this company also runs WebSphere Application Server, another Big Memory exploiter, in that same z/OS LPAR. It's a client-server application system in a box (the WebSphere apps access DB2 for z/OS data), and performance-wise, it screams.

I think that it won't be long before we see DB2 buffer pool configurations north of 100 GB.

Thread storage

CICS-DB2 people, in particular, have long known of the CPU efficiency gains to be had when persistent threads (those being threads that persist through commits, such as CICS-DB2 protected entry threads) are paired with DB2 packages bound with RELEASE(DEALLOCATE) (organizations using IMS Transaction Manager with DB2 get the same benefit with WFI and pseudo-WFI regions, and DB2 10 extended this performance tuning possibility to DDF-using applications via high-performance DBATs). The thing is, RELEASE(DEALLOCATE) packages executed via persistent threads make for "fatter" threads, from a virtual and real storage standpoint (fellow IBMer John Campbell's term -- I like it). That required, in the past, very selective use of RELEASE(DEALLOCATE), not only because virtual and real storage were at a premium, but because the fatter threads consumed space in the part of the DB2 EDM pool used for packages allocated to threads (if you ran out of space in that section of the EDM pool, you'd have programs failing). Starting with DB2 10 for z/OS, that situation changed: for packages bound (or rebound) in a DB2 10 system, almost all of the virtual storage used for copies of the packages when they were allocated to threads went above the 2 GB bar in DBM1; furthermore, those packages were copied to agent local pool storage when allocated to threads, as opposed to taking up EDM pool space. The upshot? DB2 10 provided much more virtual storage "head room" (John Campbell, again) for the pairing of RELEASE(DEALLOCATE) packages and persistent threads. Being able to use more virtual storage to gain greater CPU efficiency is nice, but only if the extra virtual storage utilization is backed by a sufficient real storage resource. Getting a bigger bang from RELEASE(DEALLOCATE) packages and persistent threads is, then, another reason to go big with memory when it comes to DB2.
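
For reference, getting a package into this mode is just a matter of a bind option (the collection and package names below are made up):

BIND PACKAGE(COLL1) MEMBER(PGM1) ACTION(REPLACE) RELEASE(DEALLOCATE)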

Other ways to exploit Big Memory for better DB2 performance

How about a bigger dynamic statement cache? Especially for quick-running dynamic SQL statements (such as those that tend to dominate for many high-volume DB2 DDF-using applications), the CPU cost of statement preparation can be several times the cost of statement execution. The bigger the DB2 dynamic statement cache (sized via the EDMSTMTC parameter in ZPARM), the higher your cache hit ratio is likely to be (the percentage of statement "finds" for cache look-ups), and that will result in CPU savings. I regularly see, these days, dynamic statement cache hit ratios (as reported by a DB2 monitor) that exceed 90%. If your hit ratio is less than that, consider enlarging your dynamic statement cache. You have all the virtual storage you'd want for that (the space has been above the 2 GB bar in DBM1 since DB2 Version 8) -- you just need to back that virtual storage with real storage.

Often getting less attention than the dynamic statement cache, the DB2 sort pool presents another opportunity to use more memory to get more done with less CPU. The sort pool is the in-memory work space available to DB2 for processing a SQL-related (as opposed to utility-related) sort operation (such as one that might be required to satisfy an ORDER BY, GROUP BY, or DISTINCT specification). The more of the processing for a SQL sort that can be accomplished in the sort pool (versus using space in the DB2 work file database), the better, in terms of CPU efficiency. The default size of the sort pool (as determined by the value of the SRTPOOL parameter in ZPARM) is 10 MB (it was 2 MB before DB2 10). The largest sort pool size I've seen at a DB2 site is 48 MB. The max size is 128 MB, but don't go crazy here -- keep in mind that this value is the maximum in-memory sort work area that DB2 can use per concurrent SQL sort; so, if you had a 40 MB sort pool and 10 big SQL sort operations were going at one time, you could see 400 MB (10 X 40 MB) of virtual storage used for sort work space. Of course, lots of virtual storage for SQL sort processing is plenty do-able if (same song, nth verse) you back that virtual storage with a large real storage resource.

And don't forget the RID pool, used by DB2 for sorting and otherwise processing lists of RIDs (row IDs, obtained from indexes) that are used for certain data access paths (such as list prefetch, and index ANDing and ORing). DB2 10 raised the default size of this pool (see your MAXRBLK setting in ZPARM) to 400 MB from the former default of 8 MB. DB2 10 also changed things so that running out of space in the RID pool for a RID list processing operation would result in that processing being continued using work file database space (versus the previous out-of-RID-pool-space action, which was to abandon RID list processing in favor of a table space scan). That was a good change, but if you see some work file spill-over for RID list processing (your DB2 monitor can show you this), you might consider making your RID pool larger -- completing a RID list processing operation using a combination of RID pool and work file space is probably better than going to a table space scan, but getting all the RID list work done in the RID pool should deliver optimal CPU efficiency.

Closing thoughts

From the start, DB2 for z/OS was architected to make good use of large server memory resources (it was one of the first exploiters of the old MVS/XA operating system, which first gave us access to 2 GB -- versus 16 MB -- of byte-addressable space on a mainframe server), and that focus continues. If you want to take the performance and CPU efficiency of your DB2 for z/OS system to the next level, go big on memory. Are you looking to have a z/OS environment in which CPU capacity is well balanced by real storage? I'd think in terms of 20 GB or more of memory for each engine (zIIP as well as general-purpose) in an LPAR.

Keep in mind that even a large real storage resource can be over-committed. As previously noted, I like to use the demand paging rate for an LPAR as the indicator of pressure on the LPAR's real storage. In expanding DB2's use of memory (larger buffer pools, more thread storage, etc.), I would aim to keep the demand paging rate from getting out of hand. My take? If that rate is in the low single digits per second, on average, during busy processing times, it's not out of hand. Mid-single digits per second is what I see as "yellow light" territory. If I saw a demand paging rate in the higher single digits per second (or more), I'd want to either make more memory available to the LPAR (my preference) or reduce virtual storage utilization. Of course, a zero demand paging rate is great, but don't let a very small, non-zero rate get you overly bothered.

More and more organizations are seeing first hand what Big Memory can do for a DB2 subsystem and the applications that access that subsystem, and they like what they see. So, in 2015, if you haven't already done so, do what my friend Martin recommends: on the DB2 memory front, go bold and go big.

Monday, November 24, 2014

DB2 for z/OS: How Low Can Synchronous Read Wait Time Go?

Short answer: lower than I'd previously thought.

Short explanation: solid-state drives.

OK, backstory: I was recently analyzing some performance information related to a client's production DB2 for z/OS environment. Included in the documentation provided by the client were some Accounting Long reports generated by their DB2 monitor (these reports format data contained in DB2 accounting trace records). As I typically do when reviewing such reports, I checked out the wait time per synchronous read I/O operation: I located the "Class 3 wait time" heading in the report (data therein comes from DB2 accounting trace class 3), found the line for "Database I/O" under "Sync I/O," noted the "Average time" (the average time, per accounting record, spent waiting on single-page, on-demand read I/Os to complete), and divided that figure by the "Average event" number in the same line (that's the average number, in this case, of synchronous read I/O operations per accounting record). The quotient thus obtained gives you wait time per synchronous read. On seeing this number, I did a double take.
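
By the way, the arithmetic here is simple. With made-up numbers for illustration, if the report showed the following, wait time per synchronous read would be 0.8 milliseconds:

Average time  / Average event = wait per synchronous read
0.004000 sec  / 5.00          = 0.000800 sec (0.8 ms)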

Back in the mid-1990s, at an IBM storage systems conference, I delivered a presentation that provided a DB2 for z/OS perspective on I/O performance. In that presentation, I mentioned that getting average wait time per synchronous read below 30 milliseconds was a good thing, and that getting this time to 20 milliseconds was a great thing. Not long after that, large cache memory sizes became the norm for enterprise storage subsystems, and synchronous read wait times dropped sharply at sites where this technology was put to use (I recall being surprised when I first saw, in the late 1990s, an average wait time per synchronous read number that was below 5 milliseconds). As time passed, more performance-boosting technologies appeared on the mainframe storage systems scene, including faster channels, parallel access volumes (a z/OS feature that enabled multiple I/O operations targeting a single disk volume to execute concurrently), and increasingly sophisticated algorithms for managing disk controller cache memory resources. By around 2010, I was seeing average wait time per DB2 for z/OS synchronous read go as low as 1 millisecond.

And the wait time per synchronous read that I saw in the client data I'd recently reviewed? 0.25 milliseconds. That's 250 microseconds. "Your DB2 synchronous read I/Os are screaming. How do you get numbers like that?" I asked one of the client's DB2 for z/OS DBAs. "Oh," he said, "that would be the solid-state drives."

So there you have it. I certainly knew that solid-state drives (SSDs) were on the market (IBM is one of the vendors of SSDs), but I hadn't yet seen the effect of a wide-scale deployment of the technology at a DB2 for z/OS site (in some cases SSDs are used in a niche fashion for a relatively small percentage of an organization's data assets). Since SSDs don't have the actuators ("arms") associated with traditional spinning disk drives, so-called seek-time delay (the time it takes to move a read/write "head" to a particular track on a disk) is largely eliminated, and that contributes to substantially improved read I/O performance, especially for transactional workloads characterized by random reads.

It's all part of a big picture that continues to change (and for the better). Making synchronous reads go as quickly as possible is still, of course, an important aspect of optimizing DB2 for z/OS I/O performance. In that regard, utilization of SSDs is becoming a more attractive option as the technology becomes more cost-competitive relative to spinning-disk storage systems. The other key to driving down I/O wait times is to eliminate them altogether via larger DB2 buffer pools. And folks, when I say larger, I mean MUCH larger than a lot of you have been thinking. It's time to go big -- really big -- in the buffer pool department. I'll have more to say on that topic in an upcoming blog entry, so stay tuned.

Wednesday, October 29, 2014

DB2 for z/OS: Thoughts on Table Space and Index Page Size Selection

The other day I got a note, from a DB2 for z/OS DBA, in which I was asked for my thoughts regarding the selection of page size for a table space. I typed out what I hoped would be a useful response, and found myself thinking, as I sometimes do, "Hey -- I could turn this into a blog entry." And so I have. I'll start here by offering some guidelines on page size selection for table spaces, and then I'll shift to the index perspective on the matter.

Table spaces

A table space (or an index) gets its page size from the buffer pool to which it is assigned; so, a table space assigned to a buffer pool with 8K buffers will have 8K pages.

First, and obviously, the row length of a table is an important factor in selecting the page size for the associated table space. Row length determines the minimum page size that you can use for a table space. If the rows of a table are 7000 bytes in length, the pages of the associated table space will have to be at least 8K bytes in size, because a table row cannot span pages of a table space (an exception to this rule: a long LOB or XML value would span multiple pages in a LOB table space or an XML table space).

Once you've determined the minimum possible size for the pages of a table space, you can think about whether or not a page size larger than the minimum would be a good choice. Why might you want to go with a larger-than-minimum page size? One reason would be optimization of disk and server memory space utilization. Suppose that the row length for a table is 5000 bytes. The minimum page size for the associated table space would be 8K bytes; however, as rows can't span pages, that page size would waste quite a bit of space on disk and in memory (in each page you'd have 5K bytes occupied by a table row, and the remaining 3K bytes of space would be empty). In this case, a 16K page size would boost space utilization considerably: 15K bytes in each page could be occupied by 3 rows, leaving only 1K bytes of unused space. If you're wondering whether data compression would change this calculus, the answer is probably not -- that is to say, the right page size for an uncompressed table space is probably the right page size to use if you want to compress the table space; so, base page size selection on the uncompressed length of a table's rows.
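
And remember, page size follows from the assigned buffer pool, so the 16K choice in the example above would be effected by way of the table space's buffer pool assignment. A minimal sketch (the database, table space, and storage group names are hypothetical):

CREATE TABLESPACE TS1 IN DB1
  USING STOGROUP SG1
  -- 16K buffers, therefore 16K pages:
  BUFFERPOOL BP16K0;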

Here's another reason to consider a page size that is larger than the minimum possible size for a table space (which, again, is determined by uncompressed row length): data access patterns. If data in a table space is typically accessed sequentially (i.e., if sequential scans are the rule for a table space, versus random single-page access), a larger page size could mean fewer GETPAGEs for those sequential scans, and that could boost CPU efficiency for applications that drive the sequential scans. Note that "sequential scan" doesn't necessarily mean a table space scan. It could be a matching index scan that would drive dynamic prefetch.

Just as access patterns for data in a table could lead you to select a larger-than-minimum page size for a table space, so they could push you in the other direction. When access to data in a table space is dominated by random, single-page reads, going with the minimum possible page size is probably the right move, so as to maximize buffer pool effectiveness -- in particular when SELECT statements targeting a table tend to have single-row result sets, you don't want to use an overly-large page size, because larger pages would bring into memory many rows that are not needed by a query, in addition to the one row that is needed (exception: if singleton SELECTs are executed via a "do loop" in a batch program, each time issued using a search value obtained from a file that is ordered in such a way that access to the target table is in fact sequential, dynamic prefetch would be expected to kick in and a larger page size could therefore be an efficiency-booster).

One more consideration: if you want to utilize LOB in-lining for a table (a DB2 10 for z/OS new-function mode feature), that could affect your table space page-size decision. Suppose a table has a LOB column, and the length of the table's rows without the LOB values would make 4K bytes a good page size for the associated base table space. If you determined that a high percentage of the table's LOB values could be completely in-lined with (for example) a 16K page, and if that inlining would deliver performance benefits for you, you might go with that 16K page instead of the minimum 4K page. You can read more about LOB inlining in an entry that I posted to this blog a couple of years ago.


Indexes

Prior to DB2 9 for z/OS, there was one choice for the page size of an index: 4K bytes. Starting with DB2 9 in new-function mode, the page size for an index could be 4K, 8K, 16K, or 32K bytes. Why might you go with a larger-than-4K page size for an index on a DB2 table? One reason would be to enable index compression -- something that requires the use of a greater-than-4K index page size (as noted in an entry I posted a few years ago to the blog that I maintained while working as an independent DB2 consultant).
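
A sketch of a compressed index definition, with made-up names (the greater-than-4K page size comes from the index's buffer pool assignment):

CREATE INDEX IX_ACCT ON ACCOUNTS (ACCT_NO)
  BUFFERPOOL BP16K0
  COMPRESS YES;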

Compression isn't the only reason to consider a larger-than-4K page size for an index. Another motivator could be a desire to reduce index page split activity. If an index is defined on a key whose values are not continuously-ascending (i.e., if key values for rows newly inserted into the underlying table are likely to be less than the key's current maximum value), throughput for high-volume insert operations could be constrained by index page split activity. [Unlike rows in a table, entries in an index MUST be physically ordered by key value, and that leads to index page splits when new key values are inserted into the "middle" of an index, as will be the case for an index defined on a non-continuously-ascending key -- if the page into which a new index entry must go is full, the page will be split, and a portion of the page's entries will be moved to a previously empty page in the index.] Index page splits can have a particularly significant impact on insert throughput in a DB2 data sharing environment, owing to an attendant increase in log write activity. When index pages are larger, index page split activity tends to decrease; so, if a primary concern is optimized throughput for high-volume insert operations, an index page size of 16K or 32K bytes could be preferred over the traditional 4K size.

Larger index page sizes can also improve the performance of index scans by reducing associated GETPAGE activity. On top of that, larger page sizes could reduce the number of levels for an index (in the B-tree index structure, the root page is on top, logically speaking; the leaf pages are on the bottom; and one or more non-leaf levels can be in between), and that would in turn result in reduced GETPAGE activity for index probes.

On the other hand, if compression is not desired for an index, and if the primary performance concern is optimization of data retrieval involving index access, and if access to entries in an index tends to be random (versus sequential) in nature, a 4K page size is probably your best choice.

A note on changing an object's page size

If you implement a table space or an index with a certain page size and you later determine that a different page size would deliver better performance, you can make that change by way of ALTER TABLESPACE or ALTER INDEX, with a buffer pool specification that would reassign the object to a buffer pool with larger- or smaller-sized buffers, as desired. Starting with DB2 10 for z/OS running in new-function mode, a change to an object's page size is a pending DDL operation that can be non-disruptively accomplished with an ALTER (as mentioned above) followed by an online REORG of the target object. Note, however, that page-size changing via pending DDL is only possible if the table space being altered (or the underlying table space on which a to-be-altered index is defined) is of the universal variety -- one of several incentives to get to universal table spaces (prior to DB2 10, a table space could not be altered to have a different page size, and altering an index to change the page size would place the index in rebuild-pending status).
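
In a DB2 10 (or later) new-function mode environment, then, a non-disruptive page-size change for a universal table space could look like this (object names are hypothetical) -- the ALTER is a pending change, materialized by the subsequent online REORG:

ALTER TABLESPACE DB1.TS1 BUFFERPOOL BP16K0;

REORG TABLESPACE DB1.TS1 SHRLEVEL CHANGE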

In conclusion...

Put some thought into your table space and index page-size decisions, and know that with universal table spaces (and indexes defined thereon), you can pretty easily change your mind down the road.

Monday, October 20, 2014

DB2 for z/OS Stored Procedures: Native SQL, or Java?

Recently, a mainframe DB2 DBA put a question to me: when should DB2 for z/OS native SQL procedures be used versus Java stored procedures, and vice versa?

Why he asked: the DBA worked with a group of developers who, collectively, could code both of these DB2 stored procedure types. Within this group, individuals tended to go with what they were used to. In other words, the decision to code a native SQL or a Java stored procedure tended to be based more on momentum (e.g., "I used a Java stored procedure last time, so I'll go that route again this time") than on consideration of factors that might make one type of stored procedure more appropriate than the other for a given situation. The DBA wanted to provide the developers with guidelines that would help them to make more informed decisions regarding the use of native SQL versus Java stored procedures. In response to this request, I communicated my take on the matter, and via this blog post I'll share those thoughts with you.

I'll start out with this statement: if I needed to create a DB2 for z/OS stored procedure, I'd go with a native SQL procedure unless I needed to do something that couldn't be done (or couldn't be done in a practical sense) with that type of stored procedure. If I did have a requirement to use an external stored procedure (i.e., one with an external-to-DB2 executable that would run in a stored procedure address space), I'd look at Java as a good choice for the associated programming language, unless I needed to do something that would best be done with a COBOL stored procedure program (more on this momentarily).

Why my "native-first" mind set? Three reasons:


1) Native SQL procedures can be developed (and maintained) by lots of different people. You don't have to be proficient in a programming language such as Java (or COBOL) to create a native SQL procedure. If you know SQL, you can quickly pick up the "control" SQL statements (referring to logic flow control -- statements such as ITERATE, LOOP, GOTO, etc.) that enable coding of a SQL PL routine such as a native SQL procedure ("SQL PL" is short for SQL procedure language, the language used in a DB2 environment to write stored procedures and user-defined functions using only SQL). On top of SQL PL being a pretty easy means of writing DB2 stored procedures, the free and downloadable IBM Data Studio product provides a user-friendly GUI tool for native SQL procedure development and debugging. (A small SQL PL example appears after point 3, below.)

2) Native SQL procedures, when called by DRDA requesters (i.e., through DDF), are zIIP-offloadable. More specifically, zIIP-offloadable to the tune of about 60%. That's because they run under the caller's task and a DRDA caller's task is an enclave SRB in the DDF address space. A Java stored procedure, like any external stored procedure, will always run under its own TCB in a stored procedure address space. Yes, execution of that stored procedure's Java code will be zAAP- or zIIP-eligible (the latter via what's called zAAP-on-zIIP, which you get when zAAP-eligible work runs on a system that has zIIP engines but no zAAP engines), but execution of the SQL statements issued by the Java stored procedure will not be zIIP-eligible (you'd actually get a little zIIP offload for these SQL statements, but not what you'd get with a native SQL procedure).

3) Native SQL procedures are where you're likely to see the bulk of DB2 for z/OS stored procedure functionality enhancements. Example: with DB2 10, the XML data type can be used for a stored procedure input or output parameter, but only for a native SQL procedure. Another example: DB2 11 introduced the autonomous stored procedure (it has a separate unit of work from that of the calling process, so that a data change made by an autonomous procedure will persist even if the calling transaction subsequently fails and is rolled back by DB2). Only a native SQL procedure can be autonomous. Yet another example: DB2 11 delivered support for array-type input and output parameters for a stored procedure, but only if it's a native SQL procedure.
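
Referring back to point 1, here's an illustration of how approachable SQL PL is -- a minimal native SQL procedure (all names are made up):

CREATE PROCEDURE ADJUST_BALANCE
  (IN P_ACCT_NO CHAR(10), IN P_AMOUNT DECIMAL(11,2))
  LANGUAGE SQL
BEGIN
  -- simple flow control plus a data-changing statement
  IF P_AMOUNT <> 0 THEN
    UPDATE ACCOUNTS
      SET BALANCE = BALANCE + P_AMOUNT
      WHERE ACCT_NO = P_ACCT_NO;
  END IF;
END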

When might I use an external stored procedure (such as a Java stored procedure)? I'd go this route if I needed to do something that couldn't be done (either at all, or in a practical sense) with a native SQL procedure. Maybe there is a need for a level of sophistication in the logic of the stored procedure that I can't implement with SQL PL (though in that case I might ask, "How sophisticated does the logic in a data-layer program have to be?"). I might go with an external stored procedure (such as a Java stored procedure) if there were a need to access resources (data or services) outside of DB2 (a native SQL procedure can issue SQL statements, period). Note that even in that case, there would be capabilities available in a native SQL procedure that might provide the functionality required. Keep in mind that native SQL procedures can utilize DB2 functions, which include things like MQRECEIVE and MQSEND if there is a need to interact with WebSphere MQ, and SOAPHTTPNV if I need to access a Web service. I might take the Java path if I wanted to use the same (or nearly the same) Java stored procedure program in a DB2 for z/OS and a non-DB2 relational database environment -- a Java stored procedure likely wouldn't have to be much different in the two environments, whereas a SQL PL routine created for DB2 would have to be re-created in the SQL programming language of a different relational database server in order to function properly in that non-DB2 environment (a native SQL procedure provides portability within the DB2 family of relational database systems).


If I wanted to use an external stored procedure, Java would be a fine choice for the programming language. There was a time, not so long ago, when I was not a fan of Java code running on mainframe servers, but now I'm a proponent of Java in a z/OS environment, as I noted in an entry recently posted to this blog. As pointed out in said blog entry, DB2 11 for z/OS delivered some important enhancements that boost scalability and CPU efficiency for Java stored procedures.

Are there circumstances that would lead me to choose COBOL instead of Java for an external stored procedure? Yes. One scenario that comes to my mind involves invocation of an existing COBOL subroutine. That could be done very efficiently via a COBOL CALL from a COBOL stored procedure (I wrote of a real-world example of this scenario in a blog entry I posted this past spring). I might also lean towards COBOL over Java if, in my organization, there were COBOL-knowledgeable developers but not Java-knowledgeable developers available at the time I needed a DB2 external stored procedure program to be coded.

The bottom-line message is this: SQL PL (for native SQL procedures) and Java (especially in a DB2 11 for z/OS system) are both very viable choices when it comes to development of DB2 stored procedures. Rather than taking a "what we're used to" approach to choosing one versus the other, consider the relative strengths of each language in light of the objectives you have for a given stored procedure. A thoughtful and informed choice is likely to be the right one for you.

Tuesday, September 30, 2014

DB2 for z/OS: DDF, Stored Procedures, and SET CURRENT PACKAGESET

Recently, I was talking with people from a company's IT department about modernizing their mainframe application architecture. Among other things, we discussed increasing the company's use of stored procedures for DB2 for z/OS data access. That prompted some questions around the use of the DB2 SQL statement SET CURRENT PACKAGESET in stored procedures called by DRDA requesters (these being calls that would get to DB2 via the distributed data facility, or DDF). Why did this issue come up? Because the organization's DB2 database had been divided into several schemas (sets of tables) along customer lines, with information pertaining to one set of customers being stored in tables in schema A (e.g., in a table called A.ORDERS), and data for another customer set going into schema B tables (such as B.ORDERS), and data for yet another customer set going into schema C tables, etc. This is not an unusual approach in environments characterized by very large amounts of data that can be relatively easily separated according to some identifier such as customer or account number. Such a mode of data organization can deliver some scalability and availability benefits, and if the number of database "instances" (referring to the different schemas, distinguished by high-level qualifier) doesn't get to be too large (I would want it to be in the single digits), the cost in terms of operational complexity is generally not too high. Does the presence of "n" database instances -- same tables, structurally, in each instance, but with different data -- require the coding of "n" versions of each database-accessing program? No. Programs are coded using unqualified table names in SQL statements, and then the packages associated with the programs are bound into "n" collections, one for each database instance, with the appropriate high-level qualifier provided via the QUALIFIER option of the BIND PACKAGE command. At execution time, application programs can then very easily and efficiently navigate between database instances by pointing to an instance's package collection using the SQL statement SET CURRENT PACKAGESET.
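
In concrete terms (with a made-up program name, ORDERPGM), the scheme looks like this: one package, bound into one collection per database instance, with run-time steering via the special register:

BIND PACKAGE(COLL_A) MEMBER(ORDERPGM) QUALIFIER(A) ACTION(REPLACE)
BIND PACKAGE(COLL_B) MEMBER(ORDERPGM) QUALIFIER(B) ACTION(REPLACE)

-- then, in the program, before issuing SQL against the schema B tables:
SET CURRENT PACKAGESET = 'COLL_B';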

Nothing new here -- this kind of thing has been done for years at plenty of sites. The concern raised during the aforementioned meeting had to do with the use of SET CURRENT PACKAGESET in DB2 for z/OS stored procedures called through DDF. One of the DBAs with the company to which I've referred expressed doubt that SET CURRENT PACKAGESET could be used in this situation. The reason for this DBA's skepticism? He was pretty sure that the value used in a SET CURRENT PACKAGESET statement had to be a collection name in a plan's package list (often called the PKLIST). For DDF-using applications, the plan name always defaults to DISTSERV, and that plan doesn't have a package list; therefore (the DBA believed), SET CURRENT PACKAGESET, when issued from a stored procedure called by a DRDA requester, will fail.

This DBA even had what appeared to be solid evidence backing his contention: he created a stored procedure (it happened to be of the native SQL variety -- more on that to come), called ST_PROC_X, through which a SET CURRENT PACKAGESET = 'COLL_A' statement was issued, and when that stored procedure was called through DDF at DB2 location DB2LOCN, the SET CURRENT PACKAGESET failed with a -805 SQL error code (I've of course changed location, collection, and other names from the actual values used at the DBA's site):

DBRM OR PACKAGE NAME DB2LOCN.COLL_A.ST_PROC_X.0123456789ABCDEF NOT FOUND IN PLAN DISTSERV. REASON 02.

There, see? "Not found in plan DISTSERV." That means the program's DBRM wasn't bound directly into the DISTSERV plan, nor was it in a collection in DISTSERV's package list. And of course that must be the case, because a DBRM can't be bound directly into a plan in a DB2 10 (or later) environment (and even before DB2 10 you couldn't bind any DBRM's into plan DISTSERV), and you can't define any package collections for DISTSERV; therefore, SET CURRENT PACKAGESET is doomed to fail for DRDA requesters. Case closed, right?

Umm, no. I will say, however, that the DBA's conclusion was very understandable, given the text that accompanied the -805 SQL error code. This is one of those cases in which the words associated with a DB2 error code are a bit misleading. The error code text suggests that a -805 is always caused by a DBRM not being found in a plan, or a package not being found in a collection named in a plan's PKLIST. In fact, that is one -- not the only -- cause of a -805. If you look up the -805 in the DB2 for z/OS Codes manual, you'll see another reason for the error that's possible when the '02' reason code is indicated:

"The CURRENT PACKAGESET special register was not set correctly by the application."

I figured that was the problem, and might be so because the stored procedure in question was a native SQL procedure. How's that? Well, an external stored procedure's package can be bound into any collection via the COLLID option of the CREATE PROCEDURE statement. For a native SQL procedure, COLLID is not an option for the CREATE PROCEDURE statement. Where, then, does a native SQL procedure's package go? It goes into a collection that has the same name as the native SQL procedure's schema. The native SQL procedure ST_PROC_X referenced above was created with a high-level qualifier of HLQ_X (again, not the real identifier), so its package was bound into collection HLQ_X. The stored procedure issued SET CURRENT PACKAGESET = 'COLL_A'. That would cause DB2 to look in collection COLL_A for package ST_PROC_X (same name as the stored procedure) to execute subsequent SQL statements issued by the stored procedure. Package ST_PROC_X was not in collection COLL_A, and that's why the -805 was issued, right?

Wrong on my part. The DBA told me that the ST_PROC_X package had, in fact, been bound into each of the database instance-aligned collections. OK, how had that been accomplished? Via BIND PACKAGE with the DEPLOY option, said the DBA.

And, BINGO, that was the problem. See, the DEPLOY option of BIND PACKAGE is typically used to migrate a native SQL procedure from one DB2 environment to another (e.g., from a test to a production environment). To get a native SQL procedure's package into various collections in the same DB2 environment as the package's "root" collection (i.e., the collection that has the same name as the native SQL procedure's schema), one should use BIND PACKAGE with the COPY option, not the DEPLOY option. When BIND PACKAGE was executed with DEPLOY by the DBA with whom I was working, new consistency tokens for the packages were generated by DB2; thus, the search for package ST_PROC_X in collection COLL_A failed at the consistency token level (the fourth-level qualifier of the fully qualified package name: location.collection.package.token). The DBA freed the packages that had been added to the database instance-aligned collections via BIND PACKAGE with DEPLOY, then copied the ST_PROC_X package from its root collection to the database instance-aligned collections using BIND PACKAGE with COPY, then called the stored procedure, and presto: everything worked like a champ.
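
In other words, sticking with the changed names used here (and supposing a second instance-aligned collection, COLL_B), the package replication should have been done along these lines:

BIND PACKAGE(COLL_A) COPY(HLQ_X.ST_PROC_X) ACTION(REPLACE)
BIND PACKAGE(COLL_B) COPY(HLQ_X.ST_PROC_X) ACTION(REPLACE)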

So, SET CURRENT PACKAGESET is absolutely valid in a stored procedure (native or external) that is called by a DRDA requester. Just make sure that the stored procedure's package is present in any collection named via SET CURRENT PACKAGESET, and that the package got into that collection (or collections) the right way -- and that means via BIND PACKAGE with COPY when you're replicating a package across several collections within a given DB2 environment.

Friday, September 26, 2014

DB2 for z/OS: Avoiding zIIP Engine Contention Issues

In an entry posted a few months ago to this blog, I touched on several matters pertaining to mainframe zIIP engines and the use of these processors by DB2 for z/OS. Based on recent experience, I feel that it's important to further highlight a particular zIIP-related issue: zIIP engine contention. Through this post, I want to explain what zIIP contention is, why you want to avoid it, how you can spot it, and what you can do about it.

As, probably, most mainframers know by now, zIIPs (System z Integrated Information Processors) are "specialty engines" aimed at reducing the cost of mainframe computing. They do this in two ways: 1) they cost less than general-purpose engines, and 2) they do not factor into the pricing of mainframe software. Over time, as IBM has made more of the work done on System z servers zIIP-eligible, and as existing zIIP-eligible workloads have grown (particularly those associated with applications that access DB2 for z/OS data through network connections -- commonly referred to as DDF or DRDA workloads), zIIP engine utilization rates have climbed at many sites. That's a good thing, but only up to a certain point. If zIIP engines become too busy, DB2 performance can be negatively impacted. I'll first expand on that point.

What zIIP contention is, and why you want to avoid it

Consider a z/OS LPAR configured with one or more zIIPs. If a zIIP engine is not available when an item of zIIP-eligible work in the system is ready for dispatch (because the zIIPs are busy at that time with other work), that piece of work will be dispatched to a general-purpose engine. There is a slight delay involved in dispatching zIIP-eligible work to a general-purpose engine. That doesn't matter much if only a small amount of zIIP-eligible work ends up being redirected to general-purpose engines. If, however, the degree of what I call "zIIP spill-over" reaches a too-high level, application performance degradation can result.

Indicators of performance degradation caused by zIIP contention

zIIPs have been around for quite a few years now (they were introduced in 2006). Why is the issue of zIIP contention only now coming to the fore? Two reasons: 1) it took a while for zIIP utilization at many sites to reach a level at which contention can occur, and 2) the type of work that can utilize zIIP resources recently changed in an important way. This latter point refers to the introduction, with DB2 10 for z/OS, of zIIP eligibility for prefetch read and database write I/Os. Here's why that's important: it can be a pretty good chunk of work (often accounting for the large majority of CPU time charged to the DB2 database services address space, aka DBM1), and it's a type of system task that is very important for the performance of some DB2 applications. If zIIP contention causes delays in the dispatching of prefetch read tasks, run times for prefetch-intensive workloads (such as batch jobs and data warehousing/business intelligence applications) can become elongated. If that were to happen, you might see the effect as a significant increase in a class 3 wait time (so called because the information comes from records generated when DB2 accounting trace class 3 is active) that is labeled as suspension due to "other read I/O" in an IBM OMEGAMON for DB2 accounting long report (the field might be somewhat differently labeled in accounting reports generated by other vendors' DB2 monitor products).

Prefetch slowdown isn't the only thing to look out for. Lately, I've seen data that suggests a link between higher levels of "zIIP spill-over" and higher-than-desired in-DB2 not-accounted-for time for DB2-accessing applications. Not-accounted-for time is a DB2 class 2 time (it's captured in DB2 accounting trace class 2 records) that is probably calculated for you in an accounting long report generated by your DB2 monitor. If your monitor does not calculate this number for you, you can easily figure it yourself: just take average in-DB2 (i.e., class 2) elapsed time for an application, and subtract from that the average in-DB2 CPU time (keeping in mind that this is two numbers: in-DB2 general-purpose CPU time and in-DB2 specialty engine CPU time). Next, subtract out total class 3 suspend time. What's left is in-DB2 not-accounted-for time. Generally speaking, a not-accounted-for time that is less than 10% of total in-DB2 elapsed time for a higher-priority transactional workload (such as many DDF and CICS workloads) does not concern me (a higher level of not-accounted-for time for a batch workload is usually not such a big deal). In times past, a higher-than-desired in-DB2 not-accounted-for time (i.e., more than 10% of in-DB2 elapsed time for a higher-priority transactional workload) was usually an indication that a z/OS LPAR's general-purpose engines were over-committed (as might be the case if they are routinely running at utilization levels of 95% or more). Nowadays, it appears that higher-than-desired in-DB2 not-accounted-for time might also be caused by elevated levels of zIIP contention.
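
To recap that arithmetic in formula form, using the class 2 and class 3 numbers from an accounting long report:

in-DB2 not-accounted-for time = (class 2 elapsed time)
                              - (class 2 CPU time, general-purpose and specialty engine)
                              - (total class 3 suspend time)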

An early-warning indicator

Obviously, you'd like to spot and head off a potential zIIP contention-related performance problem. How could you do that? First and foremost, I'd keep an eye on what I call the zIIP spill-over rate. That's the percentage of zIIP-eligible work that ends up running on general-purpose engines (as mentioned previously, this spill-over occurs when zIIP engines are busy when zIIP-eligible work is ready for dispatch). Depending on the DB2 monitor that you use, this can be easily done. In an IBM OMEGAMON for DB2 accounting long report, you'll see these three class 1 CPU times for a workload (I've condensed them here -- they wouldn't appear one on top of the other in an actual report):

AVERAGE       APPL(CL.1)
------------  ----------
CP CPU TIME     0.012106
SECP CPU        0.000166  (A)
SE CPU TIME     0.018054  (B)


The top-most number is the average general-purpose CPU time for the workload. The bottom number is the average zIIP engine CPU time. The middle number is the average amount of general-purpose CPU time consumed in doing zIIP-eligible work. If you label the second and third numbers A and B as I did above, the zIIP spill-over rate is calculated as follows:

A / (A + B)

The figures above show a zIIP spill-over rate of less than 1%. Such a small non-zero value would not cause me concern. What would cause me some concern is a rate that is over 5%. With that much zIIP-eligible work running on general-purpose engines, you could see the beginnings of a negative impact on application performance.

So, at what level of zIIP engine utilization would you see a potentially performance-impacting zIIP spill-over rate? That depends in large part on the number of zIIP engines configured in a z/OS LPAR. As a very rough rule of thumb, I'd say that for an LPAR with two or three zIIP engines, my preference would be to keep the zIIP engine utilization rate below 60%. You could probably go somewhat higher than that for an LPAR with a larger number of zIIP engines. If an LPAR has only a single zIIP engine, I'd want the utilization rate of the zIIP to be 50% or less.

Now, some of you may have heard that zIIP contention can occur at a zIIP utilization rate of about 30%. What that may mean is that zIIP spill-over has been observed on systems with zIIP utilization rates as low as 30% or so. First, I'm thinking that such observations may be associated with systems that have a single zIIP engine. Second, keep in mind that a small but non-zero zIIP spill-over rate may not materially affect application performance (as stated previously, I do not see a zIIP spill-over rate of less than 5% as being a cause for concern).

Doing something about zIIP contention

If you observe a higher-than-desired level of zIIP spill-over in one of your z/OS LPARs, what can you do about it? One obvious response would be to add zIIP capacity to the system (keep in mind that EC12 and BC12 servers can have up to two zIIPs for every general-purpose engine). If that is not feasible in the near term, see if you can take some steps to reduce zIIP utilization. An example of such a step would be a significant enlargement of one or more DB2 buffer pools (assuming that you have sufficient memory to back larger pools). This could reduce zIIP utilization by reducing synchronous reads associated with a DDF workload, and by reducing prefetch read activity (something that became zIIP-relevant starting with DB2 10). Using high-performance DBATs could reduce the overall CPU cost (and thus the zIIP consumption) of DDF-connected applications.

Summing it up

The moral of this story is pretty simple: do NOT think that you can run zIIPs as "hot" as you can general-purpose mainframe engines without encountering application performance issues. In doing capacity planning, anticipate growth in zIIP-eligible work on your z/OS systems (not only because of overall workload growth, but also because new types of work become zIIP-eligible through new releases -- and sometimes new maintenance levels -- of subsystem and operating system software), and aim to keep zIIP capacity ahead of zIIP demand so as to keep the rate of zIIP spill-over low. Calculate the zIIP spill-over rate for your production DB2 for z/OS systems, and track that number -- make it one of your key metrics for monitoring the performance of mainframe DB2 data servers.

Driving up zIIP utilization can be an effective way of enhancing the price performance of a z/OS system -- just don't take it so far as to be counterproductive.