Wednesday, August 25, 2010

DB2 9 for z/OS: an Important Enhancement of Partition-Level Online REORG Functionality

For a number of organizations, a key issue related to DB2 for z/OS Version 8 to Version 9 migration has been the elimination, in a DB2 9 environment, of the BUILD2 phase of partition-level online REORG. The concern with the elimination of BUILD2 had to do with the attendant restriction on the ability to execute online REORGs of multiple non-contiguous partitions of a tablespace in a parallel fashion. The good news that I want to highlight here is the removal of this restriction via the recently released fix for APAR PK87762. In the remainder of this post I'll explain what the BUILD2 phase of online REORG is (or was, from a DB2 9 perspective), the benefit of eliminating this utility phase, and the challenge that BUILD2 elimination posed for some DB2 for z/OS users prior to the availability of the aforementioned fix (the PTF is UK59095).

With Version 8 (and previous versions) of DB2 for z/OS, BUILD2 is the name of one of the phases of an online REORG TABLESPACE executed at the partition level (versus an online REORG of a partitioned tablespace in its entirety). During an online REORG of a tablespace partition, rows in the shadow data set for the partition end up in different positions than they occupied in the partition's active data set (this, of course, to restore clustering sequence for the partition). If there are any non-partitioned indexes on the tablespace (these are also known as NPIs), the entries in those NPIs that point to rows in the partition being reorganized have to be updated to reflect the location of the rows in the shadow data set (which will become the active data set when the REORG job completes). BUILD2 is the utility phase in which these index entry corrections are made, and during that phase the logical partition of the NPI(s) that corresponds to the tablespace partition being reorganized is unavailable to application programs. This means that INSERT and DELETE operations targeting the partition being reorganized cannot run during BUILD2 (nor can UPDATEs that change the value of an NPI key column, nor SELECTs that would use the NPI to access data in the partition).

If BUILD2 completes in, say, 60 seconds or less, this may not be such a big deal. Even if BUILD2 runs for several minutes because of the number of rows in the partition being reorganized, the situation can be dealt with by causing BUILD2 to execute during a maintenance window. People do this by submitting the partition-level online REORG with a specification of MAXRO DEFER and LONGLOG CONTINUE, to put off execution of the REORG's final phases (including BUILD2). Upon entering the maintenance window, an -ALTER UTILITY command is issued to change MAXRO to a smallish number of seconds (e.g., 300), and REORG soon performs the final "catch-up" application of logged data changes to the shadow data sets, switches the shadow and active data sets, and completes BUILD2 processing.
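
By way of illustration, the utility control statement for such a job might look something like this (the database, tablespace, and partition identifiers here are just placeholders):

  REORG TABLESPACE DBTEST.TSTEST PART 7
    SHRLEVEL CHANGE
    MAXRO DEFER LONGLOG CONTINUE

Then, when the maintenance window opens, the command to get things moving again would be along these lines (with REORGP7 standing in for whatever utility ID was assigned to the REORG job):

  -ALTER UTILITY(REORGP7) REORG MAXRO 300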

What if you very rarely have a maintenance window? What if the partition being reorganized holds over 100 million rows, and BUILD2 runs too long to complete during a (for example) one-hour maintenance window? What then? For some DB2 Version 8 users, this meant that online reorganization of partitioned tablespaces could only be run at the tablespace level -- kind of a bummer, as one of the key benefits of tablespace partitioning is the ability to run utilities at the partition level.

So, BUILD2 elimination in the DB2 9 environment is a good thing, but the WAY in which BUILD2 was eliminated created, for some organizations, a new challenge. When a partition-level online REORG is run in a DB2 9 for z/OS system, any NPIs on the partitioned table will be reorganized in their entirety. That means that they have their own shadow data sets to which logged changes to the corresponding active data sets are applied. This makes BUILD2 unnecessary, but it also means that only one partition-level online REORG job can be run against the tablespace at any one time. That posed a problem for users who liked to get parallel reorganization of different partitions of a tablespace by running multiple REORG jobs, each targeting a different partition, at the same time. Suppose, for example, that you want to run parallel online REORGs of partitions 2, 9, and 22 of a partitioned tablespace. In a DB2 Version 8 system, you'd run three separate online REORG jobs concurrently: one for each partition to be reorganized. In a DB2 9 environment, this approach does not work because the first of these jobs to get ahold of the tablespace's NPIs (to REORG them in their entirety) would block out the other jobs. You could not get around this restriction by submitting one online REORG job with a specification of PART 2, 9, 22, because that syntax was invalid. You could specify PART 2:22, and that would result in parallel online REORGs of partitions 2, 9, and 22, but also of all the other tablespace partitions in that range. There is a way, using an undocumented syntax, to run a REPAIR utility job that will flag certain partitions of a tablespace (e.g., 2, 9, and 22) as being in advisory REORG-pending status, after which an online REORG of the tablespace with the SCOPE PENDING option will result in the desired behavior (REORG of only partitions 2, 9, and 22, in parallel); however, many DB2 users are understandably reluctant to go with that approach, and it is not one that I'd recommend.

At the IDUG North American Conference this past May, one of the IBM DB2 for z/OS developers mentioned that DB2 10 (now in beta release) would enable parallel online reorganization of non-contiguous partitions of a tablespace (i.e., partitions that aren't next to each other with regard to their sequence in a partitioned tablespace) with a single execution of REORG. That was great news for the future. What's great news NOW is the fix for the DB2 9 APAR I mentioned, PK87762. With this fix applied, DB2 9 REORG will now accept this syntax: PART(2, 9, 22). Not only that, but you can specify both individual partitions and a partition range in the same PART clause, as in PART(2, 9, 20:22). And there's more: now, when a LISTDEF used in an online REORG job references multiple individual partitions belonging to one tablespace, DB2 will execute REORG once for all of the tablespace's partitions referenced in the LISTDEF, versus running one REORG job for each partition in a serial fashion.
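
To put that in concrete terms, the control statement syntax I'm talking about would look something like this (database, tablespace, and list names being placeholders):

  REORG TABLESPACE DBTEST.TSTEST PART(2, 9, 22) SHRLEVEL CHANGE

  REORG TABLESPACE DBTEST.TSTEST PART(2, 9, 20:22) SHRLEVEL CHANGE

And a LISTDEF-driven equivalent might be along these lines:

  LISTDEF REORGLST
    INCLUDE TABLESPACE DBTEST.TSTEST PARTLEVEL(2)
    INCLUDE TABLESPACE DBTEST.TSTEST PARTLEVEL(9)
    INCLUDE TABLESPACE DBTEST.TSTEST PARTLEVEL(22)
  REORG TABLESPACE LIST REORGLST SHRLEVEL CHANGE

With the fix applied, DB2 executes one REORG covering all of the listed partitions, rather than one serial REORG per partition.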

BUILD2 made partition-level online REORG not online-enough for some DB2 users, and I was pleased to see it go away with DB2 9. The (now former) inability to get parallel online REORGs of multiple non-contiguous partitions of a tablespace in a DB2 9 environment was seen by more than a few folks as being an irritatingly high price to pay for BUILD2 elimination. That problem has now been addressed. So, if you're planning a migration to DB2 9 and this issue has been a concern for you, cross it off your list. If you're already on DB2 9 and you've been pining for the days of unrestricted parallel partition-level online REORGs, get UK59095 on your system and pine no more.

Thursday, August 19, 2010

DB2 for z/OS Query Parallelism: It's About More Than Response Time

Query CPU parallelism -- the ability to split a query into multiple parts that can be executed in parallel -- made its debut as a mainframe DB2 feature with Version 4 of the product, back in the mid-1990s. To this day, lots of mainframers think of query parallelism as they did when it was brand new: it's a nice way to reduce the elapsed time of what would otherwise be long-running queries in a data warehouse environment. That view is on the mark, but it's overly narrow. I'm here to tell you that query parallelism can be a means of significantly reducing the cost of computing for an operational DB2 workload (by which I mean one associated with run-the-business versus decision-support applications).

What I'm talking about became possible when IBM introduced zIIP processors for its System z servers. zIIPs -- more formally known as System z Integrated Information Processors -- are "specialty" engines that can be used in the execution of certain types of work on a mainframe system. System z users like zIIPs because they lower the cost of computing on the mainframe platform in a double-barreled way: first, they cost less than general-purpose mainframe processors; second, they do not factor into the pricing of software that runs on System z servers (add a general-purpose engine, and your software costs will go up; add a zIIP, and they won't).

At lots of mainframe sites, the utilization of zIIP engines (if they are configured on the System z servers) is driven almost entirely by client-server DB2 work (i.e., by SQL statements that are issued from off-mainframe application servers and routed to the DB2 for z/OS data server through the DB2 distributed data facility, aka DDF). These SQL statements execute under what are called enclave SRBs (SRBs -- short for service request blocks -- are a type of control block that can represent a dispatchable piece of work in a z/OS system). Enclave SRBs, in turn, can be dispatched to zIIP engines (and the zIIP-friendliness of DB2 client-server computing becomes even more pronounced in a DB2 9 environment, thanks to native SQL procedures, which can utilize zIIP resources when invoked through the DB2 DDF).
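
(In case you haven't worked with one, a native SQL procedure is a stored procedure whose body is written entirely in SQL and which executes within the DB2 database services address space -- no external load module involved. The names and logic below are made up, just to show the shape of the thing:

  CREATE PROCEDURE PRODCTL.GET_BALANCE
    (IN  P_ACCT_ID INTEGER,
     OUT P_BALANCE DECIMAL(11,2))
    LANGUAGE SQL
  BEGIN
    SELECT ACCT_BALANCE INTO P_BALANCE
      FROM PRODCTL.ACCOUNT
     WHERE ACCT_ID = P_ACCT_ID;
  END

Call a procedure like that through DDF, and a portion of its processing becomes zIIP-eligible.)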

This is all well and good, but at more than a few sites the client-server DB2 for z/OS workload goes way down when the bulk of the online users go home at the end of the working day. What's left to do at night is a sometimes huge amount of batch work. These jobs may drive the general-purpose engines on a mainframe server to a utilization rate of 90% or more for hours at a time, while zIIP engines on the same box sit there at two or three percent busy. Wouldn't it be great if those zIIPs, with their large reserves of processing capacity, could be put to work running some of the nightly batch programs? WELL THEY CAN, and your ticket for doing that is -- you guessed it -- query parallelism.

You see, when DB2 splits a query, the different parts of the query execute under enclave SRBs, and that means that they can use zIIP MIPS. Queries in batch jobs can be good candidates for parallelization because they often scan lots of data. Making queries in a batch job eligible for parallelization is easy: assuming we're talking static SQL, you just rebind the program's package with a specification of DEGREE(ANY) versus the default of DEGREE(1) -- there's no need to change program code (for dynamic queries, parallelization is enabled by setting the value of the DB2 special register CURRENT DEGREE to 'ANY'). And, parallelization doesn't change the query's result: you'll get the same data back, at a reduced cost because lower-cost zIIP engines helped get the work done.
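
To put some syntax behind that (the collection and package names here are made up), enabling parallelism for a static-SQL batch program is as simple as:

  REBIND PACKAGE(BATCHCOLL.PGMABC) DEGREE(ANY)

And for dynamic SQL, the equivalent is a statement like this, issued before the query:

  SET CURRENT DEGREE = 'ANY';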

Want to get started with putting zIIPs to work with your nightly batch load? Here's what I'd do:
  1. Identify your most expensive batch programs with respect to in-DB2 CPU time (also known as "class 2" CPU time). Tools from IBM and other vendors can help you here (an example of an IBM tool that could be used for this purpose is IBM Tivoli OMEGAMON XE for DB2 Performance Monitor on z/OS -- check out the accounting report and the TOP subcommand).
  2. Pick one of these expensive programs, and run a test. First, see if DB2 would parallelize any of the program's queries if DEGREE(ANY) were in effect. One really easy way to do this is to use the free and downloadable IBM Optimization Service Center for DB2 for z/OS -- for a given query to be analyzed, you click on "Specify Query Context" and select ANY from the drop-down menu for "Current Degree" under "Application Environment." Keep this in mind: DB2 can parallelize parts of a query's execution even if no partitioned tables are accessed (while table partitioning does tend to be a driver of query parallelism, partitioning is not necessarily a prerequisite for parallelism).
  3. If the access plan information obtained in step 2 above shows that DB2 would parallelize one or more queries in the batch program, run a test in a test environment (ideally one with data volumes approximating those in production, but if not that, at least one with catalog statistics that closely match those in production): execute the program as-is, then rebind it with DEGREE(ANY) and run it again. Use your DB2 performance monitor to see how much of the job's CPU time has shifted to a zIIP engine, and how much the job's general-purpose CPU consumption has decreased. Also see for yourself that the program's query results were not changed by parallelization (for example, FETCH counts should be identical if the program processes all rows in the query result sets).
  4. Assuming a positive result from step 3 above, implement the change in production. Keep in mind that this may require modifying your standard program bind procedure, which may specify DEGREE(1) by default -- you don't want a subsequent rebind to inadvertently change DEGREE for the program's package back to 1.
  5. Repeat steps 2 through 4 with another of the identified high-cost batch programs.
A few other things to keep in mind:
  • In a DB2 subsystem that is not dedicated to a data warehouse workload, you may want to set the PARAMDEG parameter in the DB2 ZPARM module to a value other than the default of 0, which lets DB2 choose the degree of parallelization for a query (i.e., the number of pieces into which a given query can be split). A value in the low-to-mid single digits (e.g., 4 or 5) might be a good choice -- you don't want too large a number of split queries in the system when you're running a high-volume batch workload (see the sketch following this list).
  • Keep an eye on the utilization of space in the DB2 EDM pool -- a package bound with DEGREE(ANY) will be larger than the same package bound with DEGREE(1).
  • DON'T rebind ALL batch packages with DEGREE(ANY) in one fell swoop. Be systematic about this. Do a few at a time, and maybe pause when you get the level of zIIP engine utilization on the system where you want it to be.
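
One more note on PARAMDEG, mentioned in the first bullet above: the parameter is part of the DSN6SPRM macro, and it's typically changed by updating the DSNTIJUZ job that assembles your subsystem parameter (ZPARM) module and then rebuilding that module. Conceptually -- with all of the other parameters omitted here -- the relevant bit of that job would look something like this:

  DSN6SPRM PARAMDEG=4

(If I remember right, the corresponding field on the installation panels is MAX DEGREE.)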
If you have zIIPs on your mainframe(s), I hope you'll look into the parallelization possibilities to get more out of those lower-cost engines. If you don't have zIIPs because you think you have a workload that can't exploit them -- think again.

People who think of query parallelization only in terms of its potential to reduce query run times are missing another important benefit of the technology: it's a zIIP-utilizer that can save your organization money, with no need to change program code. Gotta like that.

Tuesday, August 10, 2010

Wow II - DB2 Data Sharing Comes to the Linux/System x Platform

Last year, when I was working as an independent DB2 consultant and trainer, I posted a blog entry on IBM's announcement of its DB2 pureScale technology. pureScale brought to DB2 for AIX, running on IBM's POWER6 and POWER7 servers, the advanced shared-data architecture technology that had delivered, for 15 years, unprecedented levels of availability and scalability for users of DB2 running on IBM System z servers. With pureScale, organizations could harness the processing resources of multiple DB2 for AIX servers sharing concurrent read/write access to one database in a clustered-server, shared-disk configuration. Though I had been around DB2 data sharing since its beta release in the mid-1990s, the pureScale announcement struck me as being really big news. "Wow" pretty much summed up my initial reaction, and that's how I opened the aforementioned blog entry.

Less than a year later, it's time for Wow II. A few days ago (August 5, to be precise), at an event in Beijing, IBM took the wraps off of DB2 pureScale for the Intel-based System x server platform running the SUSE Linux operating system (the supported servers are the System x3650 M3, the x3690 X5, and the x3850 X5). AIX and POWER are a great computing combination, delivering high availability, rock-solid reliability, and a particularly rich set of features and functions. That said, the combination of Linux and Intel-based servers continues to advance as a platform for enterprise data-serving. I remember thinking, as I considered the initial pureScale announcement (Wow I), that it would be great to see the technology extended to Linux and System x. What surprises me about the announcement of DB2 pureScale for System x is that it happened so soon after pureScale first made the scene. Yeah, I was hoping for this, but I thought I'd have to wait longer before seeing it.

Linux-on-Intel users, trust me on this one: you now have a data-serving solution that can take your business-critical database systems to heights of up-time and throughput that you may have thought were unattainable. Remember, this technology -- global lock and page cache structures housed in servers that act as shared-memory devices, forming the hub of a configuration that can scale up to dozens of nodes with remarkably low overhead -- implements the same shared-data architecture that has proven itself in IBM parallel sysplex mainframe clusters that have delivered the goods for years, in some of the most demanding high-availability and high-volume application environments in the world. Is it your only shared-data choice in the world of relational database and Linux/Intel servers? No. Is it your best choice? Absolutely. Hands-down. No doubt about it.

Check out DB2 pureScale on System x, and have some Wow time of your own.

Thursday, August 5, 2010

Back to Blue

Well, I've come full circle as a database professional. I was working for IBM in a branch office in Dallas when DB2 was announced back in 1983. A few years later, still in that branch office, I helped some organizations install DB2 Version 1 Release 2 (so, I wasn't quite on the ground floor, with respect to DB2 release experience). I wanted to do all DB2, all the time, and in 1990 that wish was granted when I became part of the mainframe DB2 national technical support team at IBM's Dallas Systems Center. The nineties was a really fun decade for me, work-wise, as I helped organizations all over the world succeed with DB2 -- being part of the roll-out of DB2 data sharing/parallel sysplex technology was especially gratifying.

Great as that situation was, I left IBM in 2000 to become part of the user segment of the DB2 community, joining Atlanta-based CheckFree Corporation (later acquired by Fiserv). As a DB2 user, I was eligible to serve on the Board of Directors of the International DB2 Users Group, and I was presented with that opportunity in 2001 (I was a member of the IDUG Board for seven years, and served as the organization's President in 2006-2007). I was also invited to be a regular columnist for IBM Data Management Magazine (formerly DB2 Magazine), and I'm in my 11th year of that enjoyable extracurricular activity.

In mid-2007, I again shifted career gears, this time hanging out my own shingle as President of Catterall Consulting. Soon thereafter, I started blogging about DB2, sharing lessons learned in my work as a DB2 consultant and trainer. After a few years, I realized that being part of a larger organization was a better fit for me than going it alone, and when I had the chance to rejoin IBM in a DB2-related capacity, I took it. It feels good to be back at Big Blue. I plan on being here for a long time to come.

The 121 entries in my Catterall Consulting blog are still viewable at http://catterallconsulting.blogspot.com/, and I will continue to respond to comments left by readers of those entries. New posts will appear in this blog. As before, most of what I write will be about DB2 for z/OS, but I'm a big fan of DB2 for Linux, UNIX, and Windows, as well, and I'll occasionally have something to say about some aspect or another of DB2 for LUW.

In entries to come, I hope to provide DB2 information that will be useful to readers. Thanks for stopping by. Please visit again.