Monday, May 14, 2012

Migrating from DB2 for z/OS V8? DO THIS

A lot of organizations have already migrated from DB2 for z/OS Version 8 to a more-current release of DB2 -- either DB2 9 or (in the case of skip-level migrations) DB2 10. For the most part, those migrations went quite smoothly; however, some companies making the move have encountered performance problems related to one thing: the need in DB2 9 and DB2 10 environments for more resources -- disk space, table spaces, buffer pool space -- for work file table spaces that have a 32KB page size. My colleagues and I have been beating this drum for some time, reminding people of the jump in 32KB-page sorting (versus 4KB-page sorting) to be expected when going from DB2 V8 to DB2 9 or DB2 10, but the message has apparently not gotten through to everyone: to this day, organizations get caught with too little in the way of 32KB-page sort resources when they migrate. So, whether you are in the process of migrating from DB2 V8 to a more-current DB2 release, are just now completing that process, or have been on DB2 9 or DB2 10 for a while and have been living with sub-optimal performance caused by inadequate 32KB-page sort resources (perhaps without being aware of the connection), I'm going to lay out the relevant facts and recommendations in this blog entry. Get this one thing right, and your transition from DB2 V8 will be a smoother one.

Fact: starting with Version 9, DB2 directs more -- MUCH more -- of its SQL sort work to work file table spaces that have 32KB (versus 4KB) pages. By "SQL sort work," I'm referring to sorts that are related to SQL statements (e.g., sorts associated with ORDER BY, GROUP BY, and DISTINCT specifications, and with merge scan join operations), as opposed to sorts associated with IBM DB2 utility execution (these are handled by DFSORT, or by DB2 Sort when that product is installed). In a DB2 9 or DB2 10 environment, if records to be sorted have a length of 100 bytes or more, a work file table space with 32KB pages will be used for the sort operation; otherwise, a 4KB-page table space will be used. This change was implemented for a reason: sorting of longer records is more efficient when a larger work file page size is used.
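As a hypothetical illustration (the table and column names here are assumptions, not from any real system), the record that DB2 sorts includes the ordering column(s) plus the other selected columns, so the page size used depends on their combined length:

```sql
-- Hypothetical table: the sort record for the ORDER BY below includes
-- the ordering column plus the other selected column. If the combined
-- record length is 100 bytes or more, DB2 9/10 does the sort in a
-- 32KB-page work file table space; under 100 bytes, a 4KB-page space.
SELECT CUST_NAME,       -- e.g., VARCHAR(60)
       CUST_ADDRESS     -- e.g., VARCHAR(100)
FROM   CUSTOMER
ORDER  BY CUST_NAME;
```

The point: even a query that selects just a handful of columns can cross the 100-byte threshold, which is why 32KB-page sort activity jumps so much after the migration.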

Fact: starting with DB2 9, work file table spaces are segmented, not simple, and the SEGSIZE (which can't be altered in a DB2 9 CM or DB2 10 CM8 environment) is 16. This can add to the demand for 32KB-page sort space in a DB2 9 or DB2 10 environment. Think about it: suppose that 200 rows with a length of 200 bytes apiece are to be sorted to satisfy an ORDER BY specification. A couple of 32KB pages should suffice, right? Wrong. With a work file table space SEGSIZE of 16, DB2 is going to use sixteen 32KB pages for this sort. Now, if you're running DB2 9 NFM or DB2 10 CM9 or NFM, don't go rushing to make the SEGSIZE of your work file table spaces something smaller than 16 -- not without reading further. I'll get to this.

Recommendation: have a lot of 32KB-page work file space in your DB2 9 or DB2 10 system. What's "a lot," you ask? That's going to vary from site to site, depending on the usage of the DB2 subsystem (a business intelligence workload might be more sort-intensive than an operational transaction or batch workload) and the volume of sort-driving SQL statement execution. Here's what I tell people: if, in your DB2 V8 environment, you had X amount of 4KB-page work file space, have at least X amount of 32KB-page sort work space in your DB2 9 or DB2 10 system, and consider having 2X amount of 32KB-page sort work space, to be on the safer side. You will likely be able to reduce -- perhaps considerably -- your 4KB-page sort work space once you're running with DB2 9 or DB2 10. At some sites where DB2 9 or DB2 10 is in use, the ratio of 32KB-page work file space to 4KB-page space is 3:1 or 4:1.

Recommendation: have a pretty good number of 32KB-page and 4KB-page work file table spaces with a secondary space allocation quantity of zero. You're probably aware that, starting with DB2 9 for z/OS, the TEMP database is no more. Things that had used TEMP DB space in DB2 V8 (declared global temporary tables and the temporary tables that hold result sets of static scrollable cursors) use space in the work file database in a DB2 9 or DB2 10 environment. It's a good idea to physically separate the two categories of temporary space usage -- SQL-related sorts, view materialization, triggers, etc. on the one hand, and declared global temporary tables and static scrollable cursors on the other -- within the DB2 9 or DB2 10 work file database. This can be accomplished by having some work file table spaces that are DB2-managed with a non-zero secondary space allocation quantity, and some that are either DB2-managed with a secondary space allocation quantity of zero, or user-managed. Depending on the value of the ZPARM parameter WFDBSEP (built into the DB2 10 base code, and provided a couple of years ago for DB2 9 via the fix for APAR PM02528), DB2 will favor (WFDBSEP=NO) or require (WFDBSEP=YES) the use of work file table spaces that are DB2-managed with a non-zero SECQTY value for declared global temporary tables and static scrollable cursors, and will favor or require the use of work file table spaces that are DB2-managed with SECQTY 0 -- or user-managed -- for SQL-related sorts, view materialization, etc. Note that while a user-managed work file table space will be favored for SQL sorts regardless of the secondary space allocation quantity, if you have user-managed work file table spaces these should have a secondary space allocation quantity of zero -- this is better for SQL sort performance.
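The two categories of work file table space can be set up along these lines (a sketch only -- the table space names and space quantities are assumptions, SYSDEFLT is shown as the storage group, and DSNDB07 is the work file database name in a non-data-sharing system):

```sql
-- DB2-managed with non-zero SECQTY: favored (WFDBSEP=NO) or required
-- (WFDBSEP=YES) for declared global temporary tables and static
-- scrollable cursors. PRIQTY/SECQTY are in KB.
CREATE TABLESPACE WRKDGT01 IN DSNDB07
  USING STOGROUP SYSDEFLT PRIQTY 1440000 SECQTY 144000
  BUFFERPOOL BP32K
  SEGSIZE 16;

-- DB2-managed with SECQTY 0: favored/required for SQL sorts, view
-- materialization, and the like.
CREATE TABLESPACE WRKSRT01 IN DSNDB07
  USING STOGROUP SYSDEFLT PRIQTY 1440000 SECQTY 0
  BUFFERPOOL BP32K
  SEGSIZE 16;
```

A matching pair of 4KB-page table spaces (with BUFFERPOOL BP0 or a dedicated 4K pool) would round out the picture.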

Obviously, having work file table spaces with a secondary space allocation quantity of zero will mean that these table spaces will not grow beyond "piece" one. That being the case, you should have a fair number of these, of both the 4KB-page and 32KB-page variety. There's not a one-size-fits-all quantity here, but you might want to start with 10-15 32KB-page work file table spaces (and a similar number of 4KB-page table spaces) with a secondary space allocation quantity of zero. How large should the primary space allocation quantity be for these table spaces? That might depend on the availability of disk space at your site. A primary space allocation quantity of just under 2 GB (as recommended in "info" APAR II14587) is good if your disk storage resources will accommodate that specification; otherwise, go for 400-500 cylinders (around 280-350 MB of DB2-usable space). Keep in mind that -- as previously mentioned -- you will likely be able to adjust the amount of 4KB-page work file space downward once you're on DB2 9 or DB2 10 long enough to be past the fallback-possibility stage of migration.

By the way, DB2 9 introduced a REXX exec, DSNTWFG, that can be used to create DB2-managed work file table spaces. The parameters that you supply for this REXX exec are explained in the text of APAR PM17336, the fix for which allowed DSNTWFG to be used for the creation of DB2-managed work file table spaces with SECQTY 0 (previously the REXX exec could only be used to create DB2-managed work file table spaces with a non-zero SECQTY).

Recommendation: start with the default value of NO for the WFDBSEP parameter of ZPARM. This setting establishes a "soft" physical separation of work file space use for SQL sorts (and view materialization and other things) and declared global temporary tables (and static scrollable cursors). With that set-up in place, if space in the work file table spaces that have a secondary quantity of zero is inadequate for the processing of SQL sorts, DB2 can use space in DB2-managed table spaces with a non-zero SECQTY for this purpose. This flexibility provides a "safety valve" (or expansion reservoir, depending on your preferred analogy) for SQL sort work in the subsystem. If you have WFDBSEP=NO and 1) you see your non-zero SECQTY work file table spaces going into multiple "pieces" (A001, A002, A003, etc. data sets), and 2) you don't have a lot of declared global temporary table usage that would drive this growth, chances are the expansion into multiple pieces is being driven by SQL sort "spillover" activity, occurring because space for that purpose in SECQTY 0 work file table spaces is under-allocated. In that case, consider increasing the amount of space in SECQTY 0 table spaces (of the 4KB-page or 32KB-page variety, as needed). If you don't see evidence of SQL sort "spillover" (or if you have heavy declared global temporary table usage and want to keep that from spilling over into work file table spaces used for SQL sorts), you can consider -- carefully -- going with WFDBSEP=YES, understanding that with this specification you'll get negative SQL codes and/or error messages if the space needed for declared temporary tables or SQL sorts is not available in work file table spaces with non-zero SECQTY or SECQTY 0, respectively.

Recommendation: dedicate 4K and 32K buffer pools to your 4KB-page and 32KB-page work file table spaces, and make them relatively large. "Large" is going to vary from one site to another, but given an adequate central storage resource I'd go with at least 10,000 buffers for the buffer pool used for 4KB-page work file table spaces (20,000-30,000 buffers would be better still) and at least 5000 buffers for the pool used for 32KB-page work file table spaces (10,000 buffers would be nice for this pool, if you have the storage). Check out a blog entry I posted a few weeks ago for other recommendations concerning buffer pools dedicated to work file table spaces.
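As a sketch (the pool names are assumptions -- BP7 and BP32K are just examples of a dedicated 4K pool and a 32K pool), the sizing could be done with the -ALTER BUFFERPOOL command:

```
-ALTER BUFFERPOOL(BP7)   VPSIZE(20000)
-ALTER BUFFERPOOL(BP32K) VPSIZE(10000)
```

With these illustrative values, the 4KB-page pool would occupy about 78 MB of storage (20,000 x 4KB) and the 32KB-page pool about 312 MB (10,000 x 32KB).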

Recommendation: consider -- carefully -- changing the SEGSIZE for your work file table spaces, once you are in DB2 9 NFM (or DB2 10 NFM, if you're doing a skip-level migration from DB2 V8). As I mentioned previously, the default SEGSIZE for these table spaces in a DB2 9 or DB2 10 environment is 16. IF your environment is characterized by a large number of sorts of relatively small result sets (e.g., result sets that could fit into a few 4KB or 32KB pages), a SEGSIZE of 8 (or maybe even 4) could relieve pressure on your work file database, as DB2 would not need to use considerably more pages than needed to hold small sets of rows to be sorted (and keep in mind here that SQL sorts are not just for things like ORDER BY specifications in SELECT statements -- they are also needed for operations such as merge scan joins, and these can involve sorts of many records, depending on the filtering accomplished via application of predicates to outer or inner table rows). On the other hand, IF you are seeing a lot of suspend time due to latch contention on space map pages of work file table spaces (DB2 traces can be used to check on this), a SEGSIZE of 32 (or even 64) could provide relief. If neither of these situations matches your environment, stay with SEGSIZE 16.
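If you conclude that a different SEGSIZE is warranted, the change is typically made by dropping and recreating the work file table space (a sketch only -- the table space name, storage group, and space quantities are assumptions, and you'd want the table space stopped or the subsystem quiet when you do this):

```sql
DROP TABLESPACE DSNDB07.WRK32K01;
COMMIT;

-- Recreate with a smaller SEGSIZE for a workload dominated by sorts
-- of small result sets
CREATE TABLESPACE WRK32K01 IN DSNDB07
  USING STOGROUP SYSDEFLT PRIQTY 2000000 SECQTY 0
  BUFFERPOOL BP32K
  SEGSIZE 8;
```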

Wrap-up: I hope that the information in this entry will be useful to you. Going from DB2 V8 to DB2 9 or DB2 10 will provide your organization with lots of great new functionality. Along the way, don't stub your toe on the work file database. Understand what's changed in this area, and make the needed adjustments to your configuration.


  1. Robert
    I cannot give you too many details, but we are using MicroStrategy BI against DB2 for z/OS via DB2 Connect.

    A couple of times we have had ad hoc BI queries cause all DB2 users (batch and online) to grind to a halt, due to what I am led to believe is maxing out on the space used by the SQL queries generated by MicroStrategy. The trouble with end-user ad hoc queries is that the efficiency of the SQL cannot be controlled.

    If this is something that has an obvious solution then you can respond through my personal email :-

    George Bullock

    1. Sorry about the delay in responding, George.

      If by "maxing out on the space used by the SQL queries" you mean exhausting work file space on the subsystem, there are several approaches you could take to deal with the situation. Among these are the following (listed in no particular order):

      a) Increase the work file space resource. You could add more work file table spaces in DSNDB07.

      b) It would appear that some of your ad-hoc queries are driving large-scale sorts. If you can identify some of the culprit queries, get the associated access path information (via EXPLAIN or Data Studio or a query analysis tool) and take a look at the parts of the query access plan that involve sort operations (could be associated with nested loop join and/or GROUP BY sort and/or ORDER BY sort, etc.). Then, see if you could take index actions that might eliminate these sorts. You might want to index one or more columns that are not currently indexed. Of course, doing this may not be feasible if the target table(s) already has quite a few indexes -- too many indexes on a table can make insert and delete and some update operations more expensive than you'd like.
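      For instance (hypothetical table and index names), if the EXPLAIN output for a query shows a sort for the ORDER BY, an index whose leading columns match the equal predicate and the requested ordering can let DB2 avoid the sort altogether:

```sql
-- Query whose access plan shows an ORDER BY sort
SELECT ORDER_ID, ORDER_TOTAL
FROM   ORDERS
WHERE  CUST_ID = ?
ORDER  BY ORDER_DATE;

-- With this index, DB2 can retrieve the qualifying rows already in
-- ORDER_DATE sequence, eliminating the work file sort
CREATE INDEX ORDERS_IX2
  ON ORDERS (CUST_ID, ORDER_DATE);
```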

      c) Consider creating an operational data store (ODS), if you don't already have one. Generally speaking, this is a copy of certain production tables, created in a separate DB2 subsystem that has its own work file resources. These tables could be updated nightly or, if needed, updated constantly via a "trickle-feed" mechanism (the latter usually takes the form of a data change replication tool). Direct the ad-hoc queries to the ODS database, and that will serve to insulate the "operational" DB2-accessing programs from the ad-hoc queries.

      d) This can end up being related to item "c" above. Use the DB2 resource limit facility (RLF) to automatically terminate a dynamic query on the production system that exceeds a specified CPU time threshold. How is this connected to the ODS idea? Well, if ad-hoc query issuers continue to target the production system after you've set up an ODS, and then complain when RLF terminates their long-running queries, explain that the queries can run longer on the ODS. They will then send queries to that system.
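      A sketch of the RLF set-up (the RLST table name and the limit value here are assumptions -- ASUTIME is expressed in service units, not CPU seconds, and the service-units-per-second conversion varies by processor model): a row with blank AUTHID and PLANNAME applies the limit broadly.

```sql
-- Cap dynamic SQL at an illustrative 900000 service units
INSERT INTO SYSIBM.DSNRLST01
       (AUTHID, PLANNAME, ASUTIME)
VALUES (' ', ' ', 900000);
```

      The limit table is then activated (or switched) with the -START RLIMIT ID=01 command.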

      e) Consider implementing a DB2 Analytics Accelerator, and have these long-running dynamic queries automatically redirected to that server (information on the DB2 Analytics Accelerator can be found at