Wednesday, December 27, 2023

Db2 for z/OS: Code Level, Catalog Level, Function Level, and More

In a Db2 for z/OS context, the terms "code level," "catalog level," and "function level" were introduced when the Db2 for z/OS development team went to the continuous delivery mechanism for delivering new product functionality in between the availability dates of new versions of the DBMS. That shift happened a little over 7 years ago, with the general availability of Db2 12 - the first continuous-delivery version of Db2 for z/OS. And yet, there remains a good bit of misunderstanding among some in the Db2 for z/OS user community regarding basic concepts that are foundational to continuous delivery. That being the case, I'll try to shed some clarifying light on the subject via this blog entry.


Think about an upside-down hierarchy

By "upside-down" hierarchy, I mean one that goes bottom-up from a dependency perspective. At the bottom - the foundation - you have the Db2 system's code level. This has to do largely with the currency of the code in the Db2 system's load library (or libraries, if this is a Db2 data sharing group - it's a best practice in that case for each member subsystem to have its own load library). This is the code that is loaded into memory when the Db2 system is started. You obviously don't have availability of Db2 functionality if that functionality is not present in the Db2 system's code; so, if a Db2 system's code level is 121505 (indicating code that includes functionality delivered up through Db2 version 12 function level 505), you can't create a stored procedure with CREATE OR REPLACE syntax because that syntax was introduced with Db2 12 function level (FL) 507 - by definition, a Db2 12 FL505 code level does not include functionality first delivered by Db2 12 FL507 code.

I mentioned that a Db2 for z/OS system's code level is generally reflective of the currency of the Db2 code in question. Here's what that means: over the course of time, it's normal for the code of a Db2 system (and for other subsystems in a z/OS LPAR and for the z/OS LPAR itself) to be taken to a more-current maintenance level - ideally, this will be done 2-4 times per year, and often the aim is to take the code in the z/OS LPAR (Db2 code included) to a higher RSU level (RSU - short for Recommended Service Upgrade - is a packaging of z/OS and z/OS-related software maintenance that facilitates upgrading the service currency of a z/OS system). This process involves application of PTFs ("fixes," in z/OS parlance) to code in a z/OS system, including Db2 code. Maybe, in the course of one of these service-upgrade procedures, the fix for APAR PH33727 is applied to the system's Db2 code (that which a fix "fixes" is described via the associated APAR, i.e., the APAR describes what is changed or enhanced by the fix). APAR PH33727 is the one associated with Db2 12 function level 510, and when the corresponding PTF gets applied to a Db2 system's code then that system's Db2 code level will go to 121510. Does that mean that functionality delivered through Db2 12 function level 510 is now available in the system? No - there are further dependencies in the bottom-up hierarchy.


Next Db2 level up from code: catalog

The Db2 catalog is the set of tables that basically contain metadata - "data about the data," and about the related Db2 structures (e.g., tables, table spaces, indexes) and other associated database objects (e.g., packages, routines). Sometimes, a Db2 function level introduces new Db2 features that have catalog dependencies - in other words, these are new features that cannot be used until some Db2 catalog changes that support the new features have been effected. Take, for example, Db2 12 function level 509. That function level introduced the ability to specify a data compression type at the individual table space level, or at the partition level for a range-partitioned table space (two data compression types are available in a Db2 for z/OS system - one, which is based on the Lempel-Ziv compression algorithm, is referred to as fixed-length, and the other is Huffman compression). For a Db2 DBA to be able to utilize this feature, the first requirement is the ability to specify COMPRESS YES FIXEDLENGTH or COMPRESS YES HUFFMAN in a CREATE or ALTER TABLESPACE statement. That ability is provided in the Db2 code starting with code level 121509; however, the new forms of the COMPRESS YES clause can't be used unless Db2 can record in the catalog the fact that fixed-length or Huffman compression is used for a given table space or table space partition. That cataloging capability is provided by the COMPRESS_USED column that is added to the catalog table SYSIBM.SYSTABLEPART when the Db2 catalog level goes to V12R1M509 - hence, getting the catalog level to V12R1M509 is required for compression-type specification at the table space or partition level in a Db2 12 system (by the way, "fixed length," in a Db2 data compression context, does not refer to the length of rows in a table - it refers to the length of substitution values in a compression dictionary).
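
To put the feature in SQL terms, here is a rough sketch of the DDL involved, once all of the prerequisites covered in this entry are in place (the database, table space and partition identifiers are made up for illustration, and keep in mind that, depending on the table space, a change of compression type may be a pending change that gets materialized by a subsequent online REORG):

  ALTER TABLESPACE DBPROD1.TSORDERS
    COMPRESS YES HUFFMAN;

  -- compression type can also be specified for an individual partition
  -- of a range-partitioned table space:
  ALTER TABLESPACE DBPROD1.TSHIST
    ALTER PARTITION 3
    COMPRESS YES FIXEDLENGTH;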

When there is a requirement to take a Db2 catalog to a higher level, that change is accomplished via execution of the Db2 utility called CATMAINT, with a specification of (for example) UPDATE LEVEL(V12R1M509). Note that if a Db2 system's catalog is currently at, say, the V12R1M500 level, it can be taken straight to the V12R1M509 level with one execution of CATMAINT - that one execution of the utility would make the catalog changes associated with level 509, and also the changes associated with other catalog levels between 500 and 509.
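
The CATMAINT control statement itself is a one-liner - something like this, run through whatever utility batch job mechanism you normally use (for example, the IBM-supplied DSNUPROC JCL procedure):

  CATMAINT UPDATE LEVEL(V12R1M509)

A -DISPLAY GROUP command (covered near the end of this entry) is an easy way to verify the catalog level after the utility has run.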

Sometimes, a Db2 function level introduces new capabilities that do not require catalog changes. In such cases, the catalog only has to be at the level related to the last preceding function level that did require catalog changes. For example, the features of Db2 12 function level 510 have no catalog dependencies; thus, there is no 510 catalog level, and use of Db2 12 FL510 functionality can be available when the Db2 system's catalog level is V12R1M509 (the description of a function level in the Db2 for z/OS documentation always lets you know if the function level requires catalog changes).

I mentioned in the preceding paragraph that Db2 12 FL510 functionality "can be" available when the Db2 system's catalog level is V12R1M509. Does that mean that something other than a catalog level change can be required to use the features of a Db2 function level? Yep - that's exactly what that means.


Next level up: activated function level

For the continuous delivery mechanism for Db2 new-function delivery to work in a practical sense, the "turning on" of a function level's new features had to be made an asynchronous event with respect to the up-leveling of Db2 code that would introduce the features to the Db2 subsystem's load library. If this were not the case - if, instead, a Db2 code level's new features were instantly available once present from a load library perspective - then Db2 for z/OS systems programmers might hesitate to upgrade the maintenance level of a Db2 system out of concern about readiness to provide support and guidance in the use of the new features. That would not be a good thing - z/OS and z/OS subsystems function best when they are at a relatively current level of maintenance.

The means by which the addition of new features to a Db2 system's code is made asynchronous to the usability of those features is the Db2 command -ACTIVATE FUNCTION LEVEL. So, a Db2 system's code level might be 121509, and the system's Db2 catalog level might be V12R1M509, but the previously mentioned ability to issue ALTER TABLESPACE (or CREATE TABLESPACE) with a COMPRESS YES HUFFMAN specification won't be there until a Db2 administrator has issued the command -ACTIVATE FUNCTION LEVEL(V12R1M509). Thanks to the -ACTIVATE FUNCTION LEVEL command, a Db2-using organization can decide when they want the features introduced in a Db2 code level to be usable in their Db2 environment.
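
By way of example, the command sequence might look like this - and, as I recall, the TEST option provides a way to check whether the specified function level could be activated, without actually activating anything:

  -ACTIVATE FUNCTION LEVEL (V12R1M509) TEST
  -ACTIVATE FUNCTION LEVEL (V12R1M509)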

So, does the -ACTIVATE FUNCTION LEVEL command put us at the top of our upside-down Db2 continuous delivery hierarchy? Not quite. One to go...


The last level: application compatibility

In a typical production Db2 for z/OS system, there's a lot going on - lots of different applications accessing Db2 for z/OS-managed data, lots of DBA activity related to administering the system, lots of new-program deployment action, etc. In light of that fact, the -ACTIVATE FUNCTION LEVEL command is a pretty big switch. What if the immediate need that an organization has for a given Db2 function level is related to exploitation of a new feature for a single application, or for one particular database administration task? Db2 application compatibility levels provide a way to very selectively exercise functionality that has been newly activated in a Db2 system. Db2 application compatibility levels are managed primarily through a Db2 package bind parameter called APPLCOMPAT (you might want to check out the part 1 and part 2 entries on APPLCOMPAT that I posted to this blog a few years ago). Returning to the previously used example, let's say that a Db2 DBA wants to alter a table space to use Huffman compression. Is it enough for the Db2 system's code level to be 121509, and for the catalog level to be V12R1M509, and for V12R1M509 to be the activated function level? No - that's not enough. The DBA will issue the ALTER TABLESPACE statement with a COMPRESS YES HUFFMAN specification by way of a Db2 package (there is always a package associated with execution of a Db2 SQL statement). That package might be related to one of the Db2-provided programs often used by DBAs to do their work - maybe SPUFI, or DSNTEP2. The package, like all packages, will have an APPLCOMPAT specification. For the ALTER TABLESPACE with COMPRESS YES HUFFMAN to execute successfully, the package through which the statement is issued - DSNTEP2, let's say - must have an APPLCOMPAT specification of not less than V12R1M509.

As this example suggests, a package's APPLCOMPAT value enables a program that issues SQL through the package to utilize SQL syntax that was introduced with a given Db2 function level. That is one purpose of the APPLCOMPAT package bind specification. The other purpose of APPLCOMPAT is to enable a program to get the SQL behavior of an earlier version and function level of Db2 for z/OS, if that older SQL behavior is needed. See, there are times when, going from one version or function level of Db2 to another, the behavior of a SQL statement will change. What does that mean? It means same SQL statement, same data, different result. This kind of change is referred to in the Db2 for z/OS documentation as a SQL incompatibility. There are times when a program executing in a Db2 system with function level X activated needs the behavior that a SQL statement had with a Db2 version or function level that is older than X. APPLCOMPAT can deliver, for this program, that older Db2 behavior. Here's an example: suppose that a DBA named Steve needs to create a non-universal table space in a Db2 system that he administers, and let's say that the activated function level for this system is V12R1M510. It's a fact that, starting with function level V12R1M504, a CREATE TABLESPACE statement can only create a universal table space. Is Steve stuck? No. Steve can create the needed non-universal table space by using a program (we'll say the Db2-provided DSNTEP2) whose package has an APPLCOMPAT value of V12R1M503. What if the DSNTEP2 package at Steve's shop has an APPLCOMPAT value of V12R1M504 or higher? No problem: Steve just needs to make sure that the first SQL statement issued by his DSNTEP2 job is SET CURRENT APPLICATION COMPATIBILITY = 'V12R1M503'; then, a CREATE TABLESPACE statement can be issued to create a non-universal table space (this scenario is described in an entry I posted to this blog in 2020). Note that SET CURRENT APPLICATION COMPATIBILITY can be used (with a dynamic SQL-issuing program) to dynamically take a program's application compatibility level to something below - but not above - the APPLCOMPAT level of the program's Db2 package.
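
Here is a sketch of what the input to Steve's DSNTEP2 job might look like (the database and table space names are invented for illustration - the key points are the SET statement up front, and the fact that a SEGSIZE specification with no MAXPARTITIONS or NUMPARTS yields a segmented, non-universal table space):

  SET CURRENT APPLICATION COMPATIBILITY = 'V12R1M503';

  CREATE TABLESPACE TSLEGACY
    IN DBLEGACY
    SEGSIZE 32
    LOCKSIZE ANY;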

For many Db2 DDF-using client-server applications (applications known as DRDA requesters in Db2-speak), the Db2 packages used will be those related to the IBM Data Server Driver. These packages reside, by default, in a collection named NULLID. If the NULLID packages have an APPLCOMPAT value of X, and some DRDA requester application requires an APPLCOMPAT value that is higher or lower than X, this need is often satisfied by BIND COPY-ing the packages in NULLID into an alternate collection (with the required APPLCOMPAT specification), and then using the Db2 profile tables to automatically direct the DRDA requester application in question to the alternate IBM Data Server Driver package collection. This technique is described in an entry I posted to this blog a while back, and while that entry refers to IBM Data Server Driver packages BIND COPY-ed into an alternate collection with an alternate RELEASE specification, the same thing can be done for IBM Data Server Driver packages that have an alternate APPLCOMPAT specification.
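
In sketch form, one of those BIND COPY actions might look like the statement below (the alternate collection name is made up for illustration, and SYSLH200 is shown simply as an example of one of the driver packages - you'd do this for each of the IBM Data Server Driver packages in NULLID):

  BIND PACKAGE(NULLID_M510) COPY(NULLID.SYSLH200) APPLCOMPAT(V12R1M510) ACTION(REPLACE)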

OK, so remember this bottom-up thing:

  1. First, the feature you want to use needs to be present in your Db2 system's code - that's the code level.
  2. The feature you want to use may have a catalog level requirement - that's the catalog level. You can't take the catalog level to X (via execution of the Db2 CATMAINT utility) unless the system's Db2 code level is at least X.
  3. When the code and catalog levels are right for the Db2 feature you want to use, you need to make sure that the appropriate function level has been activated on the Db2 system - that's the activated function level. Function level X cannot be activated unless the Db2 system's code level is at least X and the Db2 catalog level is at least X (or, if function level X has no catalog dependencies, the catalog level has to be at least the level of the last preceding function level that did have catalog dependencies).
  4. For your program to use the Db2 feature of interest, the application compatibility level has to be set as needed - that's done via the APPLCOMPAT value of the program's Db2 package (or by execution of SET CURRENT APPLICATION COMPATIBILITY, if the program issues dynamic SQL statements and if you need to take the application compatibility level lower than the package's APPLCOMPAT value). A package's application compatibility level cannot be set to X unless the activated function level of the Db2 system is at least X.

Checking on all this in your Db2 for z/OS environment

To see the upside-down hierarchical lay of the land in your Db2 environment, issue the Db2 command -DISPLAY GROUP. The output will look something like this (and don't be misled by the word GROUP in the command - it is applicable to a standalone Db2 subsystem as well as to a Db2 data sharing group):

*** BEGIN DISPLAY OF GROUP(DSNPROD ) CATALOG LEVEL(V13R1M100)
                  CURRENT FUNCTION LEVEL(V13R1M100)
                  HIGHEST ACTIVATED FUNCTION LEVEL(V13R1M100)
                  HIGHEST POSSIBLE FUNCTION LEVEL(V13R1M500)
                  PROTOCOL LEVEL(2)
GROUP ATTACH NAME(DSNP)
-----------------------------------------------------------------
DB2        SUB                    DB2    SYSTEM   IRLM
MEMBER  ID SYS  CMDPREF  STATUS   LVL    NAME     SUBSYS IRLMPROC
------- -- ---- -------- -------- ------ -------- ----   --------
DB1P     1 DB1P DB1P     ACTIVE   131503 SYS1     IR1P   DB1PIRLM
DB2P     2 DB2P DB2P     ACTIVE   131503 SYS2     IR2P   DB2PIRLM
-----------------------------------------------------------------
SCA STRUCTURE SIZE:   36864 KB,  STATUS= AC,  SCA IN USE:     1 %
LOCK1 STRUCTURE SIZE: 17408 KB
NUMBER LOCK ENTRIES:   2097152
NUMBER LIST ENTRIES:     21415,  LIST ENTRIES IN USE:           3
SPT01 INLINE LENGTH:     32138
*** END DISPLAY OF GROUP(DSNPROD )

For this 2-member Db2 data sharing group, the code level, shown in the DB2 LVL column for each member, is 131503 (Db2 13 code that includes functionality delivered up through function level 503). The catalog level, shown in the CATALOG LEVEL field, is V13R1M100. The activated function level, shown in the HIGHEST ACTIVATED FUNCTION LEVEL field, is V13R1M100. As for a package's APPLCOMPAT level, you can see that via a query of the Db2 catalog table SYSIBM.SYSPACKAGE (check the value in the APPLCOMPAT column for the package's row in the table).
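
Such a query might look like this (the collection and package names are placeholders, of course):

  SELECT COLLID, NAME, VERSION, APPLCOMPAT, VALID, OPERATIVE
  FROM SYSIBM.SYSPACKAGE
  WHERE COLLID = 'MYCOLL'
    AND NAME = 'MYPKG';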

I hope that this information will be useful for you. The end of 2023 is around the corner. I'll post more in '24.

Wednesday, November 29, 2023

Db2 13 for z/OS: Autobind Phase-In

Db2 13 function level 504 became available last month (October 2023), via the fix for APAR PH54919. One of the new capabilities delivered with FL504 is something called autobind phase-in. I like that new feature a lot, and I think you will, too - especially if you're a Db2 for z/OS DBA. In this blog entry I'll explain what autobind phase-in is, why it's a very welcome addition to Db2 functionality, and how you can get ready to leverage the feature, even before you've activated function level V13R1M504.

First, a shout-out to my coworker Dengfeng Gao, a member of the IBM Db2 for z/OS development team. Dengfeng had a lot to do with making autobind phase-in a reality, and she provided me with much of the information I'm now passing along to you. Thanks, Dengfeng!

OK, to begin the story...


The way things were

Way, way back (early 1990s, as I recall), Db2 for z/OS introduced packages. For a Db2-accessing program that issues static SQL statements, you can think of the associated package as being, in essence, the compiled and executable form of the program's SQL statements (what distinguishes static SQL statements: they are prepared for execution prior to being issued by a program, via a Db2 process known as bind). When a static SQL-issuing program executes (this could be, for example, a CICS transaction, or a batch job, or a stored procedure, or a Db2 REST service), the program's package is allocated to the Db2 thread being used by the application process, and the part of the package corresponding to a particular SQL statement is executed when it's time to run that SQL statement.

SQL statements, by and large, reference Db2 tables; thus, packages are dependent on the tables referenced by SQL statements associated with the packages. Packages are also dependent on database objects that are not referenced in SQL statements (indexes are a prime example), when those objects are part of a SQL statement's access plan (i.e., the paths and mechanisms by which data targeted by a SQL statement will be accessed - for example, via a nested loop join that will employ certain indexes on the outer and inner tables). The dependencies of packages on database objects (tables, table spaces, views, indexes, etc.) are recorded in the SYSIBM.SYSPACKDEP table in the Db2 catalog.
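
If you want to see those recorded dependencies for a given package, a catalog query along these lines will do it (the collection and package names are placeholders; the BQUALIFIER, BNAME and BTYPE columns identify the objects on which the package depends, and their types):

  SELECT BQUALIFIER, BNAME, BTYPE
  FROM SYSIBM.SYSPACKDEP
  WHERE DCOLLID = 'MYCOLL'
    AND DNAME = 'MYPKG';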

Sometimes, a database object on which a package depends is changed in a way that requires regeneration of the package; or, an object on which the package depends (such as an index) might be dropped; or, a privilege needed by the package's owner might be revoked. In such situations, the package in question is marked as "invalid" by Db2 (such a package will have a value of 'N' in the VALID column of the SYSIBM.SYSPACKAGE catalog table). When that happens (in a pre-Db2 13 FL504 environment), the package cannot be executed again until it is regenerated by Db2. That regeneration could be accomplished through a REBIND command issued for the package by a DBA (or issued from a batch job coded by a DBA); but, what if there is a request to execute an invalidated package before that package has been regenerated through a REBIND command? In that case, Db2 will automatically regenerate the package, and that process is called autobind (it's sometimes referred to as auto-rebind).
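
By the way, finding packages that are sitting in invalid status is straightforward - a query along these lines will show them:

  SELECT COLLID, NAME, VERSION
  FROM SYSIBM.SYSPACKAGE
  WHERE VALID = 'N'
  ORDER BY COLLID, NAME;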

When autobind happens for a package (again, we're talking about a pre-Db2 13 FL504 environment), it can be disruptive for the application(s) that drive execution of the package. This disruption can take several forms:

  • The application process whose request for execution of an invalidated package triggered the autobind has to wait until the autobind completes.
  • If another application process also requests execution of the invalidated package before the first autobind completes, that will result in a second attempt to autobind the package. That second attempt will have to wait, because it requires a lock on the package held by the first autobind process; thus, this second requester of the package will also sit and wait (if the first autobind finishes successfully and the second requester has not timed out in the meantime, the second requester will use the package as regenerated by the initial autobind process).
  • If the autobind fails (this could happen as the result of an authorization issue, among other things), the package will be marked as "inoperative" by Db2 (indicated by the value 'N' in the OPERATIVE column of the package's row in SYSIBM.SYSPACKAGE). In that case, any attempt to execute the package will fail until the package is explicitly rebound (usually, by a DBA).


This not-good situation changes dramatically (for the better) with Db2 13 FL504 autobind phase-in functionality. Before getting to that, I'll cover some prep work in which DBAs will want to engage.


Laying the autobind phase-in groundwork: a new catalog table, and a new BIND/REBIND option

When function level 500 has been activated in a Db2 13 system, the CATMAINT utility can be executed to take the catalog level to V13R1M501. When that happens, some new tables get added to the catalog. One of those new catalog tables is SYSIBM.SYSPACKSTMTDEP. As the name implies, Db2 will use this table to record static SQL dependencies on database objects at the statement level. Does that just happen? Nope - and this is where DBA action comes in.

When function level V13R1M502 has been activated, new packages can be bound - and existing packages can be rebound - with the new DEPLEVEL option. If you bind or rebind a package with DEPLEVEL(STATEMENT) then Db2 will record statement-level dependency information in the SYSPACKSTMTDEP catalog table, in addition to recording package-level dependency information in SYSPACKDEP (if you bind or rebind with a specification of DEPLEVEL(PACKAGE), it'll be business as usual - only package-level dependency information will be recorded in the catalog).
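
In command terms, opting in is as simple as the sketch below (the collection, package and DBRM member names are illustrative):

  REBIND PACKAGE(COLL1.MYPKG) DEPLEVEL(STATEMENT)

  BIND PACKAGE(COLL1) MEMBER(MYPGM) DEPLEVEL(STATEMENT) ACTION(REPLACE)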

Would you like to make DEPLEVEL(STATEMENT) the default for package BIND and REBIND actions? If so, set the value of the ZPARM parameter PACKAGE_DEPENDENCY_LEVEL to STATEMENT.

Is there any downside to binding or rebinding packages with DEPLEVEL(STATEMENT)? A small one (in my opinion): because of the extra work of recording statement-level dependency information in SYSPACKSTMTDEP, binding or rebinding with DEPLEVEL(STATEMENT) will somewhat increase elapsed and CPU time for the bind/rebind operation. My expectation is that in a typical production Db2 for z/OS system, CPU consumption related to BIND and REBIND activity is a very small fraction of total CPU consumption.

Is there an upside to binding and rebinding packages with DEPLEVEL(STATEMENT)? Oh, yeah...


Enter autobind phase-in

Once a Db2 13 system's activated function level is V13R1M504 or higher, this is what happens when a package that has been bound or rebound with DEPLEVEL(STATEMENT) is invalidated: the first request to execute the package following invalidation (assuming the invalidated package wasn't explicitly rebound before that execution request) will trigger an autobind of the package.

"Wait," you might think. "Isn't that the same thing that happened before the advent of autobind phase-in?" Yes, but the autobind itself takes a very different path, and has a very different workload impact, versus the prior process. To wit:

  • The package will be regenerated in the background. The process that requested execution of the invalidated package will be allowed to execute the package - it will not have to wait for the autobind to complete.
  • When the invalidated package is executed, statements that were not invalidated by the action that invalidated the package (e.g., an ALTER of a table that is referenced by some - but not all - of the package's statements) will continue to execute as they did before the invalidation of the package.
  • Also when the invalidated package is executed, statements that were invalidated by the (for example) ALTER action will be incrementally bound when issued by the associated program. This means that they will be dynamically prepared for execution, and that will mean a temporary additional CPU cost (temporary until the in-the-background autobind completes the regeneration of the package that had been invalidated), but the statements will be executed.
  • And if, before the in-the-background autobind completes, there is a second request to execute the invalidated package, will that trigger a second autobind action? Nope - the one autobind is for any and all package requesters. That second requester will be allowed to execute the invalidated package, just as was the case for the requester that triggered the in-the-background autobind - still-valid statements will execute as usual, and invalidated statements will be incrementally bound and then executed.
  • When the in-the-background autobind has finished its work, the newly regenerated package will be phased into use, in much the same way that the rebind phase-in functionality introduced with Db2 12 FL505 phases a newly rebound package into use: the first request for execution of the package following completion of the in-the-background autobind will get the regenerated (and now valid) package. Eventually, processes that had been executing the previous instance of the package (the instance that had been invalidated) will be done with that, and all processes will be using the regenerated package when they request its execution.
  • If the in-the-background autobind fails, will the invalidated package be marked inoperative (with attendant error situations for processes that request execution of the package)? Nope. In that case, the package will be marked with rebind-advisory status ('R' in the OPERATIVE column for the package's row in SYSPACKAGE). The package can still be executed (as described above: as-usual for not-invalidated statements, incremental bind for invalidated statements), but an explicit REBIND is recommended to get the package regenerated and back into a valid status.


Bottom line: with autobind phase-in, autobind activity will have hugely less impact on throughput and service levels for applications that execute packages that have been invalidated.

Note that the above-described much-better autobind process applies only to packages that have been bound or rebound with DEPLEVEL(STATEMENT) - and you can start doing that (as previously mentioned) once you've activated function level 502 in a Db2 13 system.

One other item of information: the in-the-background autobind done for autobind phase-in will execute in access-path-reuse mode - in other words, Db2 will reuse the access paths previously utilized for the package's SQL statements, if that can be done (it of course could not be done for all statements if, for example, package invalidation resulted from the dropping of an index on which some of the package's SQL statements depended). The same goes for the incremental bind of invalidated SQL statements when the invalidated package is requested for execution before the in-the-background autobind has completed - access paths will be reused if possible.

OK, so if you've gotten to Db2 13 at your site, give major consideration to rebinding packages (and binding new packages) with DEPLEVEL(STATEMENT), once function level V13R1M502 or higher has been activated; and, look forward to a much more application workload-friendly autobind process when you get function level V13R1M504 activated.

Monday, October 30, 2023

Db2 13 for z/OS: A New Means for Managing RELEASE(DEALLOCATE) Packages

It has long been understood by many Db2 for z/OS DBAs that a combination of the RELEASE(DEALLOCATE) package bind specification (especially for frequently executed Db2 packages that consume little CPU time per execution) and persistent threads can significantly reduce in-Db2 CPU time for Db2-accessing applications (a "persistent thread" is one that persists through commits - examples include CICS-Db2 protected threads, the threads between IMS wait-for-input regions and Db2, the Db2 threads associated with batch jobs, and high-performance database access threads, aka high-performance DBATs). The CPU efficiency benefit of RELEASE(DEALLOCATE) + persistent threads comes mainly from avoiding the cost of constantly releasing the package in question (i.e., separating it from the thread) at a commit point, only to reallocate it to the thread when it is subsequently (often, very soon - maybe a fraction of a second later) again requested for execution (some additional CPU savings are achieved via retention across commits of "parent" locks acquired in the execution of the package - these are table space- or partition-level locks, and they are almost always non-exclusive in nature).

Always nice to get a Db2 workload CPU efficiency boost, but those CPU savings came at one time with several "flip side" concerns. One of those concerns - conflict with Db2 utilities caused by retained parent locks - was addressed a long time ago (back in the mid-1990s) with the advent of the drain locking mechanism that utilities can use to gain exclusive access to a database object. Another concern from days past had to do with virtual storage constraint - the combination of RELEASE(DEALLOCATE) and persistent threads causes said threads to consume more virtual storage, and when that virtual storage was within the quite-limited confines of the EDM pool, there was a real risk of that getting filled up and causing application failures if one were not careful. Thankfully, that virtual storage-related risk was eliminated with Db2 10, when the virtual storage space used for allocation of packages to threads for execution moved from the EDM pool to above-the-bar agent local pool space, of which there is a very large quantity.

That left us with one more operational challenge associated with the RELEASE(DEALLOCATE) + persistent threads combo: conflict between in-use packages and processes that need to either rebind or invalidate a package. See, a package can't be rebound or invalidated when it is in-use, and a RELEASE(DEALLOCATE) package allocated for execution to a persistent thread is considered by Db2 to be continuously in-use until the thread is terminated, and that could be a while. This last operational challenge was partly addressed with the rebind phase-in functionality that was introduced with function level 505 of Db2 12 for z/OS. OK, great - you can successfully and non-disruptively rebind a package even when the package is in-use at the time of the issuance of the REBIND PACKAGE command; but, what about the situation in which a package needs to be invalidated, perhaps as a result of execution of an ALTER statement targeting an object on which the package is dependent (or as a result of an online REORG that materializes a pending DDL change)? Db2 11, in new-function mode, provided an assist in this case: Db2 gained the ability to detect when a package-invalidating action was being blocked by a dependent package bound with RELEASE(DEALLOCATE) and allocated to a persistent thread - detecting that situation, Db2 could automatically, dynamically and temporarily change the package's behavior to RELEASE(COMMIT) at its next commit point. This enhancement, though welcome, did not fully eliminate the problem.

Why did Db2 11's automatic detection and reaction to a RELEASE(DEALLOCATE) package being allocated to a persistent thread not totally resolve the blocked package invalidation problem? Two reasons:

  1. The persistent thread to which the RELEASE(DEALLOCATE) package is allocated may be outside of Db2 - being, perhaps, in a between-the-last-and-the-next-transaction situation (this can particularly be an issue for high-performance DBATs).
  2. There might be several different dependent packages bound with RELEASE(DEALLOCATE), and copies of these various packages could be allocated to a large number of threads, such that Db2 is not able to separate all the packages from all those threads in time to keep the package-invalidating action from timing out.

For these situations, Db2 13 provides the solution in the form of a new option that can be exercised via the profile tables (SYSIBM.DSN_PROFILE_TABLE and SYSIBM.DSN_PROFILE_ATTRIBUTES). The new option, available when Db2 13 function level 500 has been activated, comes in the form of a new keyword for the DSN_PROFILE_ATTRIBUTES table: RELEASE_PACKAGE. This keyword has one available associated ATTRIBUTE1 value: COMMIT, and specification of that value will cause Db2 to override RELEASE(DEALLOCATE) behavior for the package (or packages) associated with a profile to RELEASE(COMMIT) behavior, until further notice. And, this new profile table option can be used for local-to-Db2 applications (e.g., CICS-Db2 or IMS TM-Db2 or batch Db2) as well as for DDF-using applications (the profile tables had previously, essentially, been relevant only to DDF-using applications).

Let's consider a use-case scenario to illustrate exploitation of this Db2 13-delivered capability. Suppose you're a Db2 DBA, and you need to issue an ALTER TABLE statement that will invalidate several packages, in collection COLL_A, that are bound with RELEASE(DEALLOCATE) and are executed by way of persistent threads. Not wanting this ALTER TABLE statement to time out due to conflict with the dependent RELEASE(DEALLOCATE) packages, you take this approach (and we'll assume that this is a Db2 13 system with function level 500 or higher activated):

  1. Sometime prior to the time you want to issue the ALTER TABLE statement (maybe 15 minutes ahead of that time, as an example), you insert a row in DSN_PROFILE_TABLE to identify a profile that is associated with the collection COLL_A (you could also do this for a specific package in COLL_A, but we'll do this for the whole collection in this case). In doing this, you put the value 'Y' in the PROFILE_ENABLED column, to let Db2 know that this profile (and its associated attributes) is "live" (i.e., in effect).
  2. The just-created profile, like all profiles, has an ID associated with it (an integer value). We'll say that the ID for this COLL_A-related profile is 5. In the DSN_PROFILE_ATTRIBUTES table, you insert a row for profile 5. In that row, you specify 'RELEASE_PACKAGE' for the KEYWORDS column value, and 'COMMIT' for the ATTRIBUTE1 value. For the ATTRIBUTE2 column you specify a value of 1, because (in this example) you want the RELEASE(DEALLOCATE) override action to apply only to local-to-Db2 processes (a value of NULL in the ATTRIBUTE2 column would indicate that the RELEASE(DEALLOCATE) override action is to be taken only for DDF-using processes, and a value of 2 would mean, "Take this action for all related processes, whether local-to-Db2 or involving DDF").
  3. You issue the Db2 command -START PROFILE, so that Db2 will load the information in the profile tables into memory. Db2 sees that profile 5 (among, possibly, others) is enabled, and takes action: every time a package in COLL_A is loaded for execution, it will be treated as though bound with RELEASE(COMMIT), even if RELEASE(DEALLOCATE) had been specified for the most recent bind or rebind of the package.
  4. Because you took these steps 15 minutes prior to the time for issuing the ALTER TABLE statement, Db2 had plenty of time to switch to RELEASE(COMMIT) behavior for every instance of a RELEASE(DEALLOCATE) package in COLL_A that is allocated for execution to a persistent thread. You issue the ALTER TABLE statement, and it succeeds because there are no dependent packages bound with RELEASE(DEALLOCATE) and allocated to persistent threads to block execution of the statement. Note that the application workload associated with the RELEASE(DEALLOCATE) packages continues to execute - just with RELEASE(COMMIT) behavior in effect for those packages. That means you temporarily do without the CPU efficiency benefit of RELEASE(DEALLOCATE) for the associated application(s).
  5. With the ALTER TABLE statement having been successfully executed, you update the row for profile 5 in DSN_PROFILE_TABLE to have 'N' in the PROFILE_ENABLED column, to show Db2 that this profile (and its attributes) is no longer in effect. A subsequent issuance of the -START PROFILE command lets Db2 know of this profile status change by re-loading the profile table information into memory.

And that's it. A problem that might have been standing in your way is taken care of, easily and non-disruptively.
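
For reference, steps 1 through 3 of the scenario might look something like the sketch below in SQL-and-command terms (the profile ID of 5 is just for illustration, and only the columns relevant to this example are shown - check the definitions of the profile tables at your site before inserting a profile ID value explicitly). Step 5 is then just an UPDATE of PROFILE_ENABLED to 'N' for the profile's row, followed by another -START PROFILE.

  INSERT INTO SYSIBM.DSN_PROFILE_TABLE
    (PROFILEID, COLLID, PROFILE_ENABLED)
  VALUES (5, 'COLL_A', 'Y');

  INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
    (PROFILEID, KEYWORDS, ATTRIBUTE1, ATTRIBUTE2)
  VALUES (5, 'RELEASE_PACKAGE', 'COMMIT', 1);

  -START PROFILE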

Of course, this story is not yet over. With the ALTER TABLE statement having been successfully executed, packages dependent on the table are invalidated. What happens after that? The invalidated packages will be auto-rebound by Db2 when next requested for execution, if you don't explicitly rebind them before that happens. Db2 13 function level 504, which came out just a few days ago, delivers big news on the auto-rebind front. I'll post a blog entry on that enhancement within the next few weeks.

Wednesday, September 27, 2023

Db2 for z/OS: Two Stories of Temporary Tables

Db2 for z/OS provides, for your use, two different temporary table types: declared and created. I recently worked with Db2 for z/OS people at a couple of different sites, providing assistance in understanding and effectively utilizing Db2 temporary tables, and I think information related to these experiences might be useful to folks in the larger user community - thus this blog entry. The first story points up an advantage of declared global temporary tables, while the second illustrates advantageous use of a created global temporary table. I hope that what follows will be of interest to you.


Db2 declared global temporary tables - the index advantage

A Db2 declared global temporary table (DGTT) is so-called because it comes into existence when it is declared - usually in an application program. There can be plenty of reasons for a program to declare a temporary Db2 table. For example, an application might need to get a preliminary set of result set rows from Db2 and then operate further on that data - easily done by inserting the result set rows into a declared temporary table and subsequently issuing various SQL statements that target the DGTT.

A Db2 for z/OS DBA sent me the other day some information about a use case at his site that made a declared global temporary table the right choice. Because the temporary table in this case was going to be fairly large, and because DELETE statements targeting the table would be coded with predicates, there would be a performance advantage to the use of a declared versus a created global temporary table (CGTT): an index can be defined on a DGTT, but not on a CGTT (this and other differences between declared and created global temporary tables are listed in the Db2 for z/OS online documentation).
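
As a sketch (the table, column and index names are invented for illustration), the program-side statements might look like this:

  DECLARE GLOBAL TEMPORARY TABLE SESSION.WORK_ROWS
    (ACCT_ID  BIGINT,
     STATUS   CHAR(2),
     AMOUNT   DECIMAL(11,2))
    ON COMMIT PRESERVE ROWS;

  CREATE INDEX SESSION.WORK_ROWS_IX1
    ON SESSION.WORK_ROWS (STATUS);

  -- with the index in place, predicate-qualified DELETEs like this one
  -- can avoid scanning the whole (large) temporary table:
  DELETE FROM SESSION.WORK_ROWS
  WHERE STATUS = 'CL';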

So, with the available option of indexing, DGTTs will generally be preferred over CGTTs, right? Not necessarily...


When a CGTT can majorly reduce elapsed and CPU time for a process

The second Db2 temporary table story I'll relate came to me by way of a Db2 for z/OS systems programmer who works for a financial services organization. In this case, a z/OS-based COBOL batch program was retrieving data from a VSAM file and inserting related information into a Db2 temporary table. Rows were then SELECTed from the temporary table for further processing, after which the rows in the temporary table were deleted (via a predicate-less DELETE). These insert/retrieve/delete actions involving the temporary table were repeated about 300,000 times in an execution of the batch job. At first, the COBOL program declared and utilized a declared global temporary table. The problem? That approach did not perform as needed: the job consumed around 20 minutes of CPU time, and the SQL activity drove approximately 23 million GETPAGE requests (a GETPAGE happens when Db2 needs to examine the contents of a page of a database object, and GETPAGE activity is a major factor in the CPU cost of SQL statement execution). The Db2 sysprog also noticed that the batch process generated a lot of PREPARE activity (referring to the dynamic preparation of SQL statements for execution by Db2 - something that can significantly add to an application program's CPU consumption).

To try to reduce the CPU cost of the batch job, the Db2 team at this financial services organization switched from using a DGTT to a "permanent" (i.e., a "regular") Db2 table. Performance indeed got way better: GETPAGE requests dropped by over 70%, and CPU time for the job went from about 20 minutes to about 4 minutes. Why the big drop in GETPAGEs and CPU time? Probably this had to do with elimination of expensive SQL statement preparation activity. See, you might think that the SQL statements hard-coded in your COBOL (for example) program are all static ("static" SQL statements are prepared by Db2 for execution prior to program execution, via a process known as BIND), but when those statements refer to a DGTT they have to be dynamically prepared for execution when issued by the program because there is no definition of a DGTT in the Db2 catalog.

This big improvement in CPU efficiency notwithstanding, there was a side-effect of the switch from a DGTT to a permanent table that did not sit well with the Db2 team at the financial services company: as previously noted, the set of SQL statements targeting (initially) the DGTT and then the permanent Db2 table involved a delete of all rows in the table, and that happened 300,000 times in the execution of the batch job. When a permanent table was used in place of the DGTT, these 300,000 mass DELETEs (a mass DELETE is a DELETE without predicates) caused 300,000 rows to be inserted for the permanent table in the SYSCOPY table in the Db2 system's catalog. Running a MODIFY RECOVERY utility job to clear those rows out of SYSCOPY, and having to take an image copy of the permanent table to keep it from going into COPY-pending status, were viewed as significant hassles by the Db2 team. Was a still-better way forward available to them?

Indeed so. I suggested going with a created global temporary table. [Like a permanent table, a CGTT is defined by way of a CREATE statement, and there is information about a CGTT in the Db2 catalog. When a program references the CGTT it gets its own instance of the CGTT which (like a DGTT) is physically provisioned in the Db2 work file database.] The Db2 team did that, and the results were very positive. CPU time for the job - originally about 20 minutes with the DGTT and then about 4 minutes with the permanent table - went down to just over 2 minutes with the CGTT (as with the permanent table, no SQL statement dynamic preparation was needed, thanks to the CGTT being defined in the Db2 catalog); and, there were no inserts into SYSCOPY in association with the repeated mass DELETEs (same as with the DGTT); and, there was no need for an image copy of the CGTT because the instance of the table goes away automatically when the process using the table completes (same as with the DGTT). So, the CGTT in this case provided the advantages of a DGTT and of a permanent Db2 table, minus the relative disadvantages of those options (the dynamic statement preparation costs of the DGTT, and the mass-DELETE-related SYSCOPY inserts and the image copy requirements of the permanent table).
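
For reference, a CGTT is defined once, up front, with DDL along the lines of the sketch below (the schema, table and column names are invented for illustration); each application process that references the CGTT then gets its own empty instance of the table, physically provisioned in the work file database:

  CREATE GLOBAL TEMPORARY TABLE PRODSCHM.VSAM_WORK
    (ACCT_NO   CHAR(10),
     TXN_AMT   DECIMAL(11,2),
     TXN_DATE  DATE);

  -- in the batch program, each iteration ends with a mass DELETE - and,
  -- unlike the permanent-table case, no SYSCOPY insert results:
  DELETE FROM PRODSCHM.VSAM_WORK;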


The bottom-line message

Declared and created global temporary tables both have their places in a Db2 for z/OS application environment. When considering the use of a Db2 temporary table for an application, be careful not to jump too quickly to a DGTT versus a CGTT decision (though sometimes there will be only one choice - as when, for example, UPDATE access to the temporary table is needed - something that can only be done with a DGTT). Consider the DGTT versus CGTT choice in light of the particulars of the use case, and choose accordingly. A thoughtful choice can yield a substantial performance advantage - so use your head.

Thursday, August 31, 2023

Db2 for z/OS: An Important Difference Between Data in Memory and Data on Disk

For the past several years, I've tried to post an entry per month to this blog. Sometimes, it will be very near the end of a month, and I haven't posted anything since the previous month, and I don't have any good ideas for a new entry. Then, I'll have an exchange with someone - could be via email, a phone call, a face-to-face discussion, whatever - and BOOM! Right there I'll find the inspiration for a blog entry. That happened again yesterday - the day before the last day of the month. Whew!

Here's what happened: in an email message, an IT professional recounted to me a situation that had her perplexed. The scenario: 11,000 rows of data were loaded into a table using the IBM Db2 for z/OS LOAD utility. Immediately after the completion of that load job, a program that updated 8 of those just-loaded rows executed and ran successfully to completion. Right after that, an unload job for the table in question was executed. This unload was performed using an IBM software product called Db2 High Performance Unload for z/OS, or HPU, for short (see https://www.ibm.com/docs/en/dhpufz/5.1.0?topic=documentation-db2-high-performance-unload-overview). HPU has two modes of operation: it can perform an unload by operating directly on the VSAM data set(s) associated with the target table, or it can do the unload through Db2, in which case the data is accessed in memory (i.e., in the buffer pool to which the table's table space is assigned). This unload was done in the former of these modes - operating directly on the VSAM data set(s) associated with the table's table space. The result of the unload surprised the person who emailed me. How so? Well, the unload was done using a predicate (the WHERE clause that you might see in a query), and the update program that ran between the load (of the 11,000 rows) and the unload changed values in a way that should have caused 8 of the 11,000 loaded rows to be filtered out by the unload process's predicate (the other 10,992 rows would be qualified by the predicate). The person who emailed me expected 10,992 records in the unload data set, but there were in fact 11,000 rows in that data set. The updates that should have caused 8 rows to be not-qualified by the unload process's predicate were committed before the unload job ran, so why was this update action not reflected in the contents of the unload data set? Consternation increased when another unload of the table, executed a few hours later (again, with the unload process using a predicate and operating directly on the table's associated VSAM data set(s)), generated an unload data set that did contain the expected 10,992 rows.

What in the world was going on here?

Here's what was going on: this all has to do with a big difference between a committed data change (which has relevance for Db2 data-in-memory) and an externalized data change (which relates to Db2 data-on-disk). What's important to know is that Db2 for z/OS does not externalize data changes (i.e., does not write changed data to the associated VSAM data set) as part of commit processing. Instead, database write I/O operations (to externalize data changes to VSAM data sets on disk) are done in a deferred way (and usually asynchronously, at that). This aspect of Db2's operation is critically important to scalability when it comes to data-change operations (e.g., INSERT, UPDATE and DELETE). If Db2 had to write changed pages to disk at commit time, data-change throughput would be majorly impacted in a negative way. In the scenario described above, the first unload generated by HPU (done right after the programmatic update of 8 of the 11,000 rows previously LOAD-ed into the table), operating directly on the table space's underlying VSAM data set(s), did not reflect the post-LOAD update of the 8 rows because the page(s) changed by the updating program were not written to the underlying VSAM data set(s) at commit time. The changed page(s) were externalized later by Db2 via deferred write processing, and that is why the second unload process, also operating directly on the table space's VSAM data set(s), reflected the aforementioned updates of 8 of the 11,000 table rows.

If Db2 deferred write action did eventually get the changed pages (associated with the updating of 8 of the table's rows) written to the associated VSAM data sets on disk - and it did - then what caused that deferred write action to happen? Usually Db2 deferred write operations for a given buffer pool are driven by one of two deferred write thresholds being reached for the pool. The deferred write queue threshold (abbreviated as DWQT) is expressed as a percentage of the total number of buffers allocated for a pool that are occupied by changed-but-not-externalized pages (the default value is 30), and the vertical deferred write queue threshold (VDWQT) is expressed as a percentage of the pool's buffers that are occupied by changed-but-not-externalized pages that belong to a particular data set (the default value is 5). Whenever either of those limits is reached (and it's usually the VDWQT limit), deferred write activity is triggered. The deferred write I/Os, by the way, are generally multi-page in nature (multiple pages written to disk in one I/O operation), and that is good for CPU-efficiency on a per-page basis. The CPU time associated with database writes (usually not much - I/Os don't require many CPU cycles) is charged to the Db2 database services address space (aka the DBM1 address space).
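
Those thresholds can be adjusted on a per-pool basis, by the way - a sketch, with an illustrative buffer pool name and illustrative values:

  -DISPLAY BUFFERPOOL(BP2) DETAIL
  -ALTER BUFFERPOOL(BP2) VDWQT(1) DWQT(10)

Generally speaking, lower thresholds make deferred write activity more of a steady trickle, while higher thresholds increase the chance that a page will be changed multiple times before it is externalized.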

What about synchronous database write actions? Are they also deferred? Yes, they are - they're just triggered by something besides deferred write queue thresholds being reached. In the case of synchronous writes, the trigger is Db2 checkpoint processing. How that works: when Db2 executes a system checkpoint (which it does, by default, every 3 minutes), it notes all pages in each buffer pool that are in changed-but-not-yet-externalized status. When the next system checkpoint rolls around, Db2 checks to see if any of the changed-but-not-yet-externalized pages noted at the last checkpoint have still not been written to disk. If there are any such pages then they will be synchronously written to disk as part of checkpoint processing. Here, "synchronous" means that Db2 will immediately start writing those pages to disk, and it will continue to do that until they are all externalized.

OK, back to the story that prompted this blog entry. Is there a way that the initial HPU unload (the one executed very shortly after the programmatic update of 8 of the 11,000 rows LOAD-ed into the table) could have generated an unload data set with the desired 10,992 rows? Yes. In fact, there were at least two options for getting that done. One option would be to execute the Db2 QUIESCE utility for the table's table space prior to running the HPU unload. This would cause Db2 to write all changed-but-not-yet-externalized pages of the table's table space to disk, and then an HPU unload operating directly on the table space's VSAM data sets would have reflected the update of the 8 rows.
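
The QUIESCE control statement for that is a one-liner (the database and table space names are illustrative):

  QUIESCE TABLESPACE DBPROD1.TSACCTS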

The second option would be to have HPU do the unload through Db2, as opposed to operating directly on the table space's underlying VSAM data sets - this is something that can be done through an HPU keyword. That, in turn, would have caused the HPU unload to be accomplished using data in memory (i.e., in the table space's assigned buffer pool) - any of the to-be-unloaded pages that were not already in memory would have been read into memory as part of the unload process. This approach would have reflected the programmatic update of the 8 rows because those updates had been committed, and Db2 data-in-memory is always in a transactionally consistent state (any in-memory data that is not transactionally consistent because of an in-flight - that is, not-yet-completed - data change operation is blocked from access by X-type locks, taken at a page or a row level, that are not released until the data changes in question are committed).

Which of these options would you choose? It would depend on what is most important for you. The QUIESCE option would allow the HPU unload to operate directly on the VSAM data set(s) associated with the table space, and that would yield a CPU efficiency benefit, but the QUIESCE itself could be at least somewhat disruptive for applications accessing the target table. The "through Db2" option would not disrupt any application processes that were accessing the table at the time, but it would cause the HPU unload operation to consume some additional CPU time.

By the way, you might be wondering: if committed data changes are written to disk in a deferred way, how is the loss of committed data changes prevented in the event of an abnormal termination (i.e., a "crash") of the Db2 subsystem that happens while there are changed-but-not-yet-externalized pages in memory? Worry not - data recorded in the Db2 transaction log is used to process those "pending writes" as part of the "roll-forward" phase of Db2 restart processing following a subsystem failure.

One more thing: the mechanics of all this are different in a Db2 data sharing environment (involving group buffer pool writes and associated castout operations to eventually get changed pages written to VSAM data sets on disk), but the net effect is the same.

And there you have it. I'm thankful for your visiting this blog, and I'm thankful for interesting questions that come in when I'm trying to figure out what I'm going to blog about.

Friday, July 28, 2023

Db2 for z/OS: What I Would Say to Application Developers (Part 2)

In part 1 of this 2-part entry (posted last month), I emphasized what I consider to be job one for a developer coding a Db2 for z/OS-targeted query (write a SQL statement that will retrieve the data your program requires, and don't worry too much about the statement's performance - that's mostly taken care of by Db2), while also noting ways in which a developer can effectively work in partnership with a DBA to enhance the performance of a Db2-based application. In this part 2 entry I will focus on leveraging application-enabling features of Db2 for z/OS.


First, what interface is right for a Db2-accessing application?

For a long time, this was not a very meaningful question, as there was essentially one application interface to Db2 for z/OS: the SQL interface. Sure, there could be discussions around dynamic versus static SQL (i.e., SQL statements prepared by Db2 for execution when initially issued by a program, versus statements that are pre-prepared for execution via a Db2 process known as "bind"), or about the use of "generic" (i.e., non-DBMS-specific) SQL forms such as JDBC and ODBC, but in any case you were talking about SQL, period. That changed with Db2 12 for z/OS, which introduced Db2's REST interface. Using Db2's REST interface does not involve compromising performance, security or scalability, so I'd say use it when it makes sense. When might it make sense? Here are some considerations:

  • Client-side programming language flexibility - The IBM Data Server Driver, which supports the use of SQL forms such as JDBC and ODBC with Db2 for z/OS, can be used with applications written in a number of programming languages (generally speaking, programs coded with embedded static SQL statements don't require a driver), but suppose your team wants to use a language for which the IBM Data Server Driver is not applicable? Well, does the language allow a program to issue a REST request? If the answer to that question is, "Yes" (often the case), programs written in that language can access Db2 via its REST interface.
  • Total abstraction of the particulars of the service-providing system - If you're using a generic SQL form such as JDBC or ODBC, you don't have to know the specifics of the relational database management system being accessed, but you still know that your program is accessing a relational DBMS (or something that through virtualization software appears to be a relational DBMS). Maybe you don't want to have to know that (even if you have strong SQL skills) - you just want to request some service and have it performed as expected by some system, and you don't care a whit about what that system is and how it does what it does. In that case, the REST interface to Db2 looks really good.
  • Separation of programming duties - When your client-side program accesses Db2 using REST requests, your program isn't issuing SQL statements - you're instead coding REST requests that invoke server-side SQL statements that were likely written by someone else. That separation of programming duties - client-side developers code programs that issue REST requests and process any returned results, and server-side developers write the Db2 SQL statements and/or stored procedures that will be REST-invoked - might suit you (and your organization's IT leadership) just fine.
So, think this over for a new application that will access Db2 for z/OS, and make the appropriate choice.


Let Db2 for z/OS do work so you don't have to

There are a number of Db2 features that can provide useful functionality for an application, and when you leverage one of these features, that's functionality you don't have to provide via program code. Here are some of the features in this category, delivered through recent versions of Db2 for z/OS (a brief SQL sketch illustrating a few of them follows the list):
  • Temporal data support (introduced with Db2 10 for z/OS) - This capability, through which a time dimension can be added to data in a table, comes in two forms (both can be implemented for a given table, if desired, or one or the other can be used):
    • System-time temporal (also known as row versioning) - The way this works: when a row in table T1 is deleted or updated, the "before" image of that row (the row as it existed prior to the update or delete operation) is automatically inserted by Db2 into the "history" table associated with T1, and Db2 also updates timestamp values in the history table row indicating when that "version" of the row first became the current version (i.e., when the row was first inserted into T1, or when it was updated to that version) and when the row stopped being the current version (i.e., when it was deleted from T1, or when it was updated to a new version). Here's what this means: using pretty simple Db2 temporal query syntax, your application can easily find out from Db2 what the row for a given entity (a bank account, an insurance policy, a customer record, whatever) looked like at a previous point in time. For example, when it comes to insurance claim adjudication, what's important is not what the policy holder's coverage is now - what's important is the coverage that was in effect when the event prompting the claim occurred. That's easy to determine with Db2 row versioning functionality. Also easy is seeing how a given entity's row in a table changed over a period of time, and who made changes to the row.
    • Business-time temporal - This form of Db2 temporal functionality lets you add future data changes to a table, with an indication of when each change will go into effect and how long it will remain in effect (if not indefinitely). For example, you could, through a business-time temporal UPDATE statement, indicate (and this will be reflected in the target table) that the price of product XYZ is going to change from $10 to $12 on May 1 of next year. Updates of this nature will not impact programs that, by default, are accessing rows that, from a business perspective, are currently in effect. Having future prices (for example) in a product table provides at least a couple of benefits: 1) it ensures that price changes will actually go into effect when scheduled, and 2) it allows financial analysts to issue queries showing projected revenue and profit based on the prices that will be in effect at a future date.
  • Db2 transparent archiving (introduced with Db2 11 for z/OS) - This feature can be helpful, especially for performance, in this scenario: table T1 has 20 years (for example) of historical transaction data (maybe related to account activity), but the vast majority of accesses to the table target rows that have been added in the past 90 days; further, because the table's row-clustering key is not continuously-ascending and because row inserts far outnumber row deletes, the "popular" rows in the table (the ones not more than 90 days old) are physically separated from each other by ever-larger numbers of "old and cold" rows (the ones rarely accessed by programs). In that case, performance of access to the popular rows will get progressively worse, and the cost of administering the table (e.g., backing it up, periodically reorganizing it, etc.) will steadily increase. When Db2 transparent archiving is activated for table T1 (easily done by a DBA), T1 will end up holding only the most recent 90 days of data (the popular rows), while all of the "old and cold" rows are physically stored in the "archive" table that - in an "under the covers" way - is associated with T1 (result: substantially better performance for access to the popular rows, because they are concentrated in a smaller table). For query purposes, Db2 can make the base table and its associated archive table appear logically as a single table, so a query referencing only the base table can retrieve archived rows as needed (whether archived rows are included is controlled by a Db2-provided built-in global variable, so programs interested only in recent data are unaffected). And when a row is deleted from the base table (after it's been there for - in this example - 90 days), that row can be automatically inserted by Db2 into the associated archive table (again, under the control of a built-in global variable).
  • Result set pagination (introduced with Db2 12 for z/OS) - Db2 12 made a new clause, OFFSET, available for a SELECT statement. OFFSET, used in combination with the FETCH FIRST n ROWS clause, makes it programmatically easier to return a multi-row result set in "pages" that a user can scroll through.
  • "Piece-wise" DELETE (introduced with Db2 12 for z/OS) - Starting with Db2 12, you can use the FETCH FIRST n ROWS clause with a DELETE statement, and that makes it really easy to write a program that will remove a large number of rows in a table in a series of small units of work (so that your data-purge program will not acquire too many locks at one time - nice for concurrency of access with other programs targeting the table).
  • Newer built-in functions - Db2 12 for z/OS added some nice built-in functions, including:
    • PERCENTILE_CONT and PERCENTILE_DISC - These functions (the former treats values in a column as points in a continuous distribution of values, while the latter treats column values as discrete data values) are useful for writing a query that answers a question such as, "Show me the value that is the 90th percentile with regard to salaries of employees in department A01."
    • LISTAGG - This function makes it easy to have a comma-separated list of values (e.g., last names of employees who have more than 10 years of service with the company, for a given department) as a column of a query result set.
    • HASH_MD5 - With this function, you can use a SQL statement to get an MD5 hash of a value before inserting that value into a table (and there are three related built-in functions associated with other popular hashing algorithms).
  • Application-specific lock timeout value (introduced with Db2 13 for z/OS) - Db2 13 provided the new CURRENT LOCK TIMEOUT special register, through which an application can set a lock timeout value that is different from the default lock timeout value for the Db2 system. Suppose, for example, that the system's default lock timeout value is 30 seconds. Maybe the development team for a mobile app that will drive accesses to a Db2 for z/OS database wants to make sure that a user will never have to look for 30 seconds at a spinning colored wheel on a mobile phone screen if a lock required by the app can't be readily obtained. The application team might decide (probably rightly) that it would be better to have a lock timeout value of 3 seconds for this app, have a condition handler in the Db2-accessing program for a lock timeout error, and in the event of a lock timeout (noticed in 3 seconds, versus 30 seconds) send an "Oops! Something went wrong - please try again" message to the user. I as a user would prefer that to looking at the spinning colored wheel for 30 seconds. Similarly, the development team for a long-running, mission-critical batch application might not want their job to time out unless a lock wait exceeds 10 minutes. Easily done with a SET CURRENT LOCK TIMEOUT statement.
  • SQL Data Insights (introduced with Db2 13 for z/OS) - This feature represents the first embedding of advanced machine learning technology in the Db2 for z/OS "engine." It's easy for a DBA to set up and easy for a developer (or a user utilizing a query tool) to use, because the data science work was done by IBM development teams. SQL Data Insights is made usable in the form of three new built-in Db2 functions (more will be delivered via future Db2 13 function levels): AI_SIMILARITY, AI_SEMANTIC_CLUSTER and AI_ANALOGY. These new functions allow for the asking of what I like to call "fuzzy" queries. Here's one example: suppose the fraud analysis team at an insurance company finally caught someone who had been submitting fraudulent claims (and had been very good at covering his tracks). Among the company's several million other policy holders, who else might be engaging in hard-to-detect fraudulent activity? Via the AI_SIMILARITY built-in function of Db2, you can (using standard query syntax for a built-in function) easily code a query that will do this: "Hey, Db2. Here is the ID of a policy holder. Show me the IDs of the 20 other policy holders who are most like this one." And here's the kicker: in coding that query, you don't have to tell Db2 what you mean by "like this one." Db2 will detect patterns of similarity in the data in the specified table - patterns that a human being might be challenged to discern - and return the rows with the highest "similarity scores" in relation to the policy holder ID provided as input to the function. You can turn that list over to the fraud detection team and say, "Hey, guys. Do a deep-dive analysis of activity for the policy holders associated with these 20 IDs - they are the ones most similar to the fraudster that we recently caught."
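
To make a few of these features a little more concrete, here is a minimal SQL sketch. All of the object names (POLICY, POLICY_HIST, CLAIM, EMPLOYEE and so on) are made up for illustration, the statements are simplified, and some of them have prerequisites (an activated function level, an AI-enabled table and trained model for AI_SIMILARITY, and so on) - treat this as a starting point and check the Db2 for z/OS documentation for the full syntax.

-- System-time temporal (row versioning): a base table paired with a history table
CREATE TABLE POLICY
  (POLICY_ID    INTEGER NOT NULL,
   COVERAGE_AMT DECIMAL(11,2) NOT NULL,
   SYS_START    TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
   SYS_END      TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
   TRANS_START  TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
   PERIOD SYSTEM_TIME (SYS_START, SYS_END));

CREATE TABLE POLICY_HIST
  (POLICY_ID    INTEGER NOT NULL,
   COVERAGE_AMT DECIMAL(11,2) NOT NULL,
   SYS_START    TIMESTAMP(12) NOT NULL,
   SYS_END      TIMESTAMP(12) NOT NULL,
   TRANS_START  TIMESTAMP(12));

ALTER TABLE POLICY ADD VERSIONING USE HISTORY TABLE POLICY_HIST;

-- What did this policy look like when the claim event occurred?
SELECT COVERAGE_AMT
FROM POLICY FOR SYSTEM_TIME AS OF TIMESTAMP '2022-06-15 00:00:00'
WHERE POLICY_ID = 12345;

-- Result set pagination: return "page 3" of a result set, 20 rows per page
SELECT POLICY_ID, COVERAGE_AMT
FROM POLICY
ORDER BY POLICY_ID
OFFSET 40 ROWS
FETCH FIRST 20 ROWS ONLY;

-- "Piece-wise" DELETE: purge old rows a few hundred at a time
-- (commit after each DELETE and repeat until no more rows qualify)
DELETE FROM CLAIM
WHERE CLAIM_DATE < CURRENT DATE - 7 YEARS
FETCH FIRST 500 ROWS ONLY;

-- LISTAGG: a comma-separated list of long-tenured employees per department
SELECT DEPT_ID,
       LISTAGG(LAST_NAME, ', ') WITHIN GROUP (ORDER BY LAST_NAME) AS LONG_TENURE_EMPS
FROM EMPLOYEE
WHERE YEARS_OF_SERVICE > 10
GROUP BY DEPT_ID;

-- Application-specific lock timeout: fail fast instead of waiting the system default
SET CURRENT LOCK TIMEOUT = 3;

-- SQL Data Insights: the 20 policy holders most similar to a known fraudster
SELECT POLICY_HOLDER_ID,
       AI_SIMILARITY(POLICY_HOLDER_ID, 'PH1234567') AS SIMILARITY_SCORE
FROM POLICY_HOLDER
ORDER BY SIMILARITY_SCORE DESC
FETCH FIRST 20 ROWS ONLY;
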
And there are more application-enabling Db2 features where these came from. Again, let Db2 do work that you might otherwise have to do programmatically. Not only will that save you time and effort - it's likely that Db2 will get the work done more CPU-efficiently than program code would.


Take advantage of Db2 global variable functionality

Db2 11 for z/OS introduced global variables. Unlike a traditional host variable, which you have to define in your program code, a Db2 global variable is created by a Db2 DBA. Once a global variable has been created, it can be used by an application process (a Db2 DBA just has to grant to the ID of the process the privilege to use the global variable). When an application references a Db2 global variable in a SQL statement, it gets its own instance of that global variable (in other words, if there is a Db2 global variable called GLOBVAR, and application process A puts 'cat' in the global variable and application process B puts 'dog' in the global variable, when the two processes look at the value of GLOBVAR then A will see 'cat' and B will see 'dog'). A Db2 global variable makes it really easy to get a value from a Db2 table and pass it via the global variable to a subsequent SQL statement (when the subsequent SQL statement references the global variable in, say, a query predicate, it is as though the predicate were referencing the value previously placed in the global variable).

Another Db2 global variable use case: a global variable makes it really easy to get a value back from a Db2 advanced trigger (that's a trigger whose body can include SQL procedure language logic - more on SQL PL below). The trigger just puts the value in a global variable, and when the trigger's processing is done and control is returned to the program that caused the trigger to fire, the program looks in the global variable and, voila - there's the value placed by the trigger.
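
If you want to see the basic pattern, here's a minimal sketch - the schema, variable, table and authorization ID names are all hypothetical:

-- Created once by a DBA; each application process gets its own instance of the variable
CREATE VARIABLE APPSCHEMA.REGION_CODE_GV CHAR(3) DEFAULT NULL;
GRANT READ ON VARIABLE APPSCHEMA.REGION_CODE_GV TO APPAUTHID;
GRANT WRITE ON VARIABLE APPSCHEMA.REGION_CODE_GV TO APPAUTHID;

-- An application process (or a trigger) places a value in its instance of the variable
-- (the value could just as easily come from a SELECT against a Db2 table)
SET APPSCHEMA.REGION_CODE_GV = 'SE1';

-- A subsequent SQL statement issued by the same process picks up that value
SELECT CUST_ID, CUST_NAME
FROM APPSCHEMA.CUSTOMER
WHERE REGION_CODE = APPSCHEMA.REGION_CODE_GV;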


Leverage SQL procedure language, and manage and deploy Db2 for z/OS SQL routines in an agile way

SQL procedure language (SQL PL) lets you code Db2 routines (stored procedures, user-defined functions and advanced triggers) using only SQL statements. This is do-able thanks to a class of Db2 SQL statements called control statements (a reference to logic flow control). These statements have names such as LOOP, ITERATE, WHILE and GOTO - you get the picture.

SQL PL has become really popular over the past 10-15 years (it was introduced with Db2 9 for z/OS). When it comes to Db2 data processing routines, those written in SQL PL can have both functional and performance advantages over routines written in other languages (autonomous procedures are just one example of a functional benefit of stored procedures written in SQL PL). If you decide to use SQL PL routines for an application, I encourage you to manage and deploy these routines in an agile way. In terms of SQL PL routine management, consider how associated source code will be managed. The source code of a Db2 for z/OS "native SQL procedure" (i.e., a stored procedure written in SQL PL) is the CREATE PROCEDURE statement that defines the stored procedure. How should you manage this source code? The same way you'd manage any other source code - don't get thrown off by the fact that this source code has a CREATE in it. Does your organization use an open-source source code management (SCM) tool such as GitLab? OK, fine: use GitLab to manage the source for your Db2 native SQL procedures - you wouldn't be the first to do that.

How about deployment of Db2 SQL PL routines - especially native SQL procedures? To do that in the most agile way possible, use CREATE OR REPLACE syntax when coding these routines. This is the best fit for a unified DevOps pipeline (i.e., an application deployment pipeline used for all of your organization's applications, regardless of the platform(s) on which application programs will run).
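
As a simple, entirely hypothetical illustration, here's roughly what a small native SQL procedure defined with CREATE OR REPLACE might look like (the schema, table and column names are made up); redeploying a new version through your pipeline is then just a matter of executing the updated statement:

CREATE OR REPLACE PROCEDURE APPSCHEMA.FLAG_DORMANT_ACCOUNTS
  (IN  P_CUTOFF_DATE  DATE,
   OUT P_ROWS_FLAGGED INTEGER)
  LANGUAGE SQL
BEGIN
  -- Mark accounts with no recent activity as dormant
  UPDATE APPSCHEMA.ACCOUNT
    SET STATUS = 'DORMANT'
    WHERE LAST_ACTIVITY_DATE < P_CUTOFF_DATE
      AND STATUS = 'ACTIVE';

  -- Report back how many rows the UPDATE affected
  GET DIAGNOSTICS P_ROWS_FLAGGED = ROW_COUNT;

  -- A taste of SQL PL flow control: raise a condition if nothing qualified
  IF P_ROWS_FLAGGED = 0 THEN
    SIGNAL SQLSTATE '70001' SET MESSAGE_TEXT = 'No accounts were flagged';
  END IF;
END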


The bottom line

In the minds of the IBM folks who develop Db2 for z/OS, application developers are a tremendously important constituency. There are all kinds of Db2 features and functions that were expressly designed to make life easier for application developers working on programs that will access Db2 data servers. Learn about this stuff and take advantage of it. And work with your organization's Db2 for z/OS DBAs. They can help you leverage Db2's application-enabling capabilities.

Code on!

Friday, June 30, 2023

Db2 for z/OS: What I Would Say to Application Developers (Part 1)

Not long ago, I received a request to deliver a couple of Db2 for z/OS-focused webcasts for an organization's application developers. The person who asked about this initially gave me the impression that the purpose of the webcasts would be to help developers write "efficient SQL." This did not have much appeal for me (as I'll explain below), and I communicated as much back to the requester. Subsequently, this individual rephrased the request, indicating that the aim of the webcasts would be to provide "insights for developers to increase their confidence and skills around Db2 [for z/OS] in both development and problem analysis." "OK," I thought to myself, "Now we're talking." This ask gave me an opportunity to think about what I'd like to say to people who write (or might write in the future) application programs that involve accessing Db2 for z/OS-managed data. I'll share these thoughts of mine in a two-part blog entry. In this first part I'll talk about application performance - but maybe not in the way you'd expect. In the part two entry, which I'll likely post in the next 2-3 weeks, I'll focus on application enablement from a Db2 for z/OS perspective.

OK, why does a request to talk to developers about "writing efficient SQL" generally rub me the wrong way? Two reasons: first, something I heard a few years ago. I was at a big Db2-related conference, sitting in the audience for a session delivered by the person who was at that time the leader of the optimizer team in the IBM Db2 for z/OS development organization (Db2's optimizer parses a query and generates for that query the access plan that it estimates will produce the requested result set at the lowest cost and with the best performance). The presenter said (in words to this effect, and with emphasis added by me), "As the leader of the Db2 for z/OS optimizer team, my message for application developers is this: job one for you is to write the query that will retrieve the data that your program needs. If that query could be written differently so as to retrieve the same result faster, we'll take care of that." What he was specifically referring to is the Db2 optimizer's ability to re-write a query under the covers so that the same result will be generated faster (more on that re-write capability momentarily). That statement by the optimizer team leader made a huge impression on me, and I think his words were absolutely spot-on.

I feel that it's very important for an application developer, when writing SQL targeting a Db2 for z/OS database, to focus on a query's objective, versus its form. Why? For one thing, job one really is to get the right data. If a query returns incorrect or incomplete data to a program, who cares if the query runs quickly? A bad result that is returned in a short time is still a bad result. Nothing is more important than retrieving the data that a program requires. Secondly, I believe it's very important for a developer writing Db2-targeting SQL to not have to think about the fact that the target DBMS is Db2 for z/OS. All you as an application developer should really have to think about is that the target DBMS is relational in nature. If you have to stop and think, "Oh, let's see - the data I'm going after is in a Db2 for z/OS database. That means I have to do X, Y and Z in order to get good performance," that's going to negatively impact your productivity, assuming that you're also called on to write SQL that targets other relational DBMSs. As far as I'm concerned, when Db2 for z/OS is the target DBMS you should just think, "relational DBMS," and go from there.

Here's another reason that a request to "tell developers how to write efficient Db2 for z/OS SQL" raises my hackles: too many Db2 for z/OS DBAs, in my opinion, just assume that the average application developer writes inefficient SQL. It's kind of like complaining about the food at college just because it's college food, regardless of whether or not it's actually tasty. That's not a helpful attitude. I've advised Db2 for z/OS DBAs that they should think of themselves as partners with developers when it comes to getting new applications and new application functionality into production. Similarly, I would advise developers to be partners with Db2 for z/OS DBAs when it comes to analyzing and addressing performance issues related to Db2 for z/OS-accessing applications.

How can a developer be a partner when it comes to taking action to resolve performance issues related to Db2 for z/OS-targeted queries? Some thoughts on that matter:

  • Learn some of the lingo. Sometimes, Db2 for z/OS DBAs will say things like, "This SQL statement isn't performing well because it has this stage 2 predicate." Huh? OK, here's what that means: predicates (the result set row-qualifying parts of a query, such as WHERE ACCOUNT_NUM = 1234) in Db2 for z/OS SQL statements can be either stage 1 or stage 2 in nature. These terms refer, respectively, to two components of Db2 for z/OS: the data manager (stage 1) and the relational data system (stage 2). A stage 1 predicate can be evaluated by the Db2 data manager, while a stage 2 predicate has to be processed by the Db2 relational data system. Stage 2 predicates generally require more CPU time for processing than stage 1 predicates; furthermore, stage 1 predicates are usually index-able, while stage 2 predicates are almost never index-able. A non-index-able predicate can result in a requirement for a table space scan when the query is processed, and that could really slow down query execution, especially when the table in question is really large. On the other hand, a stage 2 predicate in a query might not be a big deal if the query has another predicate or predicates that are highly filtering (i.e., that are evaluated as "true" for only a small number of a table's rows) and index-able.

If you're interested, you can read about stage 1 and stage 2 and index-able and non-index-able predicates on this page of the Db2 for z/OS online documentation. Do you need to sweat a lot about stage 1 versus stage 2 predicates when writing SQL statements for an application? I'd say, not necessarily. Remember that job one is to write a query that returns the data that your program needs. On top of that, Db2 for z/OS, especially over the most recent several versions, has gained more and more query re-write capabilities (as I mentioned previously). Suppose, for example, that you need to get from a Db2 for z/OS table all customers whose accounts were opened in 2010, and you write a predicate like this one to get those rows: 

WHERE YEAR(DATE_OPENED) = 2010

That predicate is stage 2 and non-index-able; however, Db2 for z/OS, in preparing your query for execution, can automatically re-write that predicate in this form, which is stage 1 and index-able:

WHERE DATE_OPENED BETWEEN '2010-01-01' AND '2010-12-31'

  • Know something about what's possible for improving a query's performance. If a query processed by Db2 for z/OS is not performing as it needs to, re-writing the query in some way is one possible solution, but there may be other performance-boosting actions that could be taken instead. One possible solution could be a Db2 for z/OS index created on an expression - something do-able since Db2 Version 9 (as of the date of this blog post, the current Db2 for z/OS version is 13). Suppose, for example, that your program needs rows selected from a Db2 for z/OS table based on an upper-case comparison of values in column COL1 with a provided character string constant. Your query might have a predicate that looks like this:

WHERE UPPER(COL1, 'EN_US') = 'ABCDE'

That predicate is stage 2 and non-index-able; however, it will become index-able if an index is defined on that expression, as shown below (assume that COL1 is a column of table T1):

CREATE INDEX UPPER_VAL ON T1
(UPPER(COL1, 'EN_US'))

Another possible query tuning action is to provide the Db2 optimizer with the catalog statistics that it needs to choose a better-performing access plan for the query. In fact, colleagues of mine who are part of the team in IBM Support that works on cases (i.e., problem tickets) related to Db2 for z/OS query performance have said that the large majority of query performance issues on which they work are ultimately resolved in this manner. Here's the deal: the primary input to Db2 for z/OS access path selection is statistics about objects related to the query - e.g., tables, and indexes on those tables, and columns in tables - that are periodically collected (often by a Db2 for z/OS utility called RUNSTATS) and stored in the Db2 catalog. The richer and more complete the catalog statistics are, the better the Db2 optimizer can do in generating a well-performing access plan for a query. The tricky part is this: what statistics should be gathered for tables, columns, indexes, etc. to enable the optimizer to choose a well-performing access path for a particular query? Would histogram statistics on a given column help? How about frequent-value percentage information for a column? How about correlation statistics for a certain pair of table columns? Telling RUNSTATS to generate every possible statistic on everything would make that utility too costly to execute, so the utility is often executed with a specification that generates what you might call a good "base" of statistics (TABLE(ALL) INDEX(ALL) is typically the specification used for this purpose). How do you know when the optimizer needs additional statistics - and which additional statistics - in order to generate a well-performing access plan for a query that is currently performing poorly? Fortunately, starting with Db2 12 for z/OS the optimizer answers this question for you in the form of statistics profiles, as described in an entry I posted to this blog a few years ago. I'd say, if a query you wrote is not performing as it needs to then before trying to re-write the query or asking a DBA to add or alter an index to address the situation, see if Db2 has inserted a statistics profile in the SYSTABLES_PROFILES catalog table for one or more of the tables accessed by your query (as described in the aforementioned blog entry). If there is such a statistics profile or profiles, work with a DBA to get RUNSTATS executed using the profile(s) and then let Db2 re-optimize the query using the statistics added to the catalog by that RUNSTATS job (for a so-called static query, re-optimization would be accomplished via a REBIND of the associated Db2 package; for a query that is dynamic in the Db2 sense of that word, re-optimization is triggered by invalidating the previous prepared form of the query in Db2's dynamic statement cache). In plenty of cases this will resolve a query performance issue.
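
If you want to check for such a profile yourself, a query along these lines will show the RUNSTATS options that Db2 has recommended for a given table (the schema and table names below are made up, and it's worth verifying the exact column names against the catalog documentation for your Db2 version):

SELECT PROFILE_TEXT
FROM SYSIBM.SYSTABLES_PROFILES
WHERE SCHEMA = 'PRODSCHEMA'
  AND TBNAME = 'ACCOUNT';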

  • Know something about EXPLAIN. EXPLAIN is a Db2 SQL statement (also an option of the BIND and REBIND PACKAGE commands, for static SQL statements) through which you can get information about the access path selected by the optimizer for a query (you can read about the EXPLAIN statement in the online Db2 for z/OS documentation). EXPLAIN-generated access path information, in its traditional form, is written to the EXPLAIN tables (these are Db2 tables, as you might expect). The most important of these tables is the one called PLAN_TABLE. Information in this table shows, among other things, the order in which the parts of a query are executed (for example, the order of table access when a statement involves a multi-table join), how data in a table is accessed (e.g., through an index and, if so, which index), the number of columns in an index key that are a match for columns referenced in a query predicate (MATCHCOLS - a higher number is generally a good thing), and the type of join method used when tables are joined (e.g., nested loop or merge scan). If you know something about the information in PLAN_TABLE, you'll be better equipped to partner with a Db2 for z/OS DBA to see how execution of a query that is not performing as desired could potentially be sped up.
A lot of veteran Db2 for z/OS DBAs are very familiar with EXPLAIN information in its PLAN_TABLE form. If you're interested in viewing EXPLAIN information in a different form, consider using the Visual Explain feature of the IBM Db2 for z/OS Developer Extension for Visual Studio Code (a no-charge IBM software tool designed to facilitate development of applications that access Db2 for z/OS data). Visual Explain (as the feature's name implies) provides a visual representation of the access path selected by the Db2 optimizer for a query; and, it's not just pretty pictures - hovering over or clicking on a part of the displayed access path provides very useful related information. Among other things, you can see the number of result set rows that Db2 thinks there will be following execution of that part of the access plan. For a query that is not performing as desired, you might see such information and think, "That's not right. After accessing that table the result set should have way more (or way fewer) rows than indicated by this estimate." The implication here is that you know something about the data that Db2 doesn't know (thus the Db2 optimizer's off-the-mark estimate concerning refinement of the result set as the query's access plan is processed). That, in turn, could suggest that catalog statistics might need to be augmented to provide Db2 with a clearer view of the characteristics of data in a target table (as mentioned in the reference, above, to Db2's SYSTABLES_PROFILES catalog table); or, that might prompt you to consider adding or modifying a query predicate to provide Db2 with a different form of the data request - one that might generate the required result set more quickly.
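
If you want to experiment with EXPLAIN yourself, the basic pattern looks something like the following (the query, the QUERYNO value and the object names are made up, and PLAN_TABLE has many more columns than the handful selected here):

EXPLAIN PLAN SET QUERYNO = 100 FOR
SELECT CUST_ID, CUST_NAME
FROM PRODSCHEMA.CUSTOMER
WHERE REGION_CODE = 'SE1';

SELECT QBLOCKNO, PLANNO, TNAME, ACCESSTYPE, ACCESSNAME, MATCHCOLS, METHOD, PREFETCH
FROM PLAN_TABLE
WHERE QUERYNO = 100
ORDER BY QBLOCKNO, PLANNO;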

The bottom line is this: while your primary focus in writing Db2 for z/OS SQL statements should be on retrieving the data your program requires (and I'm focusing on queries because INSERT, UPDATE and DELETE operations are usually more straightforward from a performance perspective), there could be a situation in which a query you coded needs to execute with greater speed and efficiency. Query performance tuning is something to which many Db2 for z/OS DBAs are accustomed, but success in that endeavor can be accelerated and made more likely when developers and DBAs work on this as a team. Don't worry about knowing as much about Db2 for z/OS as your DBA teammate - that's not your job; but, realize that your understanding of your application's data requirements, and of the data the application is accessing in Db2, can be a big help when it comes to tuning a query's performance. It's definitely a case in which 1 (your specialized knowledge as a developer) plus 1 (the DBA's specialized knowledge of Db2 for z/OS) is greater than 2.

In part 2 of this blog entry I'll have some things to say about application enablement in a Db2 for z/OS context - that is, about ways that you can leverage Db2 functionality to get more feature-rich applications developed more quickly.