Saturday, January 31, 2015

DB2 for z/OS: Native SQL Procedures and STAY RESIDENT

A couple of days ago, I received from a DB2 for z/OS professional a question about stored procedures. This individual asked specifically about the STAY RESIDENT option of CREATE PROCEDURE (and ALTER PROCEDURE), and its effect on a native SQL procedure (i.e., a stored procedure that is written in SQL procedure language -- aka SQL PL -- and which runs in the DB2 database services address space). Less than an hour after the e-mail containing the query hit my in-box, another message from the same DB2 person arrived. The lead word in this second note was "Disregard," and that was followed by an explanation of his discovery that STAY RESIDENT is not applicable to a native SQL procedure.

So, this question-asker had quickly found that STAY RESIDENT is only relevant to an external DB2 stored procedure (referring to a stored procedure that is written in a language such as COBOL and which runs in a DB2 stored procedure address space); still, I think that the question remains interesting because it leads to another: if STAY RESIDENT YES or NO cannot be specified on a CREATE or ALTER PROCEDURE statement tied to a native SQL procedure, what happens to the executable after the native SQL procedure has been called and then runs to completion?

In thinking about this question, consider first the nature of the executable. For an external stored procedure, the associated executable is a load module. When the external stored procedure is first invoked, its load module will be copied from a load library on disk into a stored procedure address space in the z/OS LPAR in which the DB2 subsystem is running. After the stored procedure has run to completion, its load module will remain in memory in the stored procedure address space if the stored procedure was created with (or subsequently altered to have) the STAY RESIDENT YES specification; otherwise, the load module will be deleted from memory after the stored procedure has finished executing. For an external stored procedure, therefore, STAY RESIDENT YES is a performance tuning option that can be appropriately utilized for external stored procedure programs that are compiled and linked as reentrant and reusable (if a stored procedure program is not reentrant and reusable and is defined -- properly, in that case -- with STAY RESIDENT NO, module load time can be reduced by loading from the z/OS Virtual Lookaside Facility, aka VLF).
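
To make the distinction concrete, here's a sketch of what an external stored procedure definition might look like (the procedure, parameter, load module, and WLM environment names are made up for illustration). Note the STAY RESIDENT clause, which pertains only to this kind of stored procedure:

  CREATE PROCEDURE PRODSCHM.UPD_ACCT
    (IN ACCT_NO CHAR(10))
    LANGUAGE COBOL
    EXTERNAL NAME UPDACCT
    PARAMETER STYLE GENERAL
    WLM ENVIRONMENT WLMENV1
    PROGRAM TYPE MAIN
    STAY RESIDENT YES;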

If we're talking about a native SQL procedure, that stored procedure's one and only executable is its DB2 package. To put it a slightly different way, for a native SQL procedure the package is the executable (as opposed to the external stored procedure situation, in which the package is only part of the executable story -- the other part being the stored procedure's load module). What can we say about the "stay resident" characteristic of a native SQL procedure's executable (its package)? The same thing we'd say about ANY DB2 package: it depends on the size of the DB2 skeleton pool, the nature of the thread through which the native SQL procedure is invoked, and the RELEASE specification of the SQL procedure's package. I'll expand on that statement in the remainder of this entry.
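
For contrast, here's a sketch of a minimal native SQL procedure (names are, again, purely illustrative): LANGUAGE SQL, a SQL PL body instead of an EXTERNAL NAME reference to a load module, and no STAY RESIDENT clause anywhere in the syntax:

  CREATE PROCEDURE PRODSCHM.UPD_BALANCE
    (IN ACCT_NO CHAR(10), IN NEW_BAL DECIMAL(11,2))
    LANGUAGE SQL
  BEGIN
    UPDATE ACCOUNTS
      SET BALANCE = NEW_BAL
      WHERE ACCOUNT_NUM = ACCT_NO;
  END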


When a native SQL procedure is first called (say, following the most recent START of a DB2 subsystem), its package will be loaded from the DB2 directory table space called SPT01 (often referred to as the skeleton package table) into a part of the DB2 EDM pool called the skeleton pool. The EDM pool is in the DB2 database services address space (aka DBM1). The size of the skeleton pool within the EDM pool is determined by the value of the EDM_SKELETON_POOL parameter in the DB2 subsystem's ZPARM module.

After the package has been loaded into the skeleton pool, it is copied from there into another part of the DBM1 address space, called an agent local pool, that is associated with the thread through which the native SQL procedure's CALL got to DB2. That memory-to-memory copy operation is part of the process of allocating the package to the thread.

When the native SQL procedure is done executing, the copy of the procedure's package in the thread's agent local pool in DBM1 will remain there, allocated to the thread, if the package was bound with RELEASE(DEALLOCATE) and if the thread persists through the commit that marks the end of a DB2 unit of work. Some thread types persist through commits, and some do not. Thread types that are persistent include CICS-DB2 protected entry threads, threads belonging to IMS pseudo-WFI regions, high-performance DBATs, and batch job threads (a DB2-accessing batch program could issue many commits, but its thread will persist until the job has finished executing).

If the native SQL procedure's package was bound (or rebound) with RELEASE(COMMIT), or if the thread through which the CALL came is non-persistent, the copy of the package in the agent local pool will be deleted. Does that mean that the package is no longer resident in memory? No. The "master" copy of the package remains in the skeleton pool section of the DB2 EDM pool, and it will again be copied from there to a thread's agent local pool as needed for execution via a thread.

So, in essence a native SQL procedure's executable will "stay resident" in memory by default. For optimal performance and CPU efficiency, the package can be bound with RELEASE(DEALLOCATE) and executed by way of a persistent thread -- that keeps the procedure's package not only in memory, but in the thread's own agent local pool.
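
If you want to go that route, one way to change the RELEASE specification of a native SQL procedure's package is a REBIND PACKAGE, sketched below with a made-up collection name (for a native SQL procedure, the package name is the same as the procedure name):

  REBIND PACKAGE(MYCOLL.UPD_BALANCE) RELEASE(DEALLOCATE)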

How long will a native SQL procedure's package remain in the skeleton pool portion of the EDM pool, once it's been loaded there from the skeleton package table in the DB2 directory? A long time, if the skeleton pool is adequately sized. See, once a package is in the skeleton pool it will stay there (assuming that it's not invalidated through, for example, an ALTER of an object on which the package depends) until the DB2 subsystem is stopped (as it might be, on occasion, for application of software maintenance) or until DB2 steals the space in the skeleton pool occupied by the package in order to make room for another package that has to be brought in from SPT01. That space steal will only happen, of course, if the skeleton pool becomes full, and even then only space occupied by inactive packages is subject to stealing. Your DB2 monitor, via a statistics long report or an online display of EDM pool activity, can provide you with information that can help you to determine whether or not the skeleton pool is appropriately sized. In particular, check to make sure that you aren't getting any program failures caused by a full skeleton pool (such failures would indicate that the pool is undersized, and probably significantly so). Also check the number of free pages in the pool (I like to see a number of free pages that is at least 10% of the total number of pages in the skeleton pool), and the skeleton pool hit ratio (the percentage of package allocation requests that were satisfied out of the skeleton pool, as opposed to requiring a package load from the skeleton package table in the DB2 directory -- I like to see a hit ratio of 99% or more for a production DB2 subsystem).
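
As a purely illustrative example of that hit ratio calculation (the figures are made up): if a statistics report showed 2,000,000 requests for package sections during a reporting interval, and 4,000 of those requests required a load from SPT01, the skeleton pool hit ratio would be

  (2,000,000 - 4,000) / 2,000,000 = 99.8%

which would be comfortably above my 99% guideline.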

There you have it. As mentioned, what's true for a native SQL procedure's package, memory-residency-wise, is true for packages in general. What's different about a native SQL procedure is the fact that there is no executable beyond the package; so, if that package stays resident in memory, the native SQL procedure stays resident in memory.

Monday, January 26, 2015

A Java Program, a COBOL Program, and a DB2 Update - An Interesting Case Study

Not long ago, an IBM colleague of mine who is a WebSphere Application Server (WAS) for z/OS specialist sent to me and to a few other IBMers (including some of our CICS and DB2 for z/OS developers) some information about an application scenario, and asked for our ideas and feedback. The scenario involved a Java program that accessed a local DB2 for z/OS subsystem (i.e., a subsystem in the same z/OS LPAR as the WAS instance) and selected a row to be updated, and then invoked a COBOL program (also in the same z/OS LPAR) that performed the actual row update operation. The IBM WAS for z/OS specialist had some particular questions about DB2 locking in this situation, and was looking for an approach that would be favorable from a concurrency perspective. Everyone chimed in with their thoughts on the matter, and in this blog entry I'll synthesize and summarize that information. I think it makes for an interesting case study.

The client with which my WAS for z/OS specialist colleague (named Edward) was working had an application requirement whereby, as I mentioned, a Java program was to select a DB2 row for update and then was to invoke a COBOL program which would perform the prescribed update operation. The COBOL program existed already in the form of a CICS transaction, and as CICS has advanced capabilities regarding interaction with Java applications, Edward wanted to assess the options available for invoking the COBOL program in its present, CICS transactional form. Edward's chief concerns were locking and data integrity (locking, of course, is a mechanism for preserving data integrity). Edward was thinking that he wanted the Java program to acquire and retain a lock on the DB2 row to be updated, so as to prevent some other program from altering the record between its selection by the Java program and its subsequent update via the Java-invoked CICS transaction; he was concerned, however, that the Java-acquired lock on the row would end up blocking the update action of the called CICS transaction program.

The impact of the concurrency issue raised by Edward hinged on whether or not the selecting Java program and the updating CICS program would be seen by DB2 for z/OS as being associated with the same "global" transaction: if "no" then a lock (row- or page-level, depending on the target table space's LOCKSIZE attribute) acquired by the Java program would indeed block the update attempt by the invoked CICS transaction; if "yes" then a Java-acquired lock would be owned by the global transaction and would therefore be inherited by the CICS program. In that latter case, then, even an X-lock acquired on the row (or the row's page) by the Java program would not impede the CICS program's subsequent update operation. [Sometimes, the choice is indeed to X-lock a selected row, to prevent it from even being read by another program prior to its "downstream" modification by a called process -- such a "hands-off" lock is sometimes acquired by way of an UPDATE statement that sets the value of a column in a row equal to its current value, thereby X-locking the row without actually changing its contents.]
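
By way of illustration, that kind of "hands-off" lock could be acquired by the Java program with a statement along these lines (the table and column names are hypothetical, and the "?" is a JDBC parameter marker):

  UPDATE ACCOUNTS
    SET ACCT_STATUS = ACCT_STATUS
    WHERE ACCOUNT_NUM = ?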

To this question our CICS and DB2 for z/OS developers responded with some enlightening information. First, if the row-selecting Java program were to run in WAS for z/OS and then link into CICS to invoke the row-updating program, the two processes (row-select and row-update) would be seen by DB2 as two different units of work, and the result would be a lock conflict. This conflict could be avoided by having the Java program execute in the CICS Liberty environment (the Liberty Profile is a lightweight Java application development and runtime environment, one form of which is available within CICS); however, CICS Liberty itself is only one part of the solution -- there is also the matter of the JDBC driver used to access DB2 for z/OS from the Java program. If the type 4 JDBC driver is utilized, the global nature of the overall transaction will not be recognized by DB2, and the lock conflict challenge will remain. If the type 2 JDBC driver is used (and that is an option when the target DB2 subsystem is in the same z/OS LPAR as the CICS region), and if the link to the COBOL program is local (i.e., not a distributed program link to a program in another CICS region), DB2 access from the Java program running in the CICS Liberty environment and from the COBOL program will be associated with the same DB2 thread; therefore, any lock acquired on DB2 data by way of the row-selecting Java program will belong as well to the row-updating COBOL program.

So, a CICS-based solution satisfying the application requirement was definitely available to the organization with which Edward was working. As an alternative approach -- a plan B, if you will -- I suggested to Edward the following: have the Java program invoke the row-updating COBOL program as a DB2 stored procedure. In that case, the same DB2 thread should be used for the Java-based SELECT and the COBOL-based UPDATE. Going the stored procedure route could in some cases be quite straightforward: generally speaking, if a COBOL program that executes in a CICS region issues SQL statements but no CICS commands, it can be invoked in the form of a stored procedure with little, if any, modification.
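
In that scenario, once the COBOL program had been defined to DB2 as a stored procedure, the Java program would simply issue an SQL CALL (the procedure name below is made up, and the "?" characters are parameter markers), and the Java-based SELECT and the stored procedure's UPDATE would flow through the same thread:

  CALL PRODSCHM.UPD_ACCT_BAL(?, ?)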

Yet another approach would not depend on the Java and COBOL programs sharing locks. Suppose that the Java program did NOT acquire and retain a lock on the target DB2 row prior to invoking the updating COBOL program (assuming that the typical cursor stability isolation level is in effect, an S lock acquired on a target row will be released following completion of a SELECT statement). If a lock on the target row is not acquired and retained by the Java program, how can one avoid the situation in which some other program comes in and changes the row before the Java-invoked COBOL program gets to it, potentially leading to an undesired "double update" of the target row? The answer, in a nutshell, is that you don't avoid that situation. Instead, you expect that it won't happen, and you detect it if it does. This is what so-called optimistic locking is about.

Optimistic locking was introduced with DB2 9 for z/OS, and there is a good description of this feature in section 7.3 of the IBM "redbook" titled DB2 9 for z/OS Technical Overview. Basically, it works like this: you alter the target table to add what is called a row change timestamp. That timestamp value is maintained by DB2 and it does what you'd expect: it records the time at which a row is changed. With that done (and continuing with the application requirement of which I've written), the Java program would select a row (without retaining a lock on the data) and then invoke the COBOL program to update the row. In doing that, the Java program would pass to the COBOL program the value of the row's last-changed timestamp. The COBOL program would then issue an UPDATE that would search for the target row using the appropriate qualifying criteria (e.g., an account number) AND the value of the row-change timestamp retrieved by the Java program's SELECT. If the target row has already been changed by an intervening statement (i.e., a data-changing SQL statement that gets to the row between the Java SELECT and the COBOL UPDATE), the row-change timestamp value will have changed (again, DB2 maintains that value), and the COBOL UPDATE won't find the target row -- that's how you avoid the aforementioned "double-update" scenario. If you want, you could have the COBOL program in that situation select the target row using its unique key but NOT with a predicate referencing the row-change timestamp -- that would show you that the row is there (assuming the intervening data-change statement wasn't a DELETE), but that it was updated between the Java SELECT and the COBOL UPDATE. The extra SELECT to check on this occurrence might strike you as being a little expensive, but keep in mind that it's something your program will rarely have to do -- the term "optimistic locking" suggests an expectation that the target row will NOT be changed between the Java SELECT and the COBOL UPDATE. Something that's a little expensive but seldom done is usually not a big deal.
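
Here's a sketch of what that could look like in SQL terms, with hypothetical table, column, and host-variable names: the row change timestamp column is added to the table, the Java program's SELECT retrieves that column along with the data of interest, and the COBOL program's UPDATE references the retrieved timestamp value in its WHERE clause:

  ALTER TABLE ACCOUNTS
    ADD COLUMN LAST_CHANGED TIMESTAMP NOT NULL
      GENERATED ALWAYS FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP;

  -- issued by the Java program ("?" is a parameter marker)
  SELECT ACCT_BALANCE, LAST_CHANGED
    FROM ACCOUNTS
    WHERE ACCOUNT_NUM = ?

  -- issued by the COBOL program; no row qualifies if the timestamp has changed
  UPDATE ACCOUNTS
    SET ACCT_BALANCE = :NEW-BALANCE
    WHERE ACCOUNT_NUM = :ACCT-NUM
      AND LAST_CHANGED = :RETRIEVED-TS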

There you have it: at least three ways to address the stated application requirement:
  • Run the Java program in the CICS Liberty environment, use the type 2 JDBC driver, and invoke the COBOL program (in its CICS transaction form) via a local link, or
  • Invoke the COBOL program as a DB2 stored procedure, or
  • Use the optimistic locking approach.

The first two of these options involve locking the target row via the selecting Java program (as mentioned, that could be done by updating the row to set a column equal to itself), and then invoking the updating COBOL program, which will not be blocked by the Java-acquired lock because it will share ownership of that lock. The third approach does not depend on such shared lock ownership.

Faced with a similar application requirement, the approach you should take is the one that best suits your needs and your environment. Always nice to have options, I say.