Monday, September 30, 2019

Db2 12 for z/OS - Let's Talk About MAXDBAT in ZPARM

"ZPARMs" is a reference to the parameters in the Db2 for z/OS module called DSNZPARM - these are essentially the configuration parameters for a Db2 subsystem. Two of the ZPARMs that are closely related to each other are CONDBAT and MAXDBAT. CONDBAT specifies the maximum number of connections that network-attached applications (i.e., applications that access the Db2 system via the Db2 distributed data facility, aka DDF) can have with the Db2 system at any one time. MAXDBAT is, essentially, the number of those application connections that can be "in-use" at one time. ["In-use" has to do with the processing of transactions that are originated by the connected applications - for such a transaction to be processed by Db2, the associated connection has to be paired with a type of Db2 "thread" called a database access thread, or DBAT (think of a DBAT as a DDF thread).]

As a rule, you don't want a Db2 system's CONDBAT limit to be reached. Why? Because when that happens, the next connection request from a DDF-using application will fail with an error code. How can you tell if a Db2 system's CONDBAT limit has been reached (aside from noting that an application encountered a connection failure)? You can see that in a statistics detail report generated by your Db2 monitor (depending on the monitor, that might be called a statistics long report). In that report, you'd see a section with DDF-related information that includes a field labeled CONN REJECTED-MAX CONNECTED (or something similar), as shown in the snippet below:

GLOBAL DDF ACTIVITY          QUANTITY
---------------------------  --------
DBAT/CONN QUEUED-MAX ACTIVE      0.00
CONN REJECTED-MAX CONNECTED      0.00

If the CONN REJECTED-MAX CONNECTED number is non-zero, the CONDBAT limit for the Db2 system was hit during the time interval captured in the report. If I saw that CONDBAT was hit on my Db2 system, I'd increase the CONDBAT value. I wouldn't make the CONDBAT value way higher than it needs to be (it can be as high as 150,000 for a Db2 subsystem), but neither would I be stingy in this department: a connection that is not "in use" is placed in an inactive status by Db2, an inactive connection has a very small virtual storage footprint, and snapping a connection back to active status when needed is a very low-overhead operation.

Simple enough, but what about MAXDBAT? Do you also want that limit to not be reached? The answer to this question is a little less straightforward than it is for CONDBAT. I'd say that in most cases you'd want the MAXDBAT value to be high enough so as not to be reached, but that's not necessarily so in all cases. First, how can you see whether MAXDBAT has been reached for a Db2 system? One way would be to check the aforementioned statistics detail (or statistics long) report generated by your Db2 monitor - the relevant field is the one labeled DBAT/CONN QUEUED-MAX ACTIVE in the snippet below:

GLOBAL DDF ACTIVITY          QUANTITY
---------------------------  --------
DBAT/CONN QUEUED-MAX ACTIVE      0.00
CONN REJECTED-MAX CONNECTED      0.00

A non-zero value in the DBAT/CONN QUEUED-MAX ACTIVE field indicates that the MAXDBAT limit was reached for the Db2 subsystem during the time interval captured in the report. Another way to check on this would be to issue the Db2 command -DISPLAY DDF DETAIL. In the output of that command you'd see this field and an accompanying value:

QUEDBAT=      0

The QUEDBAT value indicates the cumulative number of times that the MAXDBAT limit was hit for the Db2 subsystem since DDF was last started (which probably would be when the Db2 subsystem was last "bounced," or stopped and restarted).
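
For orientation, the QUEDBAT field appears among the DSNL09xI lines of the -DISPLAY DDF DETAIL output, near the CONDBAT and MAXDBAT values themselves. A trimmed sketch is shown below - the values are hypothetical, and the exact message layout can vary by Db2 release and maintenance level:

-DISPLAY DDF DETAIL
...
DSNL090I  DT=A  CONDBAT=    10000 MDBAT=   500
DSNL092I  ADBAT=    105 QUEDBAT=      0 INADBAT=      0 CONQUED=      0
...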

What happens when MAXDBAT is hit? In that case, the request for a DBAT (required in order to service a transaction request coming via an application connection) is queued. No error, at least not right away, but if the request waits too long then the application server might time the transaction out, and we likely don't want that; so, you don't want MAXDBAT to be hit, right? If it is hit, you want to increase the MAXDBAT value, right? Probably right, but not always. Here's one of the "not always" scenarios: suppose you have a DDF workload characterized by the occasional surge of transaction volume (such as might occur during a certain part of a month). You could make the MAXDBAT value large enough to accommodate that surge, and that would be OK if the Db2 system's processing capacity is sufficient to effectively process the surge of transactions. What if that is not the case? What if the surge of transactions, if allowed to flow right into the Db2 system, would max the system out, taking the utilization of the Z server's general-purpose "engines" (processors) to something close to 100%? The z/OS system won't fail (z/OS is famous for staying up in extreme processing situations), but work could get really backed up, so much so that response times could soar, leading to performance complaints from application users (and maybe to monetary penalties if a service level agreement is violated).

If you have a situation in which a DDF transaction surge overwhelms a server's processing capacity, you might be better off with a MAXDBAT value that induces some transaction queueing during surges. If, for example, the number of concurrently-executing DDF transactions on a system is almost always below 2000, and surges above that have severely impacted response times, a MAXDBAT value of 2000 could be beneficial. Yes, if a surge comes along and MAXDBAT is hit then transactions will start queueing up, waiting for a DBAT to come free (when an in-process transaction completes), but the system, shielded from the negative impact of transaction overload, will continue to process work quickly and efficiently. That, in turn, will cause in-use DBATs to free up quickly, and that could mean that the time a DBAT-awaiting transaction spends in the queue will be very small. While there would be some response time elongation due to some transactions having to queue up for a DBAT, the performance impact could be smaller than in the "let 'em all in when they arrive" situation.

Bottom line: if a Z server has the processing capacity to efficiently handle "peak of the peak" DDF transaction volumes, make the MAXDBAT value high enough to avoid transaction wait-for-DBAT queueing. If, on the other hand, the occasional really-big DDF transaction surge causes the system to get severely bogged down so that transaction service times shoot way beyond the acceptable level, you could actually improve performance and throughput via a MAXDBAT value that maintains efficient processing by forcing a degree of transaction queueing. Note, too, that if you want to induce surge-time queuing only for transactions associated with a particular DDF-using application or applications, you can accomplish that via the Db2 profile tables (SYSIBM.DSN_PROFILE_TABLE and SYSIBM.DSN_PROFILE_ATTRIBUTES), which enable the setting up of DBAT limits (and/or connection limits and/or idle thread timeout values) in a granular fashion (as described in an entry I posted to this blog a couple of years ago).
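
If you want a feel for what such a granular DBAT limit might look like, here is a minimal sketch, assuming a hypothetical profile ID of 101, an application that connects with auth ID APP1ID, and a limit of 200 DBATs:

INSERT INTO SYSIBM.DSN_PROFILE_TABLE
  (PROFILEID, AUTHID, PROFILE_ENABLED)
VALUES (101, 'APP1ID', 'Y');

INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
  (PROFILEID, KEYWORDS, ATTRIBUTE1, ATTRIBUTE2)
VALUES (101, 'MONITOR THREADS', 'EXCEPTION', 200);

The profile takes effect when a -START PROFILE command is issued. With ATTRIBUTE1 set to 'EXCEPTION', DBAT requests for the identified application beyond the 200 limit are queued (and can eventually be rejected); 'WARNING' would instead just surface a console message.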

I hope that this information will be helpful for you.

Thursday, August 29, 2019

The Two Aspects of Db2 for z/OS Support for JSON Data

JSON (JavaScript Object Notation) is a form of data representation that is popular with people who develop what you might call "very modern" applications, including those that are cloud-based and those that have a mobile front-end. One reason this is so: a JSON document is easy for a program (or a person) to parse, as you'll see from examples I've included a bit further down in this blog entry.

As interest in JSON has grown, more folks have asked about the support that Db2 for z/OS provides for data represented in this form. I'm going to tell you about that support, starting with the fact that it comes in two flavors (which can be used separately or together). First, there is the JSON support that is associated with Db2's REST interface. The other flavor of Db2 JSON support has to do with storing JSON documents, as such, in a column of a Db2 table. I'll cover these two JSON-related Db2 capabilities in the order of my having mentioned them.


JSON and the REST interface to Db2 for z/OS

Through Db2's REST interface, you can enable execution of a static SQL statement via a REST request (for those not familiar with Db2 "static" SQL: the term refers to a statement that has been pre-coded and pre-prepared for execution - it's basically the compiled, executable form of a SQL statement). That SQL statement could be a SELECT, INSERT, UPDATE, DELETE or TRUNCATE, or a CALL to a stored procedure (the called stored procedure could then issue any number of different SQL statements). Let's consider a situation in which a SELECT that retrieves information about a bank account has been REST-enabled (sensitive information, yes, but Db2's REST interface utilizes connection protection and authentication services provided by the RACF z/OS security manager, along with Db2's own user authorization functionality). If this SELECT-based service is named getAcctInfo, the associated REST request issued by a client program could look like this:

POST http://mybank.com:4711/services/BANKING/getAcctInfo/v1

And in the body of the request you might see this input:

{
 "AcctID": 123456789
}

That input is in the form of a JSON document - a very small JSON document, consisting of just a single name-value pair. Without being a rocket scientist, you can look at that JSON document and figure out that 123456789 is someone's account ID. That ease of interpretation is one of the appealing characteristics of JSON versus, say, XML. XML tags are sometimes kind of obtuse, owing in part to the fact that there often has to be broad agreement as to the meaning of a given tag (something that makes XML a very robust form of representation for data that might be exchanged between different companies, such as an automobile manufacturer and a parts supplier). With regard to the data payload of a REST request in JSON form, there just has to be agreement on meaning between client and server, and that flexibility enables use of meaningful names in name-value pairs.

OK, so that REST request, with its input in JSON form, comes into a Db2 for z/OS system (the system associated with mybank.com:4711 in the HTTP request). Assuming that the request's credentials check out (that would be ID and password, or ID and certificate), and the ID has been granted EXECUTE authority for the Db2 package that is the executable form of the REST-enabled SQL statement, the provided input will get plugged into the SELECT and the statement will be executed. Let's say the SELECT - with the provided input added - looks like this:

SELECT C.FIRSTNAME, C.LASTNAME, A.BALANCE, A.LIMIT
FROM ACCOUNTS A, CUSTOMERS C
WHERE A.ID=123456789
AND A.CUSTNO = C.CUSTNO

Suppose that the query's output looks like this, in traditional SQL result set form:

FIRSTNAME    LASTNAME    BALANCE    LIMIT
---------    --------    -------    -------
John         Smith       1982.42    3000.00

Db2 will take that result set and send it back to the REST client in the form of a JSON document, as seen below (if the REST-enabled query had a multi-row result set, you'd see multiple sets of associated name-value pairs, one after another in the JSON output document):

{
  "FIRSTNAME" : "John",
  "LASTNAME"  : "Smith",
  "BALANCE"   : 1982.42,
  "LIMIT"     : 3000.00
}

That, then, is the one aspect of Db2 for z/OS support for data in JSON form: as a provider of data services invoked via REST requests, Db2 accepts input in JSON form and sends output to a requester in JSON form. Now for a look at the other aspect of Db2 for z/OS support for JSON data.


Storing JSON documents in Db2 tables

Suppose that a client application sends data in JSON form to a z/OS server, and there is a desire to store those JSON documents as values in a column of a Db2 table. Db2 provides support for that usage scenario by way of some user-defined functions (UDFs) supplied via a no-charge software offering called the IBM Db2 Accessories Suite for z/OS (Version 4.1 of which is the one that goes with Db2 12 for z/OS - the product number is 5697-Q05). The most important of these UDFs are JSON2BSON, BSON2JSON and JSON_VAL. Here is what they do:

  • JSON2BSON - Converts a JSON document to BSON (a standard binary form for JSON data), so that it can be stored in a Db2 table column defined with the BLOB data type.
  • BSON2JSON - This UDF can be used to retrieve a JSON document in its entirety from a Db2 BLOB column in which the document is stored in BSON form. Output of the function is the text form of the retrieved JSON document.
  • JSON_VAL - This function can be used to retrieve - or to refer to - an individual data item (e.g., LASTNAME) in a JSON document. In addition to being usable in a query's select-list, JSON_VAL can be referenced in a predicate of a SELECT, UPDATE or DELETE (you could, for example, retrieve data from a table based on the value of a particular item in a JSON document). JSON_VAL can also be used in a case expression or an ORDER BY clause; and, the function can be utilized to create an index on a table column that holds JSON documents (similar in concept to an index on an XML column - you might want an index on customer number values in JSON documents in a column).

Other UDFs that pertain to JSON data stored in a Db2 table are JSON_LEN, JSON_TYPE and JSON_TABLE. You can get additional information about all of the JSON-related UDFs in the Db2 for z/OS Knowledge Center on the Web.
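
To make the UDF descriptions above a bit more concrete, here is a minimal sketch of how the pieces fit together. It assumes the UDFs were created in the SYSTOOLS schema (typical for the Accessories Suite set-up), and the table name and document content are hypothetical:

-- A table with a BLOB column to hold JSON documents in BSON form
-- (auxiliary LOB objects elided - Db2 can create them implicitly)
CREATE TABLE ACCT_DOCS
  (ACCT_ID INTEGER NOT NULL,
   DOC     BLOB(1M));

-- Store a JSON document (JSON2BSON converts it to BSON on the way in)
INSERT INTO ACCT_DOCS
  VALUES (123456789,
          SYSTOOLS.JSON2BSON('{"LASTNAME": "Smith", "BALANCE": 1982.42}'));

-- Retrieve the whole document, back in text form
SELECT SYSTOOLS.BSON2JSON(DOC)
  FROM ACCT_DOCS
  WHERE ACCT_ID = 123456789;

-- Retrieve a single item; 's:30' requests a string result of up to 30 bytes
SELECT SYSTOOLS.JSON_VAL(DOC, 'LASTNAME', 's:30')
  FROM ACCT_DOCS;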

And there you have it. As a data services provider for applications that use the REST architectural style, Db2 for z/OS can build a JSON document from data in a Db2 database, with said JSON document then used to return data to a REST client (and a REST client invoking a Db2-provided service can send associated input data in a JSON document). Db2 for z/OS also provides (in conjunction with the IBM Db2 Accessories Suite for z/OS) the ability to store JSON documents in Db2 tables, and to work with those documents (and the data values therein) using SQL. Two different areas of JSON-related functionality, each of which can be valuable for addressing different requirements that involve data in JSON form. I encourage you to explore and evaluate these capabilities.

Wednesday, July 31, 2019

Db2 for z/OS - Talking About APPLCOMPAT (Part 2)

Last month, I posted to this blog part 1 of a 2-part entry on the APPLCOMPAT option of the Db2 for z/OS BIND PACKAGE command. In that entry I covered several aspects of APPLCOMPAT. In this part 2 piece, I will focus on one particular matter pertaining to APPLCOMPAT - namely, the management of this option for packages associated with the IBM Data Server Driver (or its predecessor, Db2 Connect) in a Db2 12 for z/OS environment.

The IBM Data Server Driver is used by client application programs that issue SQL statements - typically through non-DBMS-specific interfaces such as JDBC, ODBC and ADO.NET - targeting a Db2 for z/OS system to which the application has a TCP/IP connection. The Data Server Driver packages belong, by default, to a collection called NULLID, and in that collection the packages have a certain APPLCOMPAT value. That APPLCOMPAT value determines the SQL functionality that is available to application programs that use the Data Server Driver packages in the NULLID collection. Let's say that function level V12R1M500 is active on a Db2 12 system. That M500 function level of Db2 12 is analogous to what was called new-function mode for Db2 versions that came before Db2 12, in that it makes available to programs a whole boat-load of new features and functions. But are those functions - such as piece-wise DELETE and query result set pagination - really available to DRDA requester applications (i.e., applications accessing Db2 via the IBM Data Server Driver and Db2's distributed data facility, aka DDF)? They are NOT available to those applications if the APPLCOMPAT value of the IBM Data Server Driver packages being used is below V12R1M500. [It's true that a dynamic SQL-issuing application could change the in-effect application compatibility level by changing the value of the CURRENT APPLICATION COMPATIBILITY special register, but starting with a Db2 12 system for which the activated function level is V12R1M500 or higher, a program can't set CURRENT APPLICATION COMPATIBILITY to a value greater than the APPLCOMPAT value of the package associated with the program.]

OK, so it would be a good idea, in a Db2 12 environment for which function level V12R1M500 had been activated, to have the IBM Data Server Driver packages in the NULLID collection bound with an APPLCOMPAT value of V12R1M500. That way, DRDA requester programs will be able to use the rich SQL functionality available at the M500 function level (I am thinking here of DRDA requester programs that do not have all of their "table-touching" SQL statements issued by Db2 stored procedures - the SQL functionality available to a stored procedure is determined by the APPLCOMPAT value of the stored procedure's package). But what about functionality beyond the V12R1M500 function level? What if, for example, a DRDA requester program needs to issue a query that uses the LISTAGG function, introduced with function level V12R1M501? In that case, you'd need the IBM Data Server Driver packages to be bound with APPLCOMPAT(V12R1M501) or higher. The question is: should you bind the packages in the NULLID collection with APPLCOMPAT(V12R1M501) or higher?

My answer to that question is, "No." Once I had my NULLID packages bound with APPLCOMPAT(V12R1M500), which I'd likely do not long after activating function level V12R1M500 for the Db2 system, I'd leave them at that application compatibility level. Here's why: if you keep rebinding the NULLID packages with ever-higher APPLCOMPAT values as you activate ever-higher function levels for a Db2 12 system, the result would be that DRDA requester programs would, by default, use higher and higher levels of Db2 application compatibility. What's wrong with that? Well, it could cause a DRDA requester application to "stumble into" a SQL incompatibility problem (as I mentioned in part 1 of this 2-part entry, a SQL incompatibility is a situation in which the same SQL statement operating on the same data yields a different result). Suppose, for example, that function level M508 introduces a SQL incompatibility (this is theoretical - as of the date of my posting this blog entry the latest available function level is M505). If you activate function level V12R1M508 for your Db2 12 system and thereafter rebind the NULLID packages with APPLCOMPAT(V12R1M508), it means that DRDA requester programs will by default be executing with the application compatibility level at V12R1M508. What if the (theoretical) SQL incompatibility ends up "biting" a DRDA requester application, causing it to fail? The big problem here is that the developers who worked on that application were blindsided by the SQL incompatibility. They didn't know that their program was going to execute with application compatibility set to V12R1M508 - that just happened because the NULLID packages (the IBM Data Server Driver packages that are used by default by DRDA requester applications) were bound with APPLCOMPAT(V12R1M508). No one likes unpleasant surprises.

OK, so you keep the NULLID packages with APPLCOMPAT(V12R1M500), as I'm recommending. How are we supposed to let developers of DRDA requester applications use SQL functionality introduced with function levels beyond M500? Here's how: with alternate IBM Data Server Driver package collections (meaning, collections with names other than NULLID) to which the NULLID packages have been BIND COPY-ed with a desired APPLCOMPAT value that is greater than V12R1M500. And how do you point a DRDA requester application to such an alternate IBM Data Server Driver collection? That can be done on the client side by way of a data source property, but the way that I see as being better (in that it doesn't introduce a hassle for client-side developers or application server administrators) is to use the Db2 profile tables to automatically point a particular DDF-using application to the desired alternate IBM Data Server Driver package collection (this is done via a profile attribute that sets the value of the CURRENT PACKAGE PATH special register to the name of the alternate IBM Data Server Driver package collection, as described in an entry I posted last year to this blog).
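
Here is a rough sketch of the two pieces involved - all of the names (alternate collection NULLID_M501, package SYSLH200, profile ID 102, auth ID APP2ID) are illustrative, not prescriptive. First, copy a NULLID package into the alternate collection with the higher APPLCOMPAT value (in practice you'd do this for each of the Data Server Driver packages):

BIND PACKAGE(NULLID_M501) COPY(NULLID.SYSLH200) APPLCOMPAT(V12R1M501) ACTION(REPLACE)

Then set up a profile that points the application, identified here by its auth ID, to that collection:

INSERT INTO SYSIBM.DSN_PROFILE_TABLE
  (PROFILEID, AUTHID, PROFILE_ENABLED)
VALUES (102, 'APP2ID', 'Y');

INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
  (PROFILEID, KEYWORDS, ATTRIBUTE1)
VALUES (102, 'SPECIAL_REGISTER',
        'SET CURRENT PACKAGE PATH = NULLID_M501');

As with other profile changes, a -START PROFILE command puts the new profile into effect.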

Here's how this scenario might play out in your environment, with a developer named Steve communicating a need to a Db2 for z/OS DBA named Sarah (again, I'm using a theoretical M508 future function level):

Steve: Hey, Sarah. Your team told us to let you know if we need Db2 SQL functionality beyond what's available at the M500 function level. I think I have that need. I want to use the XYZ built-in Db2 function for a new client-server application that will access Db2 for z/OS-based data, and it seems that the function isn't available with M500 because I get an error in our development environment when I try to use it.

Sarah: Yeah, function XYZ was delivered with function level M508, and our default function level for Db2 client-server applications is M500. We activated function level M508 for our Db2 for z/OS systems a few months ago, and we can get your application set up to work at that application compatibility level, no problem. All I need from you is a way to identify your client-server application, because we'll make a Db2-side change that will key off of that identifier. It could be the auth ID that the application uses to connect to the Db2 system, or the IP addresses of the servers on which the application will run, or a workstation name provided by the application, or...

Steve: Auth ID would make the most sense. This particular application on which we're working will connect to the Db2 system using auth ID 1234.

Sarah: OK, great. We'll get a Db2 profile set up for that auth ID, and that profile will enable your application to use M508 functionality. By the way, the M508 function level introduced a SQL incompatibility: if you provide a timestamp value as the argument for the ABC built-in Db2 function, the result at the M508 application compatibility level is thus-and-such, whereas before M508 the result would be so-and-so. Would that SQL incompatibility impact the application on which you're working?

Steve: No, we aren't using the ABC built-in function - but thanks for that heads-up.

Sarah: Sure thing. We should have the new Db2-side profile ready for you in the development system by end-of-day tomorrow - we'll notify you when that work is done. You should then be able to use the Db2 XYZ function with no problem.

Steve: Thanks!

OK, get it? Several very good things happened here. First, Steve let Sarah know about a new Db2 SQL function that his team wants to use - it's always nice when the Db2 team gets word about new SQL functionality that developers want to use. Second, Sarah, by way of a Db2 profile, will quickly enable Steve's team to use the desired new SQL functionality without having to do anything special (such as change a data source property) on the client side. Finally, this exchange gave Sarah the opportunity to inform Steve of a SQL incompatibility introduced with (the theoretical) M508 function level of Db2 12. It turned out that this SQL incompatibility was not going to impact the application on which Steve is working, but if that had not been the case - if the SQL incompatibility were going to be problematic for the new application - then knowledge of the incompatibility would give Steve and his teammates a chance to work around it by way of a code change.

I hope that this blog entry has provided you with what you'll see as being a reasonable way of managing APPLCOMPAT values beyond V12R1M500 for the IBM Data Server Driver packages in a Db2 12 for z/OS environment.

One more thing: for a time, starting last year, we (IBM) required DRDA client-side programmers to appropriately set the value of a client-side parameter called clientApplCompat if use of SQL functionality beyond the V12R1M500 function level were desired. Db2 teams at a number of sites were not keen on that requirement, as they want to make working with Db2 for z/OS-based data as hassle-free as possible for their application development colleagues. IBM Db2 for z/OS development heard the voice of the customer, and responded with APAR PH08482, the fix for which removed the need to set clientApplCompat in order to use Db2 12 SQL functionality beyond the M500 function level.

Sunday, June 30, 2019

Db2 for z/OS - Talking About APPLCOMPAT (Part 1)

The Db2 for z/OS package bind option APPLCOMPAT is not new - it was introduced with Db2 11. Even so, I think that now is a good time to spend some time thinking about APPLCOMPAT, for two reasons: 1) I've encountered some misunderstandings pertaining to APPLCOMPAT, and I'd like to clear those up; and 2) Db2 12 has introduced some changes that relate to APPLCOMPAT, and folks should be aware of those changes.

In this first of a two-part blog entry, I'll cover several aspects of APPLCOMPAT. In the part 2 entry, which I'll post within the next 30 days or so, I'll focus on one particular issue, that being the APPLCOMPAT value used for IBM Data Server Driver / Db2 Connect packages in a Db2 12 for z/OS environment.


What does APPLCOMPAT do?

APPLCOMPAT has a twofold purpose. First, it enables the use of SQL functionality introduced with a Db2 version and/or (in a Db2 12 environment) function level. Consider, for example, the LISTAGG built-in function delivered with Db2 12 function level V12R1M501 (LISTAGG is pretty slick, making it easy to have, as a column in a query result set, a comma-separated list of values). Suppose that an application developer wants to use LISTAGG in a query issued by his program. If the program's package is bound with APPLCOMPAT(V12R1M500) - or any other value below V12R1M501 - then the query that includes LISTAGG will get a SQL error code at execution time (most likely a -4743, ATTEMPT TO USE NEW FUNCTION WHEN THE APPLICATION COMPATIBILITY SETTING IS SET FOR A PREVIOUS LEVEL). The developer will need to change the query so that LISTAGG is not invoked, or the program's package will have to be rebound with an APPLCOMPAT value that is equal to or greater than V12R1M501 (the APPLCOMPAT value in a Db2 12 system can be as high as the system's currently-activated function level).
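
For a quick taste of what that developer is after, here is what a LISTAGG query might look like, using the Db2 sample employee table (the table name is shortened to EMP here for illustration) - one result row per department, with a comma-separated list of employee last names:

SELECT WORKDEPT,
       LISTAGG(LASTNAME, ', ') WITHIN GROUP (ORDER BY LASTNAME) AS EMPLOYEES
  FROM EMP
  GROUP BY WORKDEPT;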

APPLCOMPAT's other purpose is to provide a shield that protects a program from what the Db2 documentation calls a "SQL incompatibility." I prefer the phrase, "SQL behavioral change," but whether you use that wording or "SQL incompatibility," what you're talking about is a situation in which the same SQL statement, executed with the same data, yields a different result after activation of the functionality provided by a new Db2 version (or a new Db2 12 function level). Here's an example of a SQL incompatibility: while in a Db2 10 system a SQL statement can successfully cast an 8-byte store clock value AS TIMESTAMP, in a Db2 11 new-function mode system that same SQL statement, operating on the same 8-byte store clock value, will get a -180 SQL error code - this because in a Db2 11 NFM environment a store clock value is no longer a valid input for a CAST(... AS TIMESTAMP) operation. Suppose you have a program with a SQL statement that casts a store clock value as a timestamp, and suppose changing that SQL statement (to avoid the -180 error code in a Db2 11 NFM system) is not feasible for some reason. In that case, the program's package could be bound with APPLCOMPAT(V10R1), and as a result the SQL statement will get, in a Db2 11 NFM system (or a Db2 12 system), the same result that it would get in a Db2 10 system, because APPLCOMPAT(V10R1) indicates that behavior for the package's SQL statements is to be what it would be in a Db2 10 system, regardless of the fact that the package is executing in a Db2 11 NFM (or Db2 12) environment (we always document SQL incompatibilities - the change related to casting a store clock value as a timestamp is one of the Db2 11-introduced incompatibilities).
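
In the scenario just described, the shield would be put in place with a rebind along these lines (the collection and package names are hypothetical):

REBIND PACKAGE(COLL1.PGM1) APPLCOMPAT(V10R1)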


Where does a package's APPLCOMPAT value come from?

As already mentioned, APPLCOMPAT is a package bind option. It is also a ZPARM parameter. That ZPARM has one and only one purpose: it provides the default value of APPLCOMPAT, when needed, for a BIND PACKAGE or REBIND PACKAGE operation. When is the ZPARM-provided default needed? For BIND PACKAGE, it is needed when an APPLCOMPAT value is not specified in the command. For REBIND PACKAGE, it is needed when an APPLCOMPAT value is not specified in the command and "not already there" for the package being rebound. I'll explain what I mean by "not already there." When a package is rebound and an APPLCOMPAT value is not specified in the REBIND PACKAGE command, the package's existing APPLCOMPAT value, if anything, will carry forward through the REBIND operation (so, if a package with an APPLCOMPAT value of V10R1 is rebound and an APPLCOMPAT value is not specified in the REBIND PACKAGE command, the rebound package will retain the V10R1 APPLCOMPAT value). "If anything" implies that the package may not have an explicit APPLCOMPAT value prior to the rebind, and that could indeed be the case (this situation would be indicated by a blank value in the APPLCOMPAT column in the package's row in the SYSIBM.SYSPACKAGE catalog table). When that is so the value used for the REBIND PACKAGE operation will be the value of the APPLCOMPAT parameter in ZPARM (as is the case when BIND PACKAGE is issued sans an APPLCOMPAT specification).

"How about dynamic SQL?" you may ask. "Can't a dynamic SQL-issuing program change the in-effect application compatibility level via the SQL statement SET CURRENT APPLICATION COMPATIBILITY?" Yes, that can be done, but starting with a Db2 12 system for which function level V12R1M500 or higher has been activated, the value specified for the SET CURRENT APPLICATION COMPATIBILITY special register cannot be greater than the APPLCOMPAT value of the program's Db2 package. Any dynamic SQL-issuing program has an associated package, which could be an IBM Data Server Driver / Db2 Connect package. If the package used for a dynamic SQL-issuing program is bound with APPLCOMPAT(V12R1M500) then V12R1M500 will be the initial value of the CURRENT APPLICATION COMPATIBILITY special register when the program executes, and the program can set the value of that special register to something less than V12R1M500 but not to a value greater than V12R1M500.


An optimizer-related misconception

Sometimes people think that APPLCOMPAT does more than it actually does. Case in point: SQL statement optimization. Lots of folks know that it's a good idea to rebind all plans and packages in a Db2 for z/OS system soon after going to a new version of Db2 (one reason: the package code generated by Db2 version n - packages are executable code, you know - is likely to be more CPU-efficient than the code generated for the same package by version n-1). Some of those same people think, however, that to take advantage of new access path choices available with the version n optimizer, APPLCOMPAT(n) - where n would be a value associated with the new Db2 version - has to be specified for the rebind operations. Nope. If you rebind a package with APPLCOMPAT(V11R1) in a Db2 12 system, you will be able to get for that package the performance benefit of access path choices introduced with Db2 12 (assuming that you did not go with the APREUSE option, which tells Db2 to reuse the existing access paths for the package's SQL statements).

Remember: APPLCOMPAT can enable use of new Db2 functionality, and APPLCOMPAT can serve as a shield to insulate a program from a SQL incompatibility introduced by a new version (or new function level) of Db2. APPLCOMPAT does not tell the Db2 optimizer what to do.


Db2 12 and the DDL dimension

In a Db2 11 system, APPLCOMPAT affects SQL DML statements (e.g., SELECT, INSERT, UPDATE and DELETE), but not DDL statements (e.g., CREATE, ALTER). That being the case, in a Db2 11 new-function mode environment a program whose package is bound with V10R1 cannot reference a global variable in a SELECT statement (global variables were introduced with Db2 11), but that same program can create a global variable (because DDL is not affected by APPLCOMPAT in a Db2 11 system).

Starting with Db2 12, APPLCOMPAT affects both DML and DDL statements; so, in a Db2 12 system, a program whose package has an APPLCOMPAT value of V12R1M500 cannot issue an ALTER TABLE statement that includes a KEY LABEL specification (indicating that data in the table is to be encrypted using the key associated with the specified label), because KEY LABEL became an ALTER TABLE option starting with function level V12R1M502, and using that new DDL option would require that the package of the program issuing the ALTER TABLE statement be bound with an APPLCOMPAT value of V12R1M502 or greater.

This DDL dimension of APPLCOMPAT in a Db2 12 system gets particularly interesting starting with function level V12R1M504. With that function level (or higher) activated, a program whose package is bound with APPLCOMPAT(V12R1M504) or higher cannot use static SQL to create a traditional segmented table space or a range-partitioned table space that is not universal partition-by-range (in other words, that program cannot use static SQL to create a table space that is not either a partition-by-growth or a partition-by-range universal table space). Why is this so? Because plenty of Db2-using organizations, cognizant of the advantages of universal table spaces, do not want non-universal table spaces to be even accidentally created. Note also that a program whose package has an APPLCOMPAT value of V12R1M504 or higher cannot use static SQL to create a hash-organized table or a synonym (in the latter case one would create an alias instead).

Why did I keep using the phrase "cannot use static SQL" in the paragraph above? See the paragraph below.

What if you have a Db2 12 system with function level V12R1M504 or higher activated, and you need to create a traditional segmented table space? You have a couple of options for getting that done. One option would be to use a dynamic SQL-issuing program (an example being the Db2-supplied program called DSNTEP2) whose package has an APPLCOMPAT value of V12R1M504 or higher, and have that program first issue the SQL statement SET CURRENT APPLICATION COMPATIBILITY = 'V12R1M503' and then issue the CREATE for the traditional segmented table space (recall that in a Db2 12 environment in which function level V12R1M500 or higher has been activated, a dynamic SQL-issuing program can set the in-effect application compatibility level to something below the APPLCOMPAT value of its package, but not to something above that APPLCOMPAT value). The other option: create the traditional segmented table space via a program whose package has an APPLCOMPAT value of V12R1M503 or lower.
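
A sketch of that first option, as it might be executed through DSNTEP2 (the database and table space names are hypothetical):

SET CURRENT APPLICATION COMPATIBILITY = 'V12R1M503';

-- With the in-effect level now below V12R1M504, a SEGSIZE specification
-- without MAXPARTITIONS or NUMPARTS yields a traditional segmented table space
CREATE TABLESPACE TS1 IN DB1 SEGSIZE 32;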

That's all for now. Tune in in about a month to see part 2 of this blog entry and get some guidance on specifying APPLCOMPAT for the IBM Data Server Driver / Db2 Connect packages in a Db2 12 system.

Friday, May 31, 2019

Db2 for z/OS: IIPHONORPRIORITY and zIIP offload

There is something I have encountered a couple of times (most recently, just a month ago) that can have a negative performance impact on a Db2 for z/OS system, and it's something of which quite a few Db2 people are, I think, unaware. I'm referring here to the setting of a z/OS parameter called IIPHONORPRIORITY, and I want to use this blog entry to provide information on how the value of that parameter can affect the processing of a Db2 workload.

IIPHONORPRIORITY is a parameter in the IEAOPTxx member of the z/OS data set called SYS1.PARMLIB. The value of the parameter can be set to NO or YES. Essentially, those two settings have the following meanings:

  • YES - Honor the priority of zIIP-eligible tasks (per the z/OS LPAR's workload manager policy), so that in the event that such a task is ready for dispatch and the LPAR's zIIP engines are busy, allow the task to be dispatched to a general-purpose engine so that wait-for-dispatch time for the task will not be too high.
  • NO - Regardless of the priority of zIIP-eligible tasks, limit the processing of such tasks to zIIP engines. If a zIIP-eligible task is ready for dispatch and the LPAR's zIIP engines are busy, require that the task wait until a zIIP engine becomes available.
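
The setting itself is a single line in the IEAOPTxx member, of the form shown below (surrounding member content omitted); a changed member can be activated with the z/OS command SET OPT=xx.

IIPHONORPRIORITY=YES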

Now, at first consideration a setting of IIPHONORPRIORITY=NO might appear to be an attractive alternative. After all, the raison d'être for zIIP engines is reduced cost of computing on a z/OS platform. Why not maximize that cost benefit by forcing all zIIP-eligible work to be processed by zIIP engines? In fact, however, if Db2 for z/OS is in the picture then specifying IIPHONORPRIORITY=NO can lead to significant performance degradation for an application workload (especially a DDF workload) and can even reduce offloading of work to zIIP engines. I have seen these effects with my own eyes, and I will describe them in more detail below.

Negative impact on application performance. Just about a month ago, as part of a review of an organization's production Db2 for z/OS environment, I looked through a Db2 monitor-generated accounting long report for the subsystem of interest. Data in the report, showing activity for a busy hour of the day, was ordered by connection type (meaning that the data was aggregated by connection type used by Db2-accessing programs: DDF, CICS, call attachment facility, etc.). In the DDF section of the report (connection type: DRDA), I saw that not-accounted-for time was 65% of in-Db2 elapsed time for DDF-using applications. In-Db2 not-accounted-for time is time associated with SQL statement execution that is not CPU time and not "known" wait time (the latter term referring to Db2 accounting trace class 3 wait times such as wait for synchronous read, wait for lock, wait for latch, etc.). In my experience, in-Db2 not-accounted-for time is primarily reflective of wait-for-dispatch time, and for a transactional workload (like the typical DDF workload) it should be a relatively small percentage of average in-Db2 elapsed time - under 10% is good, between 10 and 20% is marginally acceptable. 65% is an extremely elevated level of not-accounted-for time as a percentage of in-Db2 elapsed time.

I was initially perplexed by the very high level of in-Db2 not-accounted-for time seen for the DDF workload. The level of utilization of the system's general-purpose and zIIP engines, as seen in a z/OS monitor-generated CPU activity report, was not high enough to make that a prime culprit. Later, seeing unusual numbers in a Db2 monitor-generated statistics long report (more on that to come) prompted me to ask the question, "Do you folks by chance have IIPHONORPRIORITY set to NO?" It turned out that that was indeed the case, and was very likely the root cause of the very high percentage of in-Db2 not-accounted-for time observed for DDF-using applications: zIIP-eligible tasks servicing requests from Db2 client programs were spending a considerable amount of time queued up waiting for a zIIP engine to become available, with redirection to a general-purpose engine having been removed as an option via the IIPHONORPRIORITY=NO specification. A much better approach would be to have IIPHONORPRIORITY set to YES, and to have zIIP engine capacity sufficient to keep "spill-over" of zIIP-eligible work to general-purpose engines at an acceptably low level (information on monitoring spill-over of zIIP-eligible work to general-purpose engines can be found in a blog entry I posted on that topic); and, keep in mind that running zIIPs in SMT2 mode can help to minimize the zIIP spill-over rate.

Allowing some zIIP-eligible work to be directed to general-purpose engines when an LPAR's zIIP engines are busy (which should not often be the case when the LPAR is configured with an adequate amount of zIIP capacity), made possible via IIPHONORPRIORITY=YES, is a safety valve that enables consistently good performance for a zIIP-heavy workload such as that associated with DDF-using applications.

Potential non-maximization of zIIP offload. You might wonder, "How could setting IIPHONORPRIORITY to NO possibly have a reductive effect on offloading of work to zIIP engines?" I'll tell you how: when IIPHONORPRIORITY=NO is in effect, "Db2 does not allow system tasks to become eligible for zIIP processing." I have that in quotes because the words are taken directly from a page in the Db2 for z/OS Knowledge Center on IBM's Web site (on that page, scroll down to the part under the heading, "IIPHONORPRIORITY parameter"). To understand what the quoted phrase means, consider that zIIP-eligible Db2 tasks can be broadly divided into two categories: user tasks and system tasks. zIIP-eligible user tasks (which can also be thought of as application tasks) include those under which SQL statements issued by DRDA requesters - and SQL statements issued by native SQL procedures called through DDF - execute. Those zIIP-eligible tasks must be processed by zIIP engines when IIPHONORPRIORITY is set to NO. zIIP-eligible system tasks include those under which operations such as prefetch reads and database writes execute, and those tasks cannot run on zIIP engines when IIPHONORPRIORITY is set to NO (because in that case Db2 makes those tasks non-zIIP-eligible).

So, think about it: if you have sufficient zIIP capacity to keep spill-over of zIIP-eligible work to general-purpose engines at a low level, which should be the case in a production Db2 for z/OS environment (and I consider "low level" to be less than 1%), and you have IIPHONORPRIORITY set to NO, you might actually be causing total zIIP offload of Db2-related work to be lower than it otherwise would be, because prefetch reads and database writes, which are 100% eligible when IIPHONORPRIORITY is set to YES, are 0% zIIP-eligible when IIPHONORPRIORITY=NO is in effect. That's no small thing - in some Db2 systems, there is a great deal of prefetch read activity. I mentioned previously that a telltale sign that IIPHONORPRIORITY is set to NO in a z/OS system in which Db2 runs can be found in a Db2 monitor-generated statistics long report. In such a report, scroll down to the section in which CPU consumption attributed to the various Db2 address spaces is shown. In that report section, check the CPU times for the Db2 database services address space. If IIPHONORPRIORITY is set to YES, it is highly likely that a majority - often, a very large majority - of that address space's CPU time is in the zIIP column (the column in the report labeled PREEMPT IIP SRB, or something similar to that) versus the general-purpose engine column (labeled CP CPU TIME, or something similar). This is a reflection of the fact that prefetch reads and database writes very often constitute most of the work done by the Db2 database services address space, in terms of CPU resources consumed. If, on the other hand, IIPHONORPRIORITY is set to NO, it is quite likely that a majority - and maybe a large majority - of the Db2 database services address space's CPU time will be in the general-purpose engine column versus the zIIP column.

Bottom line: if you want to actually maximize offloading of Db2-related work to zIIP engines, do two things: 1) ensure that IIPHONORPRIORITY is set to YES, and 2) have enough zIIP capacity to keep the zIIP spill-over rate (the percentage of zIIP-eligible work that ends up being executed on general-purpose engines) below 1%. Refer to the blog entry to which I provided a link, above, to see how to calculate the zIIP spill-over rate using numbers from a Db2 monitor-generated accounting long report.
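
The arithmetic behind that rate is simple. Working from per-transaction averages in an accounting long report - and assuming OMEGAMON-style field labels here, which vary by monitor - the calculation looks like this:

zIIP spill-over rate = SECP CPU / (SECP CPU + SE CPU TIME)

Here, SECP CPU is zIIP-eligible work that ended up running on general-purpose engines, and SE CPU TIME is work that ran on zIIP engines. If, for example, SECP CPU were 0.002 seconds and SE CPU TIME were 0.398 seconds, the spill-over rate would be 0.002 / 0.400 = 0.5% - comfortably under the 1% guideline.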

Is there any situation in which going with IIPHONORPRIORITY=NO might be reasonable? I'd say, "Maybe," if you're talking about a test or development environment in which optimal DDF application performance is not very important. If the z/OS system in question is one in which a production Db2 subsystem runs, I'd be hard pressed to come up with a justification for IIPHONORPRIORITY=NO. For a production Db2 system, you want IIPHONORPRIORITY=YES. You might want to check on this at your site.

Friday, April 26, 2019

Db2 12 for z/OS Statistics Profiles: Just What the Optimizer Ordered

Here is an interesting story for you: not long ago, an analytics-type query was executed on a Db2 for z/OS system that had recently been migrated to Db2 12. The query was cancelled after it had run for 23 hours. A DBA noticed that there was a row for a table targeted by the query in SYSIBM.SYSTABLES_PROFILES in the Db2 catalog. He ran RUNSTATS for the table, specifying USE PROFILE so that the utility would pick up the options specified in the aforementioned statistics profile. The DBA observed that the RUNSTATS job "ran a little longer than usual," an indication that the utility was doing something it hadn't previously done in generating statistics for the target table. The formerly very-long-running query was subsequently re-submitted, and it successfully ran to completion in 5 minutes. Do I have your attention? Good - read on.

First, the dramatic improvement in query response time described above is not about magic. It's about statistics. Db2 for z/OS has a very sophisticated query optimizer (IBM invented cost-based SQL statement optimization more than 35 years ago), but to do its job well that optimizer needs the right statistics related to tables accessed by queries. How important are those statistics? This important: the team at the IBM Support Center that works on problems involving query performance has said for years that over 75% of these problems are resolved not by rewriting the query, not by creating or modifying an index, but by providing the Db2 optimizer with the statistics it needs to generate a well-performing access plan.

Knowing that, you might think, "OK, so how do I know what statistics the optimizer needs in order to generate the access plan that will deliver the best-possible performance for a query?" And there, traditionally, has been the rub: it can be tricky to get this right, especially for a complex query. If you have the right skills and experience, you can pore over a query access plan as rendered by a visual EXPLAIN tool, and make a good determination of the statistics, currently absent from the catalog, that would appear to be useful to the optimizer for the case at hand. Alternatively, you could "go big" on statistics-gathering for your tables, and instruct RUNSTATS (or LOAD or REORG when inline statistics-gathering is requested) to generate every possible item of statistical data for your tables, but that would substantially increase the CPU cost of utility execution. There's also the option of using a tool that has a statistics recommendation feature (an example of a product in that space is IBM's Db2 Query Workload Tuner for z/OS). Or - and this is new with Db2 12 - you can let Db2 take care of this.

Let me expand on what Db2 12 has introduced with regard to getting catalog statistics right. Db2 11 for z/OS introduced a new catalog table, called SYSIBM.SYSSTATFEEDBACK, into which Db2 places information about statistics seen by the optimizer, in the course of generating query access plans, as being absent or inconsistent (regarding the latter, consider that one can insert statistical information into catalog tables on one's own, perhaps causing an inconsistency - a value in one column that should reflect the value in another column, but doesn't). That's fine, but sometimes that statistics feedback data would just sit there - you could look at it and use it (if you correctly interpreted what you saw), or a tool could read and act on it, but Db2 itself wouldn't do anything with SYSSTATFEEDBACK data. That changed with Db2 12: in a Db2 12 system (assuming that the new STATFDBK_PROFILE parameter in ZPARM is set to the default value of YES), when data is written to SYSIBM.SYSSTATFEEDBACK, data is also written to SYSIBM.SYSTABLES_PROFILES - either a statistics profile row will be added for a table, or an existing statistics profile row will be updated. That new or updated statistics profile row provides the RUNSTATS specifications that will gather the statistics the Db2 query optimizer itself has indicated it needs to do its job to best possible effect.

AND, when I say, "RUNSTATS specifications," don't think that a Db2 12-generated statistics profile is usable only with RUNSTATS. In a Db2 12 environment you can also specify USE PROFILE when executing LOAD or REORG - something you couldn't do in a Db2 11 system.

What if you want to see what kind of statistics profiles Db2 12 is generating? That information is in the PROFILE_TEXT column of SYSIBM.SYSTABLES_PROFILES. The data type for the PROFILE_TEXT column is CLOB. You can view data in a CLOB column via a query issued through IBM Data Studio or IBM Data Server Manager (two no-charge, downloadable tools); alternatively, you could view PROFILE_TEXT data by issuing a query through SPUFI, if you cast the CLOB column as VARCHAR. In the example below (a query I've executed using SPUFI), I've gone with VARCHAR(500) - you can specify whatever length you need in your environment.

SELECT CAST(SCHEMA AS CHAR(30)) AS SCHEMA,    
  CAST(TBNAME AS CHAR(30)) AS TBNAME,         
  CAST(PROFILE_TEXT AS VARCHAR(500)) AS PROFILE_OPTIONS
  FROM SYSIBM.SYSTABLES_PROFILES
  WHERE...

Given the enhancement described in this blog entry, I think it would be a very good idea to utilize the USE PROFILE option of RUNSTATS (and of LOAD and REORG, as appropriate) in a Db2 12 system. What if you do that when executing RUNSTATS (or LOAD or REORG) for a table, and there is no row for the table in SYSIBM.SYSTABLES_PROFILES? Not to worry: in that case, COLUMN(ALL) INDEX(ALL) will be used for the RUNSTATS (or LOAD or REORG) job (if a RUNSTATS or LOAD or REORG job is executed without a table being named, and if no row(s) for the table(s) in the target table space exist in SYSIBM.SYSTABLES_PROFILES, the job will be executed using options TABLE(ALL) and INDEX(ALL)). What if you have already populated SYSIBM.SYSTABLES_PROFILES via the use of SET PROFILE (or UPDATE PROFILE) in RUNSTATS jobs? Again, not a problem. Those user-developed statistics profiles can still be used in a Db2 12 environment. Keep in mind that Db2 12 will create or update a statistics profile row in SYSTABLES_PROFILES only when the Db2 optimizer determines that statistics needed to properly optimize a query are missing and/or inconsistent. If a statistics profile you generated yourself gathers the statistics that the optimizer needs, Db2 will not change that profile in the catalog.

For a long time, when confronted with a poorly-performing query, Db2 DBAs have slogged through query analysis and tuning, often a time-consuming task. If your Db2 for z/OS system is at Version 12 (and STATFDBK_PROFILE is set to YES in ZPARM), I'd recommend a different approach in the face of a query performance problem: first, see if a row (or rows) for the table (or tables) accessed by the query exist in SYSIBM.SYSTABLES_PROFILES. If there is such a row (or rows) in that catalog table, execute RUNSTATS for the table(s) with a USE PROFILE specification. Then re-submit the query. If performance is still not what it needs to be, engage in the usual query analysis process. If RUNSTATS with USE PROFILE does the trick for you, you've saved yourself a lot of time and effort.

Oh, and one more thing: when the query I referenced at the beginning of this blog entry was initially resubmitted following execution of RUNSTATS with USE PROFILE for the target table, its performance did not improve at all. Why? Because the previously-prepared form of the query was still in the Db2 subsystem's dynamic statement cache. The DBA I mentioned ran RUNSTATS with INVALIDATECACHE YES to invalidate that prepared form of the statement, and when the query was subsequently submitted it was reoptimized using the statistics generated through the Db2-generated profile, and that's when runtime went from 23-hours-then-cancel to 5 minutes.
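
Putting that story's sequence together, the decisive RUNSTATS job might have looked something like the sketch below (the database, table space and table names are hypothetical):

RUNSTATS TABLESPACE DB1.TS1
  TABLE(MYSCHEMA.BIGTAB)
  USE PROFILE
  INVALIDATECACHE YES

USE PROFILE picks up the Db2-generated statistics profile for the table, and INVALIDATECACHE YES removes the previously prepared form of the query from the dynamic statement cache, so that the next PREPARE is optimized with the new statistics.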

Gather catalog statistics using profiles that Db2 generates based on optimizer feedback, and you, too, could realize some nice performance results.

Sunday, March 31, 2019

A Case Study: Implementing a Db2 for z/OS Implicit Trusted Connection

I've been interested in the role and trusted context features of Db2 for z/OS for a long time (I posted a two-part blog entry on the topic back in 2011). Theory is nice, but what about practice? Recently, I had an opportunity to assist a DBA in implementing implicit trusted connection functionality in his Db2 for z/OS environment (more on the implicit part in a moment). Today I'll share some of what the DBA and I learned through that endeavor.

First, the backstory: the DBA's organization wanted to tighten Db2 for z/OS data security by preventing unauthorized utilization of the authentication credentials used by an application to connect to a Db2 system. The kind of application we're talking about here is one that accesses a Db2 for z/OS server through a TCP/IP link via the IBM Data Server Driver (or its predecessor, Db2 Connect). In requesting a connection to a Db2 system, the application provides an authorization ID and an associated password (assuming authentication using an ID and a password, versus an ID and a certificate). Often, the ID and password in question are known by several of the application's developers. What is to keep someone from using the application's authorization credentials to connect to the Db2 system from, say, a laptop PC, thereby gaining access to data that should be accessible only through the application?

Enter roles and trusted contexts. These Db2 features work together to shut down the potential security exposure just described. How does this work? First of all, any Db2 privileges needed by the application (e.g., the SELECT privilege on table T1) are NOT granted to the application's Db2 authorization ID. Those privileges are instead granted to a Db2 role, which I'll call APP1ROLE. Next, a Db2 trusted context is created to specify the circumstances under which the Db2 privileges granted to the role can be used by an application. If the trusted context is named TRCON1, and if the application authenticates to the Db2 system using authorization ID APP1ID and the application runs on the two application servers at IP addresses 1.2.3.4 and 1.2.3.5, the Db2 DDL statement defining the trusted context might look something like this:

CREATE TRUSTED CONTEXT TRCON1
BASED UPON CONNECTION USING SYSTEM AUTHID APP1ID
DEFAULT ROLE APP1ROLE
ATTRIBUTES (ADDRESS '1.2.3.4', ADDRESS '1.2.3.5')
ENABLE;

Here's what the above DDL statement is essentially saying: "The Db2 privileges granted to role APP1ROLE can be used ONLY by the application that connects to Db2 using authorization ID APP1ID, and ONLY when that application connects to this Db2 system from either IP address 1.2.3.4 or IP address 1.2.3.5." Following creation of the role and trusted context, suppose that someone who knows the application's authentication credentials uses them to connect to the Db2 system from a laptop PC. Assuming a successful connection from the laptop PC, is that an "Uh-oh" situation? No. Why not? Because no Db2 privileges have been granted to the application's authorization ID (connecting to a Db2 for z/OS system doesn't get you anywhere if you can't do anything after connecting). But what about the privileges granted to role APP1ROLE, which we've associated with the application's authorization ID? Those privileges are not available to the person who used the application's credentials to connect to the Db2 system from a laptop PC because the laptop's IP address is not one of the two specified in the definition of the trusted context (those being the addresses of the two application servers on which the application runs). What has been accomplished through the creation of the role and trusted context is a vast reduction in what security people sometimes call the "threat area": instead of worrying about the application's credentials being misused from any IP address, we just need to secure the two IP addresses referenced in the trusted context definition (in other words, we need to ensure that an individual cannot make an unauthorized connection to the Db2 system from the two application servers on which the application of interest runs).
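
For completeness, here is a sketch of the companion DDL that would precede the trusted context definition (the table name, like the other names in this entry, is illustrative):

CREATE ROLE APP1ROLE;

GRANT SELECT ON T1 TO ROLE APP1ROLE;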

Back to my DBA friend. His first attempt at using a Db2 role and trusted context didn't work: the SELECT privilege on table T1 was granted to a role but not to authorization ID APP1ID (using the example ID previously referenced), and an attempt to execute SELECT COL1 FROM T1 by way of a trusted context, using ID APP1ID, failed with a Db2 "not authorized to perform this action" error (SQLCODE -551). We decided to check to make sure that the IP address used in defining the trusted context was the right one. To help with that verification, we 1) turned auto-commit off on the tool being used by the developer who was helping us with the test, and 2) granted SELECT ON T1 to APP1ID. Why did we do these two things? So that we could see the IP address of the developer's Db2 thread in Db2 -DISPLAY THREAD command output: granting SELECT ON T1 to APP1ID ensured that the thread information wouldn't disappear because of the aforementioned authorization error, and making sure that a commit wasn't issued after the successful SELECT similarly preserved the thread's information, which otherwise would have disappeared when the thread went into a disconnected state following a commit.
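
For clarity, the temporary grant used for the verification looked like this (it was later revoked, as noted below; turning auto-commit off was a setting in the developer's client tool, not a SQL statement):

-- Temporary grant, so the test SELECT wouldn't fail with the authorization error:
GRANT SELECT ON TABLE T1 TO APP1ID;

-- The developer's test statement (with auto-commit off, the thread stayed connected):
SELECT COL1 FROM T1;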

After we'd taken those two actions to keep the developer's thread in a connected state following the SELECT FROM T1, we had the developer try again. Sure enough, we saw information about the developer's still-connected Db2 thread in -DISPLAY THREAD(*) DETAIL output, but the client IP address information was missing. That information was absent because -DISPLAY THREAD(*) DETAIL is an overly generic form of the -DISPLAY THREAD command. When we switched to the form -DISPLAY THREAD(*) LOCATION(*) DETAIL, which tells Db2 that you only want to see information about DDF threads, we got additional DDF-related information in the command's output, including the client IP address associated with the developer's transaction (that address is in the V445 section of the command's output). The client address we saw was the one used in the definition of the trusted context, so why didn't the trusted context work for us? Because in this case the developer's connection to the Db2 system was through a Db2 Connect "gateway" server (versus a direct connection from the client via the IBM Data Server Driver), and in that case the client IP address that Db2 "sees" is the address of the Db2 Connect gateway server, not the address of the client that is "upstream" from that gateway server.
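
For reference, here are the two forms of the command side by side (the annotations are mine):

-DISPLAY THREAD(*) DETAIL                 <-- generic form; no client IP address shown in our case
-DISPLAY THREAD(*) LOCATION(*) DETAIL     <-- DDF threads only; client IP address in the V445 section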

OK, so we addressed that problem by re-defining the trusted context, this time using the IP address of the Db2 Connect gateway server. Well, not exactly. We actually re-defined the trusted context by specifying the domain name of the Db2 Connect gateway server, instead of its IP address. That can certainly be done in a CREATE (or ALTER) TRUSTED CONTEXT statement, and such a specification can be useful when (as was the case for my DBA friend's organization) the IP address of the Db2 Connect gateway server used by an application can change from time to time (if the application server references the gateway server's domain name, a change in the gateway server's IP address does not require a client-side change - the domain name is simply mapped to the new IP address instead of the old one). We tried again with the redefined trusted context AND IT STILL DIDN'T WORK. ARRGGGHHH. What was going on? Further investigation revealed the problem: the domain name we were given by a network administration colleague of the DBA was actually associated with a pair of IP addresses (corresponding to a pair of Db2 Connect gateway servers), and for a domain name specification in the ATTRIBUTES part of a CREATE (or ALTER) TRUSTED CONTEXT statement to work properly, it has to be associated with a single IP address.
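
Re-defined with a domain name rather than an IP address, the trusted context would look something like this (the domain name here is a made-up placeholder, and the prior definition would first be dropped, or changed via ALTER TRUSTED CONTEXT):

CREATE TRUSTED CONTEXT TRCON1
BASED UPON CONNECTION USING SYSTEM AUTHID APP1ID
DEFAULT ROLE APP1ROLE
ATTRIBUTES (ADDRESS 'gw.example.com')
ENABLE;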

Once again, the DBA re-defined the trusted context, this time using two domain names as address attributes, with each domain name resolving to the single IP address of one of the two Db2 Connect gateway servers in the cluster. He revoked SELECT ON T1 from the APP1ID authorization ID, and had the developer try again with a SELECT from T1 using APP1ID, AND IT WORKED!! The successful execution of the SELECT from T1 showed that the privilege granted to role APP1ROLE was being used, and indeed we saw in the output of a -DISPLAY THREAD(*) LOCATION(*) DETAIL command that the developer's thread was using the established trusted context (that information is in the V485 section of the command's output).
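
The working setup, then, looked roughly like this (the domain names again being placeholders, each one resolving to the single IP address of one of the two gateway servers):

REVOKE SELECT ON TABLE T1 FROM APP1ID;

CREATE TRUSTED CONTEXT TRCON1
BASED UPON CONNECTION USING SYSTEM AUTHID APP1ID
DEFAULT ROLE APP1ROLE
ATTRIBUTES (ADDRESS 'gw1.example.com', ADDRESS 'gw2.example.com')
ENABLE;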

Now, about the "implicit" nature of the trusted connection, referenced way up at the top of this blog entry (and in the entry's title): I wanted to make sure to emphasize the implicit nature of the trusted connection used in this example, because so much of the documentation on Db2 for z/OS trusted contexts pertains to explicit trusted connections. What's the difference? With an explicit trusted connection, a client application explicitly requests a trusted connection to the Db2 server; with an implicit trusted connection, the connection is just a "regular" connection from the client application's perspective, and it becomes a trusted connection because a trusted context matching the application's authorization ID and IP address (or domain name, or whatever other attribute is specified) has been defined on the Db2 side. If you can get a trusted connection to a Db2 server (one that enables use of the Db2 privileges granted to a role associated with the trusted context) without having to explicitly request such a connection, why would you go the explicit route? You'd go that route if you needed functionality that is ONLY available for explicit trusted connections. An example of that functionality: an application, having requested and obtained an explicit trusted connection to a Db2 for z/OS server, can then tell Db2 that it is "switching" use of the connection from its own ID (the ID the application used to connect to the Db2 server) to the ID of an end user of the application. That switching from one ID to another (and the associated flowing of an actual end user's ID from the application to the Db2 server) is one of the things that has to happen if you want to make use of (for example) the enterprise identity mapping capability provided by RACF (whereby an individual's "network" ID can be mapped to a RACF ID). Note that a trusted context defined for the purpose of supporting explicit trusted connections would likely include the WITH USE FOR clause in the CREATE TRUSTED CONTEXT statement - that clause can be used to reference end-user IDs passed to Db2 by an application. In the situation I've described in this blog entry, the DBA did not require explicit trusted connection functionality, so setting up implicit trusted connection capability was appropriate.
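
To make the contrast concrete, a trusted context intended to support explicit trusted connections might be defined along these lines (a sketch only - TRCON2, USER1, and USER2 are hypothetical names, not part of the scenario described above):

CREATE TRUSTED CONTEXT TRCON2
BASED UPON CONNECTION USING SYSTEM AUTHID APP1ID
DEFAULT ROLE APP1ROLE
ATTRIBUTES (ADDRESS '1.2.3.4')
WITH USE FOR USER1 WITHOUT AUTHENTICATION, USER2 WITHOUT AUTHENTICATION
ENABLE;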

I hope that the information I've provided here will be useful to you if you want to utilize implicit trusted connection functionality in your Db2 for z/OS environment.