Thursday, April 28, 2022

Db2 for z/OS: Answering Some Questions About DBATs (i.e., DDF Threads)

Sometimes, a DBA will email me a question about Db2 for z/OS, and I'll respond in a pretty comprehensive way, and I'll look at that outbound message and think to myself, "Hmm. I may have written most of a blog entry there." This is one of those cases. I recently got a couple of questions about Db2 database access threads, or DBATs (threads associated with access to a Db2 subsystem through the distributed data facility, aka DDF), and I think the questions and answers might be of interest to a lot of people in the Db2 community. I am therefore packaging them in this blog entry. Off we go:

Question: When does a DBAT go from being active to idle, so that it is subject to the idle thread timeout setting for the Db2 subsystem?

Answer: OK, the first thing to understand here is that a DBAT is always active. Even when it's in the DBAT pool, a DBAT is active - it's just in a so-called disconnected state. A DBAT is idle when it is in-use (i.e., paired with a connection, which happens when a transaction comes along by way of that connection) and it’s not doing anything (or, more accurately, the transaction that was using the DBAT appears to Db2 to be doing nothing). It's normal for there to be some idle thread time for a DDF transaction - a client-side program issues a SQL statement, the result is sent back to that program, and the DBAT is briefly idle until the transaction's next SQL statement is issued. No big deal there. It's when idle time becomes really elongated that a DBAT might be affected by the Db2 subsystem's idle thread timeout value. That timeout value is specified via the IDTHTOIN parameter in the Db2 DSNZPARM module. The default value for IDTHTOIN is 120 seconds (check out the value on your Db2 system, and see if it's set to something other than 120).

Normally, at end-of-transaction there is a commit, and at that time the DBAT that had been used in processing the transaction goes back to the DBAT pool and the connection with which the DBAT had been paired goes back to an inactive state (inactive connections, a key contributor to Db2's connection scalability, are a server-side thing, invisible to a connected application - an inactive connection will go back to an active state when the next transaction associated with the connection begins). Can a DBAT in the pool be affected by the Db2 system's idle thread timeout value? No, but it is subject to a limit specified by another ZPARM parameter called POOLINAC (more on that to come).

Let's say that a DDF transaction starts but then never commits. That could happen because of a problem on the client application side, or it could be that the developer of the transaction program decided that a commit is not necessary because the transaction is read-only in nature (that in fact would be a not-good decision - every DDF transaction needs to commit, because even a read-only transaction will hold one or more table space or partition locks and one or more claims on database objects, and those locks and claims will not be released without a commit). Because the transaction has not committed, it is perceived by Db2 to be still in-flight, and for that reason the transaction's DBAT can't be separated from the associated connection and returned to the DBAT pool. The apparently (to Db2) in-flight transaction continues to do nothing, and the related DBAT remains idle for a longer and longer period of time. Eventually the IDTHTOIN limit will be reached for the idle thread (unless IDTHTOIN is set to 0, which means a DBAT can remain indefinitely idle), and Db2 terminates the DBAT and the associated connection.

So, to recap: first, a DBAT does not go from active to idle, because a DBAT is always considered to be active - it's an in-use DBAT, as opposed to an in-the-pool DBAT, that can be idle. Second, an in-use DBAT will typically have at least some idle time (time when it seems to Db2 that the transaction associated with the DBAT is not doing anything - or, at least, not doing any SQL-related thing); it's when that "nothing SQL-related is happening" time gets long that the Db2 idle thread timeout limit can be reached for a DDF transaction and its DBAT.

[By the way, I mentioned earlier that when Db2 terminates a DDF transaction and its DBAT due to the idle thread timeout limit being reached, Db2 also terminates the connection with which the DDF transaction had been associated. If you'd prefer for Db2 to preserve the connection while terminating the transaction and the DBAT, you can get that behavior thanks to an enhancement introduced with Db2 12 for z/OS. The enhancement is enabled via specification of EXCEPTION_ROLLBACK as an attribute of a MONITOR IDLE THREADS row in the Db2 table SYSIBM.DSN_PROFILE_ATTRIBUTES. You can find more information about this enhancement in the Db2 for z/OS online documentation.]
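To make that more concrete, here is a hedged sketch of such a profile row (the profile ID and the 120-second threshold are invented for illustration; a corresponding row in SYSIBM.DSN_PROFILE_TABLE, not shown, defines which connections the profile applies to - see the Db2 documentation for the full set-up):

```sql
-- Hypothetical example: assume profile 17 already exists in
-- SYSIBM.DSN_PROFILE_TABLE and identifies the monitored connections.
INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
  (PROFILEID, KEYWORDS, ATTRIBUTE1, ATTRIBUTE2)
VALUES
  (17, 'MONITOR IDLE THREADS', 'EXCEPTION_ROLLBACK', 120);

-- Then load (or reload) the profile tables so the change takes effect:
-- -START PROFILE
```

With a row like that in place, an idle thread exceeding the threshold is rolled back and its DBAT terminated, but the connection is preserved.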

Question: We had a DDF transaction surge, and as a result the number of DBATs went way up. The surge passed, and several minutes later I checked on the number of DBATs and it was still way high. What's with that?

Answer: There were (I'm pretty sure) two factors involved here. First, the POOLINAC value. That's a ZPARM parameter. If a DBAT in the pool has gone a POOLINAC number of seconds without being reused for a transaction, that DBAT will be subject to termination by Db2. For the Db2 subsystem looked after by the DBA who asked me this question, the POOLINAC value was 900 seconds, considerably higher than the default value of 120 seconds (I personally favor setting POOLINAC to the default value of 120 and leaving it there unless there's a good reason to make a change). A high POOLINAC value will definitely slow down the trimming of the number of pooled DBATs after the passing of a DDF transaction surge, but I think something else was going on, as well.

That "something else" was a change in Db2's termination of "too-long-in-the-pool-without-reuse" DBATs, effected by the application of the fix for Db2 APAR PH36114 (that fix came out in June of 2021, and I say that "I think" the fix was involved in this situation, rather than "I know," because I did not verify that the fix was applied to the Db2 subsystem in question - I'm inferring that based on the behavior reported by the DBA). Here's the deal: prior to the change associated with PH36114, Db2 would check the DBAT pool every two minutes to see if any DBATs in the pool had been there for a POOLINAC number of seconds without being reused. However many "too-long-in-the-pool-without-reuse" DBATs were found, were terminated by Db2. If a lot of DBATs went back to the pool at around the same time following the rapid subsidence of a DDF transaction surge, Db2 might find in a subsequent pooled DBAT purge cycle that a lot of DBATs needed to be terminated at one time due to the POOLINAC limit being exceeded. With the ZPARM parameter REALSTORAGE_MANAGEMENT set to AUTO or ON (and AUTO is the default), terminating a lot of DBATs at one time could put a good bit of pressure on the z/OS LPAR's ESQA resource, which in turn could cause spill-over into ECSA, which in turn could be bad news for an LPAR with only a small cushion of unused ECSA space.

With the fix for PH36114 applied, two things changed in this picture:
  1. Db2 went from checking every two minutes for DBATs with "too-long-in-the-pool-without-reuse" status to doing that every 15 seconds.
  2. In a given purge cycle (again, now once every 15 seconds), Db2 will terminate a maximum of 50 DBATs in the "too-long-in-the-pool-without-reuse" category.
What this means: a big pile of pooled DBATs left over from a since-passed DDF transaction surge will be worked down more frequently and more gradually. That could somewhat elongate the process of finalizing the right-sizing of the DBAT pool for a now-back-to-normal volume of DDF transactions, but it will avoid the pressure on ESQA that could result from the more-aggressive purging of "too-long-in-the-pool-without-reuse" DBATs that Db2 did prior to the PH36114 fix. It's a good trade-off, in my opinion.
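If you want to watch a leftover pile of pooled DBATs being worked down, the Db2 command -DISPLAY DDF DETAIL is your friend. Below is a heavily abridged and illustrative excerpt of the output (the numbers are invented, and the message text is from memory rather than copied verbatim - DSCDBAT is the count of pooled, i.e., disconnected, DBATs, while ADBAT is the count of active DBATs):

```
-DB2A DISPLAY DDF DETAIL
DSNL090I  DT=A  CONDBAT= 10000  MDBAT= 500
DSNL092I  ADBAT= 62  QUEDBAT= 0  INADBAT= 0  CONQUED= 0
DSNL093I  DSCDBAT= 58  INACONN= 1250
```

Issue the command a few times at 15-second-or-so intervals after a surge has passed, and (with the PH36114 fix applied) you should see DSCDBAT stepping down by up to 50 per purge cycle.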

Maybe you now know a few things about DBATs that you didn't know before. Thanks for visiting the blog, and I hope you'll return sometime.

Wednesday, March 9, 2022

Thoroughly Assessing Data Security in a Db2 for z/OS Environment - Part 2

In part 1 of this two-part blog entry on thoroughly assessing data security in a Db2 for z/OS environment, I covered four aspects of Db2 data protection: privilege management, client authentication, data encryption and column masks/row permissions. In this part 2 entry we'll take a look at auditing, application architecture, test data management and RACF (or equivalent) management of Db2-internal security.

Off we go:

  • Auditing - You can manage Db2 privileges in a careful and responsible way, but at the end of the day users have to have some privileges in order to perform assigned duties, and some users are going to require extensive privileges. Privileges that were properly granted can be abused. The best defense against that possibility is effective auditing of users' data access activities. What you want in this case is to identify improper data access activity so that steps can be taken to shut it down. An important assist in this area was delivered with Db2 10 for z/OS, which introduced audit policy functionality. Db2's audit policy capability enables you to monitor (among other things):
    • Occurrences of access actions that failed due to inadequate authorization (if a particular user is getting a pretty good number of these, that could indicate attempts to probe for "holes" in your organization's data protection measures).
    • Occurrences of a user changing his or her SQL ID (there are times when this is a legitimate action, and times when it is not).
    • Occurrences of tables being altered (depending on the ALTER action, this could be an attempt to circumvent a data protection measure).
    • Occurrences of a particular table being accessed in read or data-change mode (is a table holding sensitive data values being accessed at odd hours?).
    • Utility execution (could someone be trying to use a Db2 utility as a "back door" means of data access?).
    • Incidences of privileges being granted or revoked (inappropriate granting of Db2 privileges can be a warning sign).
    • Use of system administration "super-user" privileges: install SYSADM, install SYSOPR, SYSOPR, SYSCTRL, or SYSADM (to quote a line from several "Spider-Man" movies: "With great power comes great responsibility").
    • Use of database and security administration "super-user" privileges: DBMAINT, DBCTRL, DBADM, PACKADM, SQLADM, system DBADM, DATAACCESS, ACCESSCTRL, or SECADM (see the "Spider-Man" quote above).
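To give a feel for the mechanics, here is a hedged sketch of defining and activating a simple audit policy (the policy, schema and table names are invented, and only a few of the SYSIBM.SYSAUDITPOLICIES columns are shown - consult the Db2 documentation for the full column list):

```sql
-- Audit all access (reads and data changes) to a table holding
-- sensitive data values; EXECUTE = 'A' means audit all access,
-- 'C' would mean audit data-change access only.
INSERT INTO SYSIBM.SYSAUDITPOLICIES
  (AUDITPOLICYNAME, OBJECTSCHEMA, OBJECTNAME, OBJECTTYPE, EXECUTE)
VALUES
  ('SENSTBL_POLICY', 'PRODSCHEMA', 'CARD_INFO', 'T', 'A');

-- The policy takes effect when activated via an audit trace:
-- -START TRACE(AUDIT) DEST(GTF) AUDTPLCY(SENSTBL_POLICY)
```

The trace records generated for the policy can then be processed by your monitoring tooling of choice.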

Db2 12 function level 509 introduced an important audit policy enhancement: tamper-proof audit policies. With that enhancement, an audit policy can be set up so that it can be changed only with the permitting action of a person outside the Db2 team (specifically, a RACF administrator).

You can read all about developing, activating and using audit policies in the Db2 for z/OS online documentation.

If you're interested in a data access monitoring solution that can span all of your enterprise's data stores, both on-prem and cloud-based, check out IBM Security Guardium Data Protection.

  • Application architecture - Can application architecture enhance Db2 data protection? Absolutely. Consider, for example, the security advantage of static versus dynamic SQL statements. If an application process will access data in table T1 via a dynamic query, the authorization ID of the application process will require the SELECT privilege on T1. If, on the other hand, the application process will access data in T1 by way of a static query, the application's ID will not need any table access privileges; instead, the ID will need only the EXECUTE privilege on the Db2 package associated with the static query. Reducing the granting of table-access privileges to application and/or user IDs can strengthen Db2 data security.
OK. But what about applications of the client-server variety, particularly those that access Db2 for z/OS data from network-connected Linux or UNIX or Windows servers? For such applications, use of client-issued static SQL statements is often either not possible or, if possible, not favored by client-side programmers (for example, a Java program can issue static SQL statements in SQLJ form, but in my experience Java programmers overwhelmingly prefer JDBC to SQLJ, and JDBC means dynamic SQL on the Db2 side). In those cases, two convenient ways to utilize static SQL are 1) Db2 stored procedures and 2) Db2 REST services.

Lots of application developers who prefer the JDBC and ODBC forms of SQL (to name two very popular forms of non-DBMS-specific SQL) are plenty happy with a stored procedure approach, as stored procedures are widely used with relational database management systems such as Db2. A programmer can use (for example) JDBC statements to call Db2 stored procedures and to retrieve rows from associated query result sets (when a stored procedure declares and opens a cursor). The stored procedure calls will be dynamic on the Db2 side, but the "table-touching" SQL statements issued by the stored procedures will be static, and that means that the application's ID will require only the EXECUTE privilege on the called stored procedures - not table access privileges.
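A minimal sketch of the idea (all names are invented): the table-touching SQL lives in a native SQL procedure, and the application's ID gets only the EXECUTE privilege on that procedure - no privilege on the underlying table.

```sql
-- Static, table-touching SQL is packaged in a native SQL procedure:
CREATE PROCEDURE PRODSCHEMA.GET_ACCT_BALANCE
  (IN  P_ACCT_ID  INTEGER,
   OUT P_BALANCE  DECIMAL(15,2))
  LANGUAGE SQL
BEGIN
  SELECT BALANCE INTO P_BALANCE
    FROM PRODSCHEMA.ACCOUNTS
   WHERE ACCT_ID = P_ACCT_ID;
END

-- The client application's ID needs only this privilege:
GRANT EXECUTE ON PROCEDURE PRODSCHEMA.GET_ACCT_BALANCE TO APPID1;
```

A JDBC program would invoke this with an ordinary CALL statement; the CALL is dynamic on the Db2 side, but the SELECT inside the procedure is static.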

Db2's built-in REST interface is another way to make static SQL easy to use from a client-side programmer's perspective. By way of this interface, which is an extension of the Db2 distributed data facility, a static SQL statement can be invoked via a REST request. The static SQL statement associated with a Db2 REST service can be a SELECT, an INSERT, an UPDATE, a DELETE, a TRUNCATE or a CALL (of a stored procedure).

Here's another security advantage of client-side programs invoking Db2 server-side static SQL statements, whether through the use of stored procedures or the Db2 REST interface (which can itself be used, as noted, to invoke stored procedures): when this approach is used, client-side programmers do not have to know anything about table or column names - that knowledge is needed only by the people who code the server-side static SQL statements. How does this shielding of database schema information enhance data security? Well, the fewer the people who know stuff like table and column names, the less likely it is that a database will be hacked by bad guys.

[Note: when a stored procedure is to be invoked through the Db2 distributed data facility, either through a SQL call or a REST request, that stored procedure will get up to 60% zIIP offload when executed IF the stored procedure is written in SQL PL (i.e., if it is a so-called native SQL procedure). A stored procedure written in a language other than SQL PL will get little to no zIIP offload when called through DDF.]

  • Test data management - Let's say you have a production Db2 database in which some sensitive data values are stored (e.g., credit card numbers). Perhaps you have taken a number of steps to protect those sensitive data values. Great. But now an application team wants tables in their Db2 development environment populated with data from the production system. You could copy data over from the production to the development system, but will the sensitive data values be protected in the development environment as they are in production? Even if the data protection measures in the development environment are as strong as those in place for the production Db2 system, creating another copy of data that includes sensitive data values will still involve some data-security risk because the data copy increases what a security auditor might call the "threat area" - do you want to accept that risk?
Often, the best approach to use in this situation is to mask or otherwise obscure the sensitive data values before (or as part of) copying production data to a development or test system. How would you accomplish that? You could do it on your own, but that can be a time-consuming effort and the "roll-your-own" data masking could impact the CPU and elapsed times of a production-to-test data copy operation. An easier (and often better-performing) way to get this done would be to use a software tool designed for the purpose. Two options in this space that are available from IBM are IBM InfoSphere Optim Test Data Management Solution for z/OS and IBM Db2 Cloning Tool for z/OS (the former is useful for copying a referentially complete subset of data rows from one Db2 system to another, while the latter is more appropriate for copying entire table spaces and indexes - or even an entire subsystem's data - from one Db2 system to another). Both of those tools have data masking capabilities, to prevent sensitive data values from being copied "as-is" from a production environment to a test or development system.

  • RACF (or equivalent) management of Db2-internal security - It is very common for RACF (or an equivalent z/OS security management subsystem) to be used for external Db2 security purposes - that is, to control which application processes and/or users can connect to a Db2 subsystem, and how. Once an application process or a user has successfully connected to a Db2 subsystem, what happens next is a matter of Db2-internal security: does the ID of the application or user have the Db2 privileges needed to (for example) read data in a table, or update data in a table, or create an index or bind a package? In my experience, Db2-internal security is most often managed within Db2 by the Db2 administration team, who use the SQL statements GRANT and REVOKE to provide privileges for, or remove privileges from, various authorization IDs. It is possible to use RACF (or equivalent) to manage Db2-internal security as well as Db2-external security, and a growing number of organizations are doing just that.

I posted an entry to this blog a couple of years ago with a good bit of information about using RACF to manage Db2-internal security. I won't repeat that content here; rather, I'll provide some thoughts and observations on this topic:

    • Why do organizations do this? Quite often (in my experience) it's because someone - perhaps a security auditor - told them that it has to be done. Why might that pronouncement be made? Well, in the minds of many security people, it's a good thing for a single group of people to manage all aspects of security for a database management system. Because RACF can be used to manage both Db2-external and Db2-internal security, while Db2's security features apply mainly to internal security, if one team is going to manage all aspects of Db2 for z/OS security then it's going to be the RACF team.
    • Db2 11 for z/OS eliminated what were just about the last two hassles that were formerly associated with RACF management of Db2-internal security. Prior to Db2 11, auto-rebinds could fail with authorization errors when RACF was used to manage Db2-internal security. Why? Because when an auto-rebind occurs you generally want Db2 to do that based on the privileges held by the ID of the owner of the package. It used to be that when RACF managed Db2-internal security, the authorization check for an auto-rebind looked at the privileges held by the ID of the application process that prompted the auto-rebind by requesting execution of a package that had been marked invalid by Db2, and that ID rarely has the privileges needed for a successful auto-rebind. Db2 11 fixed that problem by enabling RACF to check the ID of a package owner for auto-rebind authorization. The other nagging problem fixed by Db2 11 concerned caches of authorization information that Db2 maintains in memory. Information in those caches was formerly not updated to reflect security changes effected through RACF, the result being a frustrating lag between some RACF-side changes and enforcement of same in Db2. Db2 11 fixed that problem by having Db2 listen for ENF signals (referring to the z/OS event notification facility) sent by RACF when authorization changes are made.
    • Organizations that have gone to RACF management of Db2-internal security are pretty happy with the arrangement, and that includes the Db2 for z/OS DBAs. Yes, there is a good bit of set-up work involved in making this transition, and that can seem more challenging than it really is because Db2 DBAs and RACF administrators speak different languages in a technical sense, but once things are set up and the transition has been completed, people find that it really works as advertised. Ask a Db2 DBA at a site that has gone to RACF management of Db2-internal security if he or she is OK with the change, and you'll likely get a thumbs-up. I haven't found many (any, actually) DBAs in these organizations that pine for the days when they had to issue GRANTs and REVOKEs to manage Db2-internal security. Letting the RACF team handle Db2-internal security lets the DBAs focus on database administration tasks (e.g., performance tuning, application enablement) that they generally find to be more satisfying.
OK, that's what I've got. Consider these areas, and those I covered in part 1 of this two-part entry, and I think you'll be able to comprehensively evaluate the Db2 security set-up you have at your site. I hope that this information will be helpful for you.

Thursday, February 24, 2022

Thoroughly Assessing Data Security in a Db2 for z/OS Environment - Part 1

I regularly get questions from Db2 for z/OS people that pertain to data security. Most of the time, these questions have a pretty narrow focus - a DBA, for example, wants to know more about Db2 roles and trusted contexts, or about SECADM authority, or about "at-rest" encryption of Db2 data on disk. Recently, I had a meeting with some people from a client's mainframe IT staff, and they wanted to know what a comprehensive Db2 for z/OS data security review would look like. For me, that was a refreshingly wide-scope question. What areas would one want to examine, if one wanted to thoroughly assess the data security posture of a Db2 for z/OS system? In this part one of a two-part blog entry I will give you my take on the matter, starting with four areas of Db2 data protection: privilege management, client authentication, data encryption, and column masks and row permissions. In the part two entry, which I hope to post in about two weeks, I will cover four other areas of Db2 data protection: auditing, application architecture, test data management and RACF (or equivalent) management of Db2-internal security.

Onward, then, to the first four areas of Db2 data protection I would recommend considering as part of a comprehensive Db2 security assessment:

  • Privilege management - This is about the Db2 privileges that have been granted to user, group and application IDs. With regard to user privileges, the best-practice approach is to grant to a given individual only those privileges that minimally enable the individual to do his or her job. One area where there has been a lot of cracking down in recent years concerns the granting of SYSADM authority. That's basically super-user status, and years ago it was common for organizations to give the SYSADM authority level to quite a few people on the Db2 support team. Why did that practice become problematic? Mainly because someone with SYSADM authority can look at (and even change) the data in any table. You could say, "It's OK - no one on my Db2 team is a bad actor," but that argument is not likely to sway security auditors these days. For some organizations, the solution to over-granting of SYSADM is to change that authority, for many or even most of the people on the Db2 team, to DBADM WITHOUT DATAACCESS ON SYSTEM. That move can spark protest from someone who loses SYSADM authority, but in fact many DBAs can do the large majority of things they need to do with system DBADM authority. If access to data in a specific table is required for a DBA with system DBADM WITHOUT DATAACCESS authority, the SELECT privilege on the table can be granted and then revoked when the task requiring access to the table's data has been completed.
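In GRANT terms, the move described above might look like the following sketch (the IDs and table name are invented; see the GRANT system-privileges syntax in the Db2 documentation for the related ACCESSCTRL clause):

```sql
-- System DBADM without the ability to access data in tables:
GRANT DBADM WITHOUT DATAACCESS ON SYSTEM TO DBA01;

-- Temporary table access for a specific task...
GRANT SELECT ON PRODSCHEMA.CARD_INFO TO DBA01;

-- ...revoked when the task has been completed:
REVOKE SELECT ON PRODSCHEMA.CARD_INFO FROM DBA01;
```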

Here's another way in which super-user authority has been reined in lately: organizations can set the value of the Db2 ZPARM parameter SEPARATE_SECURITY to YES. What does that do? It removes from SYSADM authority the ability to create and manage security objects (e.g., roles, trusted contexts, row permissions and column masks), and the ability to grant privileges to others (unless the ID with SYSADM authority holds the privilege in question WITH GRANT OPTION, or owns the object on which a privilege is being granted). How do those things get done, if a SYSADM can't do them? They are done by an ID with SECADM authority (more information in this area can be found in an entry I posted to this blog last year).

What about the group IDs that I mentioned above? Those have been around for a long time. A group ID is so called because it refers to a RACF (or equivalent z/OS security manager) group to which individual IDs can be connected. When the Db2-supplied sample connection and sign-on exits are used by an organization (very widely done), the RACF group IDs to which your primary authorization ID is connected become your secondary authorization IDs in Db2, and the ability to execute most Db2 SQL statements and commands depends on the privileges held by your primary authorization ID and by any of your secondary authorization IDs. This can make Db2 privilege management much simpler, especially if a set of privileges tailor-made to enable execution of a certain set of Db2 actions is fairly complex - you just grant that set of privileges to a RACF group ID, and then connect to that group ID the IDs of individuals who need to perform the associated set of Db2 actions.

Some security administrators and auditors worry - understandably - about the privileges granted to the ID of an application that issues dynamic SQL statements, especially when that application connects to Db2 via TCP/IP communication links and through the Db2 distributed data facility (DDF). Why the worry? Well, for a dynamic SQL statement, such as a SELECT, to execute successfully, the associated Db2 authorization ID needs to have the SELECT privilege on the target table. A DDF-using application that issues SQL statements in JDBC or ODBC form (these will be dynamic SQL statements on the Db2 side) usually connects to the Db2 system using a certain ID and an associated password. What if someone who knows that ID and password tries to use those credentials to connect to the Db2 system from a PC, and then view data in tables that the application can access? An effective defense against that scenario can be implemented using Db2 roles and trusted contexts, as described in a blog entry I posted a few years ago.
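The role/trusted context defense just mentioned might look like the following sketch (the names and the IP address are invented): the table privilege is granted only to a role, and the role is available only to connections that come from the application server's address using the application's ID - so the same ID and password presented from someone's PC would arrive without any table-access privileges.

```sql
CREATE ROLE APPSRV_ROLE;
GRANT SELECT ON PRODSCHEMA.CARD_INFO TO ROLE APPSRV_ROLE;

-- The role is in effect only for connections made with the
-- application's ID from the app server's address:
CREATE TRUSTED CONTEXT APPSRV_CTX
  BASED UPON CONNECTION USING SYSTEM AUTHID APPID1
  ATTRIBUTES (ADDRESS '10.1.2.3')
  DEFAULT ROLE APPSRV_ROLE
  ENABLE;
```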

  • Client authentication - I just now referred to applications that access Db2 systems via TCP/IP communication links and through the Db2 distributed data facility, and there are often individual users that do the same thing, perhaps using a workstation-based query and reporting tool. How are these applications and users authenticated at connection time? As noted above, this is usually done by way of a password. Typically, an organization requires a user to change his or her password on a regular basis - for example, every three months. What about the password associated with an application's ID? There was a time when it was quite common for such a password to be of the "never expire" type. That kind of password is increasingly deemed unacceptable by security auditors, who insist that the password associated with an application's ID be regularly changed, just as is done for passwords associated with user IDs. That is in fact a good policy from a security perspective, but it can lead to authentication-related connection errors when an application's password is changed. What if an application's password is changed in RACF before it is changed on the app server side, or vice versa? The strategy I've seen employed for non-disruptively changing a Db2 client-server application's password involves having two IDs for a given application. Shortly before the password for ID1 is set to expire, the application starts connecting to Db2 using ID2 (whose password will be good for the next three months or whatever). Once all instances of the application have switched over to ID2, the password for ID1 can be updated (and maybe that doesn't happen until a little before ID2's password is set to expire). As long as the application is running on at least two app servers, the switch from the one ID to the other can be accomplished with no application outage. 
What makes this work on the Db2 side is the fact that all privileges needed for the application's dynamic SQL statements to execute successfully are granted to ID1 and to ID2.

What if you'd prefer for an application or a user ID to be authenticated by some means other than a password? Are there alternatives? Yes. One alternative is to use RACF PassTickets - you can find more information about RACF PassTickets in the online z/OS documentation. Another option is to use certificate-based authentication. Often, when one thinks about certificates in a Db2 for z/OS context, it is in relation to SSL encryption for communication between Db2 and a network-connected client application (more on encryption to come - see below); however, in my experience Db2 SSL encryption typically involves use of a server certificate versus client certificates (the host certificate is transmitted to the client system and is presented at connection time by the client application - the host recognizes its own certificate and the "SSL handshake" can proceed to successful completion). That said, it is also possible for a client system to present its own certificate as a means of authentication when requesting a connection to a Db2 system. A good source of information on certificate-based authentication for a Db2 client is a document titled, "Db2 for z/OS: Configuring TLS/SSL for Secure Client/Server Communications," which can be downloaded from the IBM redbooks Web site. Refer to the information under the heading, "Client access to Db2 using TLS/SSL client authentication," on page 61 of the document.

One other thing regarding client authentication. There is a parameter in the Db2 DSNZPARM module called TCPALVER. The default value for that parameter is NO, and that is almost certainly the value you want. If TCPALVER is set to YES in your Db2 environment, it means that Db2 assumes that a process wanting to connect to the Db2 system via TCP/IP is already verified. That being the case, the Db2 system will accept a TCP/IP client connection request that provides an ID but no authentication credential - no password or PassTicket or client certificate is required. Now, if you see that TCPALVER is set to YES for a Db2 subsystem, don't panic - almost certainly, RACF (or equivalent) will block a connection request that lacks an authentication credential; still, in the interest of having "belt and suspenders" security safeguards (a good idea), you'll probably want to change the TCPALVER value from YES to NO in the very near future. Before making that change, consider that a setting of TCPALVER=YES might have been put in place a long time ago, when the only clients connecting to the Db2 system via TCP/IP were other Db2 for z/OS systems. When that kind of communication was first happening (again, a long time ago), a requester Db2 for z/OS system might not have been sending a password when requesting a connection to a server Db2 for z/OS system, the thinking being that user authentication had already happened on the requester Db2 for z/OS side. If you have Db2 for z/OS systems communicating with other Db2 for z/OS systems using DRDA, without passwords being sent, those connection requests will fail after you change the TCPALVER value from YES to NO. To avoid that problem, update the Db2 communications database on the Db2 for z/OS requester side to have a password sent with a connection request to a DRDA server, and THEN change the TCPALVER setting from YES to NO.
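A hedged sketch of that communications database change on the requester Db2 side follows (the LINKNAME value and the IDs are invented; SECURITY_OUT = 'P' tells the requester to send a password with outbound connection requests, with the ID and password coming from an outbound translation row in SYSIBM.USERNAMES):

```sql
-- Send a user ID and password on outbound DRDA connection requests:
UPDATE SYSIBM.IPNAMES
   SET SECURITY_OUT = 'P', USERNAMES = 'O'
 WHERE LINKNAME = 'DB2BLINK';

-- Outbound translation row supplying the ID and password to be sent:
INSERT INTO SYSIBM.USERNAMES (TYPE, AUTHID, LINKNAME, NEWAUTHID, PASSWORD)
VALUES ('O', 'LOCALID', 'DB2BLINK', 'REMOTEID', 'REMOTEPW');
```

With rows like those in place on the requester side, the TCPALVER setting on the server side can safely be changed from YES to NO.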
  • Data encryption - I posted an entry to this blog in 2020 about data encryption in a Db2 for z/OS context, and I won't repeat that content here. Instead, I'll add a few extra nuggets of information:
    • APAR PH08188 (April of 2019) made it possible to configure a Db2 for z/OS system so that its only SQL listener port is a "secure" port (i.e., one that requires a requester to use SSL encryption). When a Db2 system is set up that way, it is not possible for a client application to establish a non-SSL connection to the Db2 server.
    • Db2's leveraging of the data set encryption feature of z/OS for encryption of Db2 table space and index data "at rest" (i.e., on disk) involves associating an encryption key label with a data set and RACF-permitting use of that key label. This has caused some people to be concerned about the need to provide RACF permission for various encryption key labels to various application and user IDs. That concern is unfounded. When a user or an application issues a SQL statement that targets a Db2 table, and table space and/or index data sets on disk are accessed as a result, from the z/OS perspective it is not the user or application accessing the data sets - it's Db2 accessing the data sets (and the same is true for the "online" IBM Db2 utilities, such as LOAD and COPY and REORG - they access database objects through Db2). That being the case, only the IDs of the Db2 database services and system services address spaces need to be RACF-permitted to use the key labels associated with encrypted Db2 data sets (if a "standalone" Db2 utility, such as DSN1PRNT, is to be run for an encrypted database object, the ID of that utility job will need RACF permission for the object's key label, as the standalone utilities operate outside of Db2).
    • Plenty of people get data encryption and data masking mixed up. They are two different things (see below).
  • Column masks and row permissions - This is another area I covered pretty thoroughly in a previous blog post, and I won't repeat that content in this entry. What I will do is try to clear up some misunderstandings I've encountered over the years since Db2 introduced column mask and row permission functionality:
    • Data masking and data encryption really are two different things. One difference is that encryption is reversible (if you have access to the encryption key) while a data masking transformation can be irreversible (if, for example, a column mask changes a credit card number to XXXXXXXXXXX1234, there is no way for a user or a program to reverse those X's back to their pre-masked values).
    • A Db2 column mask changes no values in a table; instead, the mask transforms the values in a column before they are returned to a user or a program; so, the actual unmasked values are in the table, but a column mask prevents a user (or a set of users, if it checks for a certain group ID) from being able to see the unmasked values. The masking is accomplished by a SQL CASE expression that is automatically added to a query when the ID (or group ID) of a query-issuing process matches one specified in the CREATE MASK statement.
    • One of the nice things about a column mask is that it doesn't change values in a column of a table. That means a column mask will not affect a query's predicates (including join predicates) - those predicates will be evaluated using the unmasked values in the referenced column. The mask is applied (as previously noted) when the column in question appears in a query's select-list. If the mask changed values in a column, it could really throw query results out of whack in a bad way.
    • Column masks and row permissions really are a very robust way to prevent access to certain data values (a row permission prevents a process with a certain ID or group ID from being able to access particular rows in a table, through addition of a row-filtering predicate to queries issued by an ID that matches one specified in the CREATE PERMISSION statement). Here's what I mean by that: the CASE expression associated with a column mask, and the predicate associated with a row permission, will be automatically added to ANY query - static OR dynamic - issued by ANY ID specified in the CREATE MASK or the CREATE PERMISSION statement. Does your ID (which I'll call SMITH) have SYSADM authority? Doesn't matter - if I have a column mask or a row permission that states that SMITH (or, maybe, any ID other than, for example, XYZ) cannot see certain rows in a table, or unmasked values in a certain column, you're not going to be able to access those rows or those unmasked column values. The Db2 privileges held by your ID are irrelevant.
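To make the column mask mechanics concrete, here is a sketch of a mask definition (the table, column and group names are illustrative assumptions):

```sql
-- Return the real card number only to members of group PAYTEAM;
-- everyone else sees all but the last four digits masked.
CREATE MASK CARDMASK ON MYSCHEMA.ACCOUNTS
  FOR COLUMN CARD_NUM RETURN
    CASE WHEN VERIFY_GROUP_FOR_USER(SESSION_USER, 'PAYTEAM') = 1
         THEN CARD_NUM
         ELSE 'XXXXXXXXXXXX' CONCAT SUBSTR(CARD_NUM, 13, 4)
    END
  ENABLE;

-- The mask has no effect until column access control is activated:
ALTER TABLE MYSCHEMA.ACCOUNTS ACTIVATE COLUMN ACCESS CONTROL;
```

Note that the CASE expression is applied by Db2 to the query's select-list output - the values stored in the CARD_NUM column are unchanged.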

And that's a wrap for this part 1 blog entry. Check back in a couple of weeks for part 2, which will cover auditing, application architecture, test data management and RACF (or equivalent) management of Db2-internal security.

Thursday, January 20, 2022

Db2 for z/OS: the REST Interface and IN-list Predicates

I've written several blog entries on the REST interface to Db2 for z/OS, including one posted early last year. How does it work? Pretty simple: you first code a SQL statement that you want to make REST-invokable; then, you create a REST service associated with that SQL statement, either by using Db2ServiceManager (a REST service provided by Db2 for creating REST services from SQL statements) or the BIND SERVICE command (which can be issued from a batch job). After that, a client application program can invoke the service by way of a REST request.

Quite often, a REST request that invokes a Db2 SQL statement will have one or more input values associated with it. These inputs provide substitution values for parameter markers or host variables in the REST-enabled statement. For example, suppose a REST service is created from the following SELECT statement to return to a client program some information about an account holder:
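A representative query of that kind (the table and column names are illustrative assumptions; the :ID host variable name matches the JSON input shown below) is:

```sql
SELECT FIRST_NAME, LAST_NAME, BALANCE
  FROM ACCOUNTS
 WHERE ID = :ID
```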


The host variable in the query's WHERE-clause predicate needs a substitution value at execution time, and the service-invoking client provides that input value in a JSON document that accompanies the REST request:

Body: { "ID": 123456789 }

All well and good, but what about a query that has a predicate that would have a varying number of substitution values from one execution to another? An IN-list predicate is often of this type. Consider the following query:
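Based on the discussion that follows, this query referenced the NAME column of SYSIBM.SYSTABLES via an IN-list predicate with two parameter markers; a sketch of such a query (the select-list here is an illustrative assumption) is:

```sql
SELECT CREATOR, NAME, DBNAME, TSNAME
  FROM SYSIBM.SYSTABLES
 WHERE NAME IN (?, ?)
```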


You could REST-enable that query, and a client program might invoke the REST service with this input JSON document (note that when you use positional parameter marker syntax in your query, as seen in the IN-list predicate shown above, REST will assign JSON parameter names as Px, where "x" is the parameter position - 1, 2, 3, etc.):

"P1" : "CUSTOMER",
"P2" : "DEPT"

Now, what if you want to run that query again, but this time you want to return rows that have one of three different values in the NAME column of SYSIBM.SYSTABLES (e.g., 'CUSTOMER', 'DEPT' or 'EMP'), versus just looking for matches for two NAME values ('CUSTOMER' or 'DEPT', as shown above)? Oops. You can't send in three input values for a query that's coded with only two parameter markers. Here's the main issue: a Db2 SQL statement that is REST-enabled is not a statement that is dynamically constructed at invocation time. It is, rather, a pre-coded, server-side statement - what a Db2 DBA would call a static SQL statement (that being the case, what's really done via Db2's REST interface is REST enablement of a Db2 package, which you can think of as the compiled and executable form of a static SQL statement).

How, then, do you deal with the possibility that a REST-enabled query with an IN-list predicate might be invoked at different times with varying numbers of IN-list input values, knowing that the pre-coded query has a fixed number of parameter markers in its IN-list predicate?

The best way to deal with this situation might be to code the query (the Db2 server-side query that will be REST-enabled) with what you think will be the maximum number of IN-list values that a service-invoking program will provide. Suppose we determine that for the example query used in this blog entry, that number is six. The query to be REST-enabled, then, would look like this:

WHERE NAME IN (?, ?, ?, ?, ?, ?)

A REST invocation of that query will work just fine when six values for the IN-list predicate are provided in the request's JSON input document; but, what if the query is invoked by a request that provides just three input values for the IN-list predicate? Will the REST invocation of the query work if the JSON input document contains only three values? Nope - the number of provided input values has to equal the number of parameter markers (or host variables, as the case may be) coded in the SQL statement that is being invoked via the REST request. What, then, can be done to successfully execute the query with six parameter markers in its IN-list when the requesting program only has (for example) three IN-list values to pass along with a request for the service?

There are a couple of approaches you could take in that case. One option would be to put the three distinct IN-list values in the input JSON document sent with the request, with the last of those three values repeated to provide the required fixed total number of input values, like this:

"P1" : "CUSTOMER",
"P2" : "DEPT",
"P3" : "EMP",
"P4" : "EMP",
"P5" : "EMP",
"P6" : "EMP"

Another option (feasible if the column referenced in the IN-list predicate is defined as NOT NULL) would be to provide the distinct input values in the JSON input document and then provide JSON null values to get the number of values up to the fixed requirement, as shown below:

"P1" : "CUSTOMER",
"P2" : "DEPT",
"P3" : "EMP",
"P4" : NULL,
"P5" : NULL,
"P6" : NULL

And there you have it. Takes a little adaptability on the client side, but you can indeed REST-enable a Db2 query that has a fixed number of IN-list predicate parameter markers (or host variables), and then successfully invoke that query with varying numbers of provided distinct values for the IN-list predicate, so long as the number of provided values does not exceed the number of parameter markers (or host variables) coded for the query's IN-list predicate.

I hope that this information will be useful for you. And, I want to give a shout-out here to Tom Toomire of the IBM Db2 for z/OS development team, from whom I have learned a great deal about the REST interface to Db2. Thanks, Tom!

Wednesday, December 29, 2021

Db2 for z/OS Data Sharing: is your Lock Structure the Right Size?

Recently I've encountered some situations in which organizations running Db2 for z/OS in data sharing mode had lock structures that were not sized for maximum benefit. In this blog entry, my aim is to shed some light on lock structure sizing, and to suggest actions that you might take to assess lock structure sizing in your environment and to make appropriate adjustments.

First, as is my custom, I'll provide some background information.

One structure, two purposes

A lot of you are probably familiar with Db2 for z/OS data sharing. That is a technology, leveraging an IBM Z (i.e., mainframe) cluster configuration called a Parallel Sysplex, that allows multiple Db2 subsystems (referred to as members of the data sharing group) to share read/write access to a single instance of a database. Because the members of a Db2 data sharing group can (and typically do) run in several z/OS LPARs (logical partitions) that themselves run (usually) in several different IBM Z servers, Db2 data sharing can provide tremendous scalability (up to 32 Db2 subsystems can be members of a data sharing group) and tremendous availability (the need for planned downtime can be virtually eliminated, and the impact of unplanned outages can be greatly reduced).

One of the things that enables Db2 data sharing technology to work is what's called global locking. The concept is pretty simple: if an application process connected to member DBP1 of a 4-way (for example) Db2 data sharing group changes data on page P1 of table space TS1, a "local" X-lock (the usual kind of Db2 lock associated with a data-change action) on the page keeps other application processes connected to DBP1 from accessing data on the page until the local X-lock is released by way of the application commit that "hardens" the data-change action. All well and good and normal, but what about application processes connected to the other members of the 4-way data sharing group? How do they know that data on page P1 of table space TS1 is not to be accessed until the application process connected to member DBP1 commits its data-changing action? Here's how: the applications connected to the other members of the data sharing group know that they have to wait on a commit by the DBP1-connected application because in addition to the local X-lock on the page in question there is also a global lock on the page, and that global lock is visible to all application processes connected to other members of the data sharing group.

Where does this global X-lock on page P1 of table space TS1 go? It goes in what's called the lock structure of the data sharing group. That structure - one of several that makes Db2 data sharing work, others being the shared communications area and group buffer pools - is located in a shared-memory LPAR called a coupling facility, and the contents of the structure are visible to all members of the data sharing group because all the members are connected to the coupling facility LPAR (and, almost certainly, to at least one other CF LPAR - a Parallel Sysplex will typically have more than one coupling facility LPAR so as to preclude a single-point-of-failure situation).

Here's something kind of interesting: a global lock actually goes to two places in the lock structure (if it's an X-lock, associated with a data-change action, versus an S-lock, which is associated with a data-read request). Those two places are the two parts of the lock structure: the lock table and the lock list:

  • The lock table can be thought of as a super-fast global lock contention detector. How it works: when a global X-lock is requested on a page (or on a row, if the table space in question is defined with row-level locking), a component of the z/OS operating system for the LPAR in which the member Db2 subsystem runs takes the identifier of the resource to be locked (a page, in this example) and runs it through a hashing algorithm. The output of this hashing algorithm relates to a particular entry in the lock table - basically, the hashing algorithm says, "To see if an incompatible global lock is already held by a member Db2 on this resource, check this entry in the lock table." The lock table entry is checked, and in a few microseconds the requesting Db2 member gets its answer - the global lock it wants on the page can be acquired, or it can't (at least not right away - see the description of false contention near the end of this blog entry). This global lock contention check is also performed for S-lock requests that are associated with data-read actions.
  • The lock list is, indeed (in essence), a list of locks - specifically, of currently-held global X-locks, associated with data-change actions. What is this list for? Well, suppose that member DBP1 of a 4-way data sharing group terminates abnormally (i.e., fails - and that could be a result of the Db2 subsystem failing by itself, or terminating abnormally as a result of the associated z/OS LPAR or IBM Z server failing). It's likely that some application processes connected to DBP1 were in the midst of changing data at the time of the subsystem failure, and that means that some data pages (or maybe rows) were X-locked at the time of the failure. Those outstanding X-locks prevent access to data that is in an uncommitted state (because the associated units of work were in-flight at the time of the failure of DBP1), but that blocking of access to uncommitted data is only effective if the other members of the data sharing group are aware of the retained page (or row) X-locks (they are called "retained locks" because they will be held until the failed Db2 subsystem can be restarted to release them - restart of a failed Db2 subsystem is usually automatic and usually completes quite quickly). The other members of the data sharing group are aware of DBP1's retained X-locks thanks to the information in the lock list.
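Conceptually, the lock table lookup described above can be sketched as follows (this is an illustration of the idea, not the actual IRLM hashing algorithm):

```python
def lock_table_entry(resource_id: str, num_entries: int) -> int:
    """Map a lock-able resource (e.g., a page or row identifier) to one
    of num_entries lock table slots. Because there are far more
    lock-able resources than slots, different resources can hash to the
    same slot - the source of "false contention" discussed below."""
    return hash(resource_id) % num_entries

# Two different pages may or may not land on the same lock table entry:
e1 = lock_table_entry("TS1.PAGE1", 33_554_432)
e2 = lock_table_entry("TS9.PAGE777", 33_554_432)
```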

For the lock structure, size matters - but how?

If you're implementing a new data sharing group, 128 MB is often a good initial size for a lock structure in a production environment (assuming that coupling facility LPAR memory is sufficient). Suppose that you have an existing lock structure. Is it the right size? To answer that question, you have to consider the two aforementioned parts of the lock structure: the lock list and the lock table. If the lock list is too small, the effect will often be quite apparent: data-changing programs will fail because they can't get the global X-locks they need, owing to the fact that the lock list is full (the SQL error code in that case would be a -904, indicating "resource unavailable," and the accompanying reason code will be 00C900BF). Certainly, you'd like to make the lock list part of the lock structure larger before it fills up and programs start abending. To stay ahead of the game in this regard, you can look for instances of the Db2 (actually, IRLM) message DXR142E, which shows that the lock list is X% full, with "X" being 80, 90 or 100. Needless to say, 100% full is not good. 90% full will also likely result in at least some program failures. Suffice it to say that if this message is issued on your system, 80% is the only "in-use" value that you want to see, and if you see "80% in-use" you'll want to make the lock list bigger (you can also issue, periodically, the Db2 command -DISPLAY GROUP, or generate and review an RMF Coupling Facility Activity report - in either case, see what percentage of the list entries in the lock structure are in-use).

If you want or need to make the lock list part of a lock structure bigger, how can you do that? Really easily, if the maximum size of the lock structure (indicated by the value of the SIZE specification for the structure in the coupling facility resource management - aka CFRM - policy) is larger than the structure's currently allocated size (as seen in the output of the Db2 -DISPLAY GROUP command, or in an RMF Coupling Facility Activity Report). When that is true, a z/OS SETXCF command can be issued to dynamically increase the lock structure size, and all of the space so added will go towards enlarging the lock list part of the structure. If the lock structure's current size is the same as the maximum size for the structure, lock list enlargement will require a change in the structure's specifications in the CFRM policy, followed by a rebuild of the structure (that rebuild will typically complete in a short time, but it can impact Db2-accessing application processes, so take the action at a time when system activity is at a relatively low level). More information on lock list size adjustments can be found in an entry I posted to this blog back in 2013.
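As an illustration of the dynamic-resize case (the structure name is hypothetical - a group named DSNDB0G would have a lock structure named DSNDB0G_LOCK1 - and the SIZE value, in units of 1 KB, must not exceed the CFRM policy's SIZE for the structure):

```
SETXCF START,ALTER,STRNAME=DSNDB0G_LOCK1,SIZE=262144
```

Here, 262144 KB would take the structure to 256 MB, with the added space going to the lock list.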

How about the lock table part of the lock structure? Is yours large enough? In contrast to the lock list, the lock table can't "run out of space" and cause programs to fail. How, then, do you know if a lock table size increase would be advisable? The key here is the "false contention" rate for the data sharing group (or for a member or members of the group - both a Db2 monitor statistics report and an RMF Coupling Facility Activity report can be used to see false contention at either the group or a member level). What is "false contention?" It's global lock contention that is initially perceived but later found to be not real. How does that happen? Recall the previous reference to the lock table and the associated hashing algorithm. A lock table has a certain number of entries, determined by the size of the table and the size of the lock entries therein. The size of a lock table entry will vary according to the number of members in a data sharing group, but for my example of a 4-way group the lock entry size would be 2 bytes. If the lock structure size is 128 MB, the lock table size will be 64 MB (the lock table part of the lock structure will always have a size that is a power of 2). That 64 MB lock table will accommodate a little over 33 million two-byte lock entries. 33 million may sound like a lot, but there are probably way more than 33 million lock-able things in the data sharing group's database (think about the possibility of row-level locking for a 100 million-row table, and you're already way past 33 million lock-able things).
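The entry-count arithmetic above can be sketched as:

```python
lock_structure_mb = 128
lock_table_bytes = (lock_structure_mb // 2) * 1024 * 1024  # 64 MB lock table
entry_size = 2  # bytes per lock entry for this 4-way group (per the text)
entries = lock_table_bytes // entry_size
print(entries)  # 33554432 - "a little over 33 million" entries
```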

How do you manage global lock contention detection with a 33 million-entry lock table when there are more (probably WAY more) than 33 million lock-able things in the database? That's where the hashing algorithm comes in. That algorithm will cause several different lock-able things to hash to a single entry in the lock table, and that reality makes false contention a possibility. Suppose an application process connected to member DBP1 of the 4-way data sharing group needs a global X-lock on page P1 of table space TS1. DBP1 propagates that request to the lock table, and receives in response an indication that contention is detected. Is it real contention? The z/OS LPARs in the Sysplex can communicate with each other to make that determination, and it may be found that the initial indication of contention was in fact false, meaning that incompatible lock requests (i.e., an X and an X, or an X and an S) for two different resources hashed to the same entry in the lock table. When the initially indicated contention is seen to be false, the application that requested the global lock (the request for which contention was initially indicated) gets the lock and keeps on trucking. That's a good thing, but resolving false contention involves some overhead that you'd like to minimize. Here's where lock table sizing comes in. As a general rule, you like for false contention to account for less than half of the global lock contention in the data sharing group (the total rate of global lock contention and the rate of false contention can both be seen via a Db2 monitor statistics report or an RMF Coupling Facility Activity report).

What if the rate of false contention in your system is higher than you want it to be? In that case, consider doubling the size of the lock table (remember, the lock table size will always be a power of 2), or even quadrupling the lock table size, if the rate of false contention is way higher than you want and the CF LPAR holding the structure has sufficient memory to accommodate the larger lock structure size. If you double the lock table size, you'll double the number of lock entries, and that will mean that fewer lock-able things will hash to the same lock table entry, and THAT should result in a lower false contention rate, boosting the CPU efficiency of the data sharing group's operation.

Here's something you need to keep in mind with regard to increasing the size of a Db2 data sharing group's lock table: this cannot be done without a rebuild of the associated lock structure. As previously mentioned, a lock structure's size can be dynamically increased via a SETXCF command (assuming the structure is not already at its maximum size, per the CFRM policy), but all of that space dynamically added will go towards increasing lock list space, not lock table space. A bigger lock table will require a change in the lock structure's specifications in the CFRM policy, followed by a rebuild of the structure to put the change in effect. If the lock structure's current size is 128 MB, you might change its INITSIZE in the CFRM policy (the initial structure size) to 256 MB, and rebuild the structure to take the lock table size from 64 MB to 128 MB. A lock structure size that's a power of 2 is often reasonable - it will cause lock structure space to be divided 50-50 between the lock table and the lock list. Note that if the lock structure INITSIZE is not a power of two, as a general rule the lock table size will be the power of 2 that will result in the space division between lock table and lock list being as equal as possible.

And that wraps up this overview of Db2 lock structure sizing. I hope that the information will be useful for you.

Monday, November 29, 2021

Db2 for z/OS: -DISPLAY DDF DETAIL output, explained

The output of the Db2 for z/OS command -DISPLAY DDF DETAIL provides a lot of very useful information in a compact form. That said, I've found that quite a number of Db2 people are not so clear on the meaning of various fields in -DISPLAY DDF DETAIL output. In this blog entry I will aim to shed light on key information items provided via -DISPLAY DDF DETAIL, and explain how that information can be put to use.

To begin with, a sample of -DISPLAY DDF DETAIL output is shown below (I have anonymized identifiers such as IP addresses and subsystem names). The fields that I have found to be the most useful are subsequently explained.

[Sample -DISPLAY DDF DETAIL command output, with IP addresses and location names anonymized; among the DDF options shown at the bottom of the output: DSNL106I SESSIDLE = 001440]

OK, starting from the top and working down through the sample command output, following are my explanations of the key fields:

  • TCPPORT, SECPORT and RESPORT - The first of these fields is the Db2 subsystem's "standard" SQL listener port. A client application (a DRDA requester or a REST client) that will interact with Db2 over a TCP/IP link will specify this port in a connection request (or in a REST request) if there is not a need for SSL encryption for the connection (i.e., encryption of data "on the wire"). If SSL encryption is required, the client specifies the Db2 subsystem's secure port. In the command output shown above, the value of 0 for SECPORT indicates that a secure SQL port has not been defined for the subsystem. That doesn't mean that SSL encryption cannot be used for a connection to the subsystem - that can still happen even when the standard SQL listener port is specified for the connection request. The difference is this: when the secure SQL listener port is specified at connection time then Db2 requires the use of SSL encryption, and rejects the connection request when SSL encryption is not used. Note that with the fix for APAR PH08188 applied (it came out in April of 2019), a Db2 subsystem's one and only SQL listener port can be its secure port - if that is made so then any application that would access the Db2 subsystem via DDF (whether a DRDA requester or a REST client) cannot do so unless SSL encryption is used for the application's connection(s) to the subsystem.
RESPORT is short for re-sync port. What is the purpose of this port? Well, in a DDF two-phase commit situation (i.e., when an application that is DDF-connected to a Db2 subsystem changes data at that location and at another Db2 location in the same unit of work), a failure (application-side or data server-side) could leave one or more associated DDF transactions in an in-doubt state. For those transactions to be resolved the application needs to tell the Db2 subsystem to either commit or abort the transactions, and when the application communicates to the Db2 subsystem the action it is to take for these transactions (commit or abort), it uses the subsystem's re-sync port. This is particularly important in a Db2 data sharing environment, in which all member subsystems have the same SQL listener port (and the same secure port, if applicable). When a DDF transaction is in an in-doubt state in that kind of environment, the associated application needs to communicate the commit or abort action to the particular member subsystem that was servicing the transaction - the member's unique re-sync port makes that possible.

  • CONDBAT and MDBAT - These refer to two parameters, CONDBAT and MAXDBAT, in the Db2 subsystem's ZPARM module (think of the ZPARMs as Db2 for z/OS configuration parameters). CONDBAT specifies the maximum number of connections that DDF-using applications can have with the Db2 subsystem. MAXDBAT is the maximum number of those connections that can be active at one time. It is typical for a subsystem's CONDBAT value to be much higher than its MAXDBAT value. Why is this so? Because at any given time, it's likely that most connections to the Db2 subsystem are not active, and an inactive connection doesn't need a DBAT (DBAT, by the way, is short for database access thread - that's what we call a DDF thread). Having a bunch of inactive connections is no big deal - the virtual storage "footprint" of a Db2 inactive connection is very small, and the CPU cost of switching an inactive connection to active (when a new transaction associated with the connection comes along) and back again (when said transaction has completed) is very small, as well. Generally speaking, you want the CONDBAT value to be high enough to avoid reaching the connection limit, because once that limit has been reached then the next attempt by an application to connect to the subsystem via DDF will be rejected by Db2. A statistics long report generated by your Db2 monitor (depending on the monitor product, that may be called a statistics detail report) will show you if the CONDBAT limit has been reached for a subsystem.
The MAXDBAT value, as stated previously, is the limit on the number of DDF connections that can be concurrently active for the subsystem. How so? Because by definition an active connection is one that is paired with a DBAT (a transaction that comes in by way of a connection needs a DBAT in order to be processed). It's long been thought that you want the MAXDBAT value for a subsystem to be large enough so as not to be reached, and that's still the case for most Db2 systems that I've seen, but there could be a situation in which you WANT the MAXDBAT value to be reached. Suppose, for example, that there are occasional huge surges of transactions that come in from DDF-using applications. It's possible that letting all of those transactions into the system at once would overwhelm the z/OS LPAR's processing capacity, leading to big-time performance problems. In that case, a MAXDBAT value that would be hit would cause some DDF transactions to queue up waiting for a DBAT to come free, but by keeping the system's processing capacity from being overwhelmed, the MAXDBAT limit would preserve a high level of transaction throughput, so a DDF transaction waiting for a DBAT to come free shouldn't have to wait long.

Another MAXDBAT-related matter to consider: high-performance DBATs. A high-performance DBAT, once instantiated, stays dedicated to the instantiating connection until it has been used to process 200 transactions. That means that a larger number of high-performance DBATs leads to fewer DBATs in the DBAT pool, all other things being equal, and THAT means you should almost certainly increase a Db2 subsystem's MAXDBAT value prior to implementing high-performance DBAT functionality.

  • ADBAT - ADBAT is short for "active DBATs," and it indicates the total number of DBATs that exist at the time of the issuance of the -DISPLAY DDF DETAIL command. It indicates the total number of DBATs because ALL DBATs - whether in the DBAT pool or currently paired with connections - are "active."
  • QUEDBAT - This field shows the number of times that the subsystem's MAXDBAT limit has been reached since DDF was last started (which was probably when the Db2 subsystem was last started). See the information about MAXDBAT, above, to get an idea as to how one should interpret a non-zero value in this field.
  • DSCDBAT - This is short for "disconnected DBATs," and it indicates the number of DBATs in the pool. DBATs in the DBAT pool are active, as previously noted, but they are in a disconnected state - a DBAT in the pool is there because it is not tied to a particular connection. When a connection needs a DBAT (for the servicing of a transaction associated with the connection), it gets one from the DBAT pool.
  • INACONN - This is short for "inactive connections." As previously mentioned, it is very common to see a large number of DDF connections in the inactive state - it just means that there are no currently in-flight transactions associated with the connections.
  • IUDBAT - This is short for "in-use" DBATs. It's a relatively new field in the output of -DISPLAY DDF DETAIL, added via the fix for APAR PH30222, which came out about a year ago. In-use DBATs are those that are currently being used to process transactions. A DBAT is also considered to be in-use when it is a high-performance DBAT.
  • PKGREL - This is a DDF configuration setting, and it can be thought of as the high-performance DBAT "on/off switch." When PKGREL is set to COMMIT, as in the example command output shown above, high-performance DBAT functionality is "off" for the Db2 subsystem. When PKGREL is set to BNDOPT, high-performance DBAT functionality is "on" in the sense that it is enabled - for a DBAT to actually become the high-performance type, it has to be used to execute at least one package bound with RELEASE(DEALLOCATE). There is a third PKGREL specification, BNDPOOL, that in my experience is not commonly used. It can be a reasonable choice when an application that uses high-performance DBATs does not stay connected to the Db2 subsystem for very long (typically, a DDF-using application connects to a Db2 for z/OS server and stays connected for a long time).
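If you want to track these counters over time, they can be scraped from the command output. Here is a minimal Python sketch; the sample fragment below is made up and only approximates the DSNL-prefixed messages (exact message numbers and layout vary by Db2 release):

```python
import re

# Made-up fragment approximating -DISPLAY DDF DETAIL output; the real
# DSNLxxxI message numbers and layout vary by Db2 release.
SAMPLE = """\
DSNL092I ADBAT= 5 QUEDBAT= 0
DSNL093I DSCDBAT= 3 INACONN= 150 IUDBAT= 2
"""

def parse_ddf_detail(text):
    """Pull NAME= value pairs out of the command output into a dict of ints."""
    return {m.group(1): int(m.group(2))
            for m in re.finditer(r"([A-Z]+)=\s*(\d+)", text)}

stats = parse_ddf_detail(SAMPLE)
# stats["DSCDBAT"], for example, is the number of pooled (disconnected) DBATs
```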

The PKGREL setting can be changed using the Db2 command -MODIFY DDF.
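For example, to enable high-performance DBAT functionality and later switch it back off (verify the syntax against the command reference for your Db2 release):

```
-MODIFY DDF PKGREL(BNDOPT)
-MODIFY DDF PKGREL(COMMIT)
```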


And that's that. I hope that this explanation of -DISPLAY DDF DETAIL output will be useful for you.

Friday, October 29, 2021

Update on No-Charge IBM Tools for Db2 for z/OS

Recently there has been important news regarding no-charge IBM tools for application developers and for database administrators who work with Db2 for z/OS. This news has to do with strategic replacements for the development and administration features of IBM Data Studio and IBM Data Server Manager in a Db2 for z/OS environment. I want to use this blog entry to highlight two newer offerings in the no-charge IBM Db2 for z/OS tools space.

First, some background.

The original (and still there) no-charge IBM tooling for Db2 for z/OS

I've been working with Db2 for z/OS since about 1987 (Version 1 Release 2), and as far back as I can remember there has been no-charge IBM-provided tooling for users of the DBMS. I say "tooling" instead of "tools" because I'm referring here not to separate products but to the tooling that was - and still is - provided with Db2 for z/OS itself. Probably the best known of this "comes with Db2" tooling is the set of capabilities provided by DB2I (short for "Db2 interactive"), the panel-driven feature set that runs in a TSO/ISPF environment and is accessed via a 3270 terminal emulator (or, back in the day, by way of an actual 3270 terminal). DB2I functions that have long been useful to Db2 administrators and to application developers working with Db2 include SPUFI (SQL processor using file input), an interactive means of issuing dynamic SQL statements and reviewing associated output; DCLGEN (declarations generator), for generating structure definitions (for C and PL/I programs) and record descriptions (for COBOL programs) associated with Db2 tables and views that will be referenced in program code; binding and rebinding of plans and packages; a Db2 utility invocation mechanism; and an interface for interactive issuance of Db2 commands (though not commands of the DSN variety). 

To this day I use DB2I regularly, but it was apparent some time ago that a newer generation of developers and administrators wanted new ways of working with Db2 for z/OS, and that some no-charge tooling with basic capabilities was still required in addition to the advanced-function for-fee tools available from IBM and other vendors. An initial response to this "new way, no-charge" need was IBM Data Studio, launched a little over 10 years ago.

No-charge GUI tools: IBM Data Studio, and then IBM Data Server Manager

IBM Data Studio is a downloadable, workstation-based tool built on the Eclipse platform. It provides a GUI for Db2 for z/OS that delivers a number of features useful to developers and to administrators. Of particular value to many developers is Data Studio's debug functionality for routines written in SQL Procedure Language, aka SQL PL (examples of such routines are Db2 native SQL procedures, compiled SQL scalar functions and advanced triggers). I personally found Data Studio to be very helpful for testing stored procedure calls, and for retrieving non-traditional Db2 data values such as CLOBs and XML documents (I really appreciated Data Studio's nice formatting of XML documents returned from Db2). Data Studio also evolved to be the GUI front-end for many of IBM's fee-based Db2 for z/OS tools, and for the Db2 Analytics Accelerator, and the resulting look-and-feel commonality was a productivity booster.

As helpful as Data Studio was, the tool came to be more popular with developers than with Db2 administrators. In particular, the Eclipse-based interface, familiar to many application programmers, didn't win a lot of fans among DBAs. Over time, it became increasingly clear that another no-charge GUI tool for Db2 for z/OS, purpose-built for administrators, was needed. That administrator-first tool was IBM Data Server Manager, or DSM. In addition to the admin-friendly interface, DSM had the advantage of browser-based user interaction, simplifying product installation and management.

Even with no-charge tools aimed at the developer and administrator constituencies, it still seemed that we weren't quite where we wanted to be in the Db2 for z/OS space. That changed with the advent of IBM Db2 for z/OS Developer Extension for Visual Studio Code, and IBM Db2 Administration Foundation for z/OS.

A newer no-charge tool for developers: IBM Db2 Developer Extension for VS Code

Visual Studio Code (aka VS Code) is a no-charge and very widely used IDE (integrated development environment), and it's the base for the IBM Db2 for z/OS Developer Extension (actually, one base - the Db2 Developer Extension is also available for Eclipse Theia, and other popular IDEs will likely follow). There are two ways to get the Db2 Developer Extension for VS Code:

  • From the Visual Studio Marketplace on the Web.
  • From within VS Code itself: in a VS Code session, go to the Extensions view and type in “ibm z” - the Db2 Developer Extension should be at the top of the resulting list (see the screen shot below).

The Db2 Developer Extension has a lot of features that can be helpful for people writing SQL statements and routines that will target Db2 for z/OS-managed data, including:

  • SQL statement formatting
  • SQL syntax checking (including XML syntax checking)
  • SQL coding accelerators, like code completion and code snippets
  • SQL PL routine debugging (I highlighted this one because it's really important to a lot of people - this functionality was previously only available with Data Studio)

And there's more, and more to come. I encourage you to download and use the Db2 Developer Extension, and I also recommend checking out the Db2 Developer Extension repository on GitHub - there, you can get lots more information about the Developer Extension, submit requests for enhancements, and check out enhancement requests submitted by others (many of which have already been acted upon by the IBM Db2 Developer Extension team).

A newer no-charge tool for administrators: IBM Db2 Administration Foundation for z/OS

The Admin Foundation came out just this past summer. It's a licensed IBM product (the product ID is 5698-ADF), but there is no charge associated with the license (there's an optional service and support charge). Admin Foundation runs entirely in z/OS - it has a browser-based interface that leverages the Zowe virtual desktop.

Key features of IBM Db2 Administration Foundation for z/OS include the following:

  • Object dependency information
  • Generation of DDL for an object from information in the Db2 catalog
  • SQL editor
  • Visual explain
  • Single-query tuning services (including statistics advisor)
  • Issuance of Db2 commands - including DSN commands (I highlighted this one because plenty of GUI tools let you issue SQL statements but not Db2 for z/OS commands, and Admin Foundation does DB2I one better by enabling issuance of Db2 commands of the DSN variety)

The screen shot below shows the object hierarchy information that can be displayed via Db2 Administration Foundation:

Here's the bottom line: Developer Extension for VS Code and Administration Foundation are great additions to the no-charge IBM Db2 for z/OS tools scene. Check them out, and ride the new wave.