Thursday, December 31, 2020

Db2 for z/OS: What Do You Know About Rebind Phase-In?

For many years, DBAs at a lot of Db2 for z/OS sites have been challenged in performing an important operational task: rebinding a frequently executed package. This can be tricky because a package cannot be rebound while it is in use, and if a package is executed with great frequency - in some environments, certain packages are executed hundreds of times per second during busy periods - then its use count may only rarely go to zero. When a REBIND command is issued for such a package, there are two likely outcomes, neither of which is desirable: either the REBIND command times out because the target package is continuously in-use, or the REBIND gets going thanks to a little gap that emerges in the package's in-use status. Why is that latter outcome potentially undesirable? It is undesirable because once the rebind of a package is underway, a subsequent request to execute the package must wait for the rebind to complete. Yes, a package rebind is usually completed quite quickly, but if rebind processing for a package is delayed for some reason, even if only for a few seconds, that could trigger application-side time-outs - not good.

Could you just rebind a high-use package at a less-busy time? Perhaps, but sometimes there is a pressing need to rebind a package now, not later. Besides that, in an increasingly 24X7 world you might have packages that are rarely not-in-use even outside of peak periods. Think, for example, about the packages associated with the IBM Data Server Driver (which you can think of as the successor to Db2 Connect). These packages, whose default collection is named NULLID, are used by applications (other than REST clients) that access a Db2 for z/OS system via Db2's distributed data facility (DDF). A DDF transaction rate in the thousands per second is not terribly unusual these days, and that rate can remain pretty high around the clock, making rebind of the NULLID packages a daunting prospect for some DBA teams.

The rebind challenges associated with ever-higher and ever more round-the-clock package execution frequency are made more serious by the growing need for package rebind actions. There was a time when an organization might rebind packages (and plans) only infrequently, outside of the rebinds that are recommended when you migrate a system to a new version of Db2 (if your organization does not rebind all plans and packages following migration to a new version of Db2, I recommend that you address that situation). Nowadays, we have newer reasons to rebind packages - you might change a RELEASE specification to DEALLOCATE, perhaps to enable high-performance DBAT functionality; you might rebind a package to get a performance gain after executing RUNSTATS to leverage the statistics profile enhancements introduced with Db2 12; you might rebind a package with the new (with Db2 12) CONCENTRATESTMT option, to "turn on" Db2's statement concentration feature for an application from the server side (statement concentration could formerly only be activated by a client-side action); and so on.

The IBM Db2 for z/OS development team recognized some time ago that rebind concurrency was an issue that needed to be addressed. The bind concurrency feature provided by Db2 11 for z/OS definitely helped, but more was needed. That "more" was largely delivered via the rebind phase-in capability enabled by way of Db2 12 function level 505. Although the fix (for APAR PH09191) that takes a Db2 system's code to the 505 level came out about 18 months ago, I have found that quite a few Db2 for z/OS people are not aware of what rebind phase-in is and how it works; so, I'll provide explanatory information in the remainder of this blog entry.

In a nutshell, rebind phase-in allows a package to be immediately and successfully rebound even if the package is in-use at the time the REBIND PACKAGE command is issued. Not only that, but when rebind phase-in is in effect, a package execution request that comes in following the issuance of the REBIND command will not be held up, even if the rebind of the package has not yet completed. Gotta love that.

So, how is this goodness made possible? Well, it's based on a Db2 capability, called plan management, that was introduced with Db2 9. When plan management is fully in effect (when the PLANMGMT parameter of ZPARM is set to its default value of EXTENDED), Db2 can retain up to three instances (referred to as copies) of a given package: the original copy, the previous copy, and the current copy. These package copies are retained primarily for the purpose of "taking the fear out of rebinding" (as a former colleague of mine cleverly put it) - if you rebind a package and access paths change in a way that negatively impacts performance, you can quickly and easily go back to the previous copy of the package by issuing a REBIND PACKAGE command with the SWITCH(PREVIOUS) option.

The copies of a package retained by way of plan management functioning in extended mode are differentiated by a number: copy 0 is the current instance of a package, copy 1 is the previous instance, and copy 2 is the original instance. Rebind phase-in builds on this functionality, and in fact having PLANMGMT in ZPARM set to EXTENDED is required if you want to take advantage of the rebind phase-in feature. I'll illustrate with an example: suppose that you issue a REBIND command for package ABC, and Db2 (Version 12, with function level 505 or higher activated) sees that the package is currently in use. In that case, Db2 will utilize rebind phase-in processing (if the package is not in-use at the time of the issuance of the REBIND PACKAGE command, the rebind request will be processed in the traditional way). A new instance of package ABC will be generated, even as the current instance of the package is being used, and this will be do-able because the newly-generated instance of the package will be identified as copy 4 (we jump from copy identifier 0 to 4 because - as noted above - 1 identifies the previous instance of the package and 2 identifies the original, and identifier 3 has been reserved for future use by Db2). If a request to execute package ABC comes in before generation of the new copy 4 has completed, copy 0 of the package will be allocated to the requester's Db2 thread for execution. Requests for execution of package ABC that come in after copy 4 has been generated will result in allocation of copy 4. In time (quickly if the package is bound with RELEASE(COMMIT) - a little longer if the package is bound with RELEASE(DEALLOCATE) and is allocated to a persistent thread such as a high-performance DBAT), instances of copy 0 of package ABC will be released from the threads to which they had been allocated for execution, and copy identifier 0 will then go back into the pool of numbers available for package differentiation when package ABC is again involved in rebind phase-in processing. That pool, for any given package, is the number 0 together with the numbers 4-16. When package ABC, the current copy of which is now 4, is again rebound while in-use, the new instance of the package will be identified as number 5. If package ABC is rebound in a phase-in way enough times for its current copy identifier to be 16, and it is again rebound while in-use, the new copy will be identified as 0, after which the numbers 4-16 will again be used as needed.
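
To make this a bit more concrete, here is a sketch of what the scenario above might look like in practice - the collection and package names are made up, and the catalog query is just one way to check on the package copies Db2 has retained (the current copy is described in SYSIBM.SYSPACKAGE, and retained copies are described in SYSIBM.SYSPACKCOPY):

REBIND PACKAGE(COLLA.ABC)

SELECT COLLID, NAME, COPYID
FROM SYSIBM.SYSPACKCOPY
WHERE COLLID = 'COLLA'
AND NAME = 'ABC';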

Note, by the way, that rebind phase-in processing is also used when a REBIND PACKAGE command with a SWITCH specification is issued and the target package is found to be in-use. Let's say that the current instance of package XYZ is copy 4 (meaning that copy 0 was current when package XYZ was rebound in a phase-in way). If a REBIND PACKAGE command with SWITCH(PREVIOUS) is issued for package XYZ while the package is in-use, the previous instance of the package (always designated as copy 1) will be replicated to generate the new current instance of the package, which will be identified as copy 5. New requests to execute package XYZ will result in copy 5 being allocated to the requester's thread. Copy 4 of package XYZ, the previously current copy which is now the phased-out copy, will be replicated to become the new previous instance of the package (copy 1, as always).

Also note that rebind phase-in functionality is not applicable to packages associated with SQL PL routines (native SQL procedures, compiled SQL scalar functions and advanced triggers). This is not as big a deal as it may seem, owing to the fact that there were already ways to non-disruptively generate and activate a new instance of a SQL PL routine. For example, a new instance of a native SQL procedure can be non-disruptively generated and activated via an ALTER PROCEDURE statement with an ADD VERSION specification, followed by a second ALTER PROCEDURE statement with ACTIVATE VERSION specified (referencing the version added via the ALTER PROCEDURE with ADD VERSION).
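
For illustration, here is a rough sketch of that two-step ALTER PROCEDURE sequence - the procedure, parameter, table and version names are all made up, the routine body is just a placeholder, and you would run the ALTER that contains the routine body with a statement terminator other than the semicolon if using SPUFI or a similar tool:

ALTER PROCEDURE MYSCHEMA.UPDATE_BAL
  ADD VERSION V2 (IN P_ACCT INTEGER, IN P_AMT DECIMAL(11,2))
  BEGIN
    UPDATE MYSCHEMA.ACCOUNTS
      SET BAL = BAL + P_AMT
      WHERE ACCT_ID = P_ACCT;
  END

ALTER PROCEDURE MYSCHEMA.UPDATE_BAL ACTIVATE VERSION V2;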

Rebind phase-in, enabling successful and non-disruptive rebind operations even for packages executed with great frequency, is one more way in which the IBM Db2 for z/OS development team has made life easier for Db2 DBAs. I hope that you will find this functionality to be useful at your site.

Tuesday, November 24, 2020

Db2 for z/OS EXPLAIN: What an Access Path Is, and What it Would Be

The EXPLAIN functionality of Db2 for z/OS, by which the query optimizer's access path determinations are externalized, has been around as long as I can remember, and Db2 people, by and large, are familiar with the concept. In recent weeks, however, I discovered that there is some lack of awareness of EXPLAIN enhancements delivered through more recent versions of Db2 - specifically, Db2 Version 8, Db2 10 and Db2 12. These enhancements, which provide ways to see what a statement's access path is, and what it would be, can be very helpful in certain scenarios. I will describe the newer EXPLAIN capabilities and related use cases in this blog entry.


Db2 Version 8: using EXPLAIN to get "access path is" information for cached dynamic statements

"Db2 Version 8?" you might ask. "That came out 16 years ago. Are you telling me there's something about Db2 V8 that I still might not know?" Yes, that's what I'm telling you. And it's not because you aren't smart. It's because you're a human being, and there's only so much knowledge that one can put into one's brain and be able to pull out, intact, on-demand. Db2 V8, like most Db2 for z/OS versions, was packed with a lot of Very Important New Features, such as 64-bit addressing; table-controlled range-partitioning; a type 4 JDBC driver; and the ability to change a table column's data type (within a family of data types, such as character string or numeric) with an ALTER, versus unload/drop/re-create/re-load - and that's just to name a few of the Db2 V8-introduced capabilities that were a Big Deal. With all that "wow" stuff, no surprise that some of the less-Big - but still really helpful - new things escaped some peoples' notice (or were noticed and later forgotten).

One of those not-huge-but-really-helpful Db2 V8 enhancements concerned EXPLAIN, in the context of the dynamic statement cache. Now, we got dynamic statement caching well before Db2 V8, and it was very important in making Db2 for z/OS an outstanding data server for applications - very often client-server applications, and often provided by vendors - that exclusively (or almost exclusively) issue SQL statements that are dynamically prepared by Db2. Sometimes, though, there could be a performance analysis challenge presented by the dynamic statement cache: what if you needed to know the access path for a dynamic query that had been prepared and cached? Sure, you could take the text of the query and make that input to an EXPLAIN statement, but the output of that EXPLAIN would tell you what the query's access path would be if it were generated right now. That's not what you want, in this case. What you want is the access path that was generated when the query was initially prepared and cached. Well, depending on the size of the statement cache and the frequency of the cached statement's execution (when a statement has to be removed from the cache to make room for another statement, that is done on a least-recently-used, or LRU, basis), initial preparation of the statement might have happened hours or days ago - who's to say that an access path generated for the statement NOW would be what it was THEN?

Db2 V8 addressed that scenario by providing a new EXPLAIN option that can be utilized as shown in the example below (the Db2 V8 enhancement is the STMTID clause):

EXPLAIN STMTCACHE STMTID 1287;

Now, plenty of Db2 people are familiar with the STMTCACHE option of EXPLAIN, but only when that option is used with the keyword ALL, as in, EXPLAIN STMTCACHE ALL. Executing that statement - referred to by many as "snapping the cache" - will cause Db2 to insert, into the DSN_STATEMENT_CACHE_TABLE qualified by your ID, a row for each statement in the dynamic statement cache. Snapping the cache can be a very handy way to get some very useful information about cached dynamic statements. When EXPLAIN STMTCACHE is issued together with STMTID xxxx, where "xxxx" could be an integer value, as shown in the example above, or a host variable into which a statement ID had been placed, what Db2 provides is the access path information that was generated for the statement associated with the ID (and statement ID is part of what you get in the output of EXPLAIN STMTCACHE ALL) when that statement was initially prepared and placed in the dynamic statement cache. This is so because EXPLAIN STMTCACHE STMTID does not generate EXPLAIN information - it extracts EXPLAIN information. See, when Db2 prepares a dynamic statement and caches the prepared form of the statement in the statement cache, it also stores access path information for the statement in a compressed form, and that information is what's accessed and expanded into human-usable form when EXPLAIN STMTCACHE is issued with the STMTID option.
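
Putting those pieces together, a typical access-path-investigation sequence might look something like this (the search string and the statement ID are made up - the output of the EXPLAIN STMTCACHE STMTID statement lands in your PLAN_TABLE, as with other forms of EXPLAIN):

EXPLAIN STMTCACHE ALL;

SELECT STMT_ID, STMT_TEXT
FROM DSN_STATEMENT_CACHE_TABLE
WHERE STMT_TEXT LIKE '%ORDER_SUMMARY%';

EXPLAIN STMTCACHE STMTID 1287;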

OK, challenge effectively addressed. This Db2 V8 enhancement laid the groundwork for a similar enhancement, for static SQL statements, that was delivered with Db2 10.


Db2 10: getting "what is" access path information for a package when you neglected to ask for it at BIND (or REBIND) time, and getting "what would be" access path information for a package

First, getting "what is" access path information for statements in a package: you might think, "That's easy enough - just specify EXPLAIN(YES) when you bind (or rebind) the package." Okay... but what if you didn't do that at bind or rebind time (the default value for the EXPLAIN option of BIND and REBIND PACKAGE is NO)? EXPLAIN-ing the statements in the package won't be very helpful, as that will give you "would be now" access path information for the statements, versus the "was at bind (or rebind) time" access path information you want. You're out of luck, right?

No, you're not out of luck. That compact-form access path information I mentioned above in describing EXPLAIN STMTCACHE STMTID? Db2 started making that part of a package's structure with Db2 9, and Db2 10 provided a way to extract the information from a package (for packages generated in a Db2 9 environment or beyond). Again, an example of the new (with Db2 10) EXPLAIN statement syntax - this time, the PACKAGE keyword and its associated clauses are the new part:

EXPLAIN PACKAGE COLLECTION 'COLLA' PACKAGE 'PKG1234' COPY 'CURRENT';

As with EXPLAIN STMTCACHE STMTID, execution of the statement shown above would extract, not generate, access path information for statements in package PKG1234 in collection COLLA, and what you would get is information about the access paths generated at the time the package was last bound or rebound (absent a COPY specification, such as the COPY 'CURRENT' in the example above, the EXPLAIN PACKAGE statement would give you access path information for all available copies of the package, which could be the current, previous and original copies if PLANMGMT(EXTENDED) functionality is in effect).
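
Once the EXPLAIN PACKAGE statement has been executed, the extracted information can be viewed with an ordinary query of your PLAN_TABLE - something along these lines (using the collection and package names from the example above):

SELECT QUERYNO, QBLOCKNO, PLANNO, METHOD, ACCESSTYPE, ACCESSNAME, MATCHCOLS, INDEXONLY
FROM PLAN_TABLE
WHERE COLLID = 'COLLA'
AND PROGNAME = 'PKG1234'
ORDER BY QUERYNO, QBLOCKNO, PLANNO;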

Let's turn the package scenario around. Suppose you actually want "what would be" access path information for the package's statements. In other words, you want to know what the access paths for the statements would be if they were generated now. Why would you want to know that? Well, maybe there's been a change in the Db2 environment that would prompt you to rebind a package (maybe many packages) in order to get a performance boost - maybe some new indexes have been defined on tables accessed by statements in the package, or maybe catalog statistics have been enriched so as to get better-performing access paths for some of the package's statements (and those statistics-enrichment actions may have been suggested by Db2 itself), or maybe you've migrated to a new version of Db2. You don't want to issue REBIND PACKAGE with EXPLAIN(YES), because that would actually cause the package to be rebound, and maybe you don't want to do that before seeing what the access paths for the package's statements would be as a result of the rebind. You could issue EXPLAIN for individual statements of the package, but that could be pretty laborious, especially for a package with quite a few statements. What to do?

Simple: you rebind the package with EXPLAIN(ONLY), an option introduced with Db2 10 (and available for BIND PACKAGE as well as for REBIND PACKAGE commands). When EXPLAIN(ONLY) is specified for a BIND or REBIND PACKAGE command, a new package is not generated by Db2; instead, Db2 only generates EXPLAIN information for the package's statements, and it does that as though you had issued EXPLAIN for the individual statements - that is to say, Db2 goes through standard EXPLAIN processing for the package's statements, showing you what the access paths for those statements would be if they were generated right now. With that information in hand, you can make your decision as to whether or not you want Db2 to actually generate a new instance of the package via a REBIND PACKAGE command.
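
For the package used in the earlier example, that would look like this (a one-liner - no new package instance is generated, and the "would be" access path information shows up in your PLAN_TABLE):

REBIND PACKAGE(COLLA.PKG1234) EXPLAIN(ONLY)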


Db2 12: getting "what is" information for stabilized dynamic statements

Db2 12 introduced an important new feature, called dynamic plan stability, whereby Db2 can persist the prepared form of a cached dynamic SQL statement to a new (with Db2 12) catalog table called SYSIBM.SYSDYNQRY. What that means: you can stabilize the access path for a dynamic query, even across stop/starts of a Db2 subsystem, because if a stabilized query comes into the system and its prepared form is not in the dynamic statement cache (as would be the case if this were the first issuance of the query following a stop and start of the subsystem, with the dynamic statement cache initially being empty after the -START DB2), Db2 will not re-prepare the statement - instead, Db2 will retrieve the prepared form of the statement from the SYSDYNQRY catalog table and load that structure into the dynamic statement cache.

That's great, when you want to avoid the performance instability that can result from reoptimization of a query, but it also would seem to present an access path analysis challenge. We know that access path information is available (via EXPLAIN STMTCACHE STMTID) for a statement in the dynamic statement cache, but it is possible (as just pointed out) that a stabilized dynamic statement is not in the dynamic statement cache. What then? How can we, in that case, obtain access path information for the stabilized query?

No problem. You get that information using this statement (the new part being the STABILIZED DYNAMIC QUERY clause):

EXPLAIN STABILIZED DYNAMIC QUERY STMTID 179 COPY 'CURRENT';

Here, the STMTID is not the ID assigned by Db2 to the statement when it is inserted into the dynamic statement cache - it is, rather, the persistent statement ID generated for the statement by Db2 when the statement is stabilized by way of the new (with Db2 12) command -START DYNQUERYCAPTURE (this is the value placed in the SDQ_STMT_ID column of the query's row in the SYSDYNQRY catalog table). Execution of the EXPLAIN statement shown above extracts, from the SYSDYNQRY catalog table (actually, from an associated auxiliary table in a LOB table space), the access path information for the query as it was when the query was initially stabilized (that being the access path information stored with the prepared form of the statement in the dynamic statement cache when the query was last optimized - the information that would be extracted via EXPLAIN STMTCACHE STMTID for a statement currently in the dynamic statement cache).
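
Here is a sketch of how the pieces might fit together - the stabilization group name and the statement IDs are made up, and I'm assuming a STBLGRP column in SYSDYNQRY for locating the row of interest (check the command and catalog table descriptions in the Db2 12 documentation for the full set of options and columns):

-START DYNQUERYCAPTURE STBLGRP(PAYROLL) STMTID(1287)

SELECT SDQ_STMT_ID
FROM SYSIBM.SYSDYNQRY
WHERE STBLGRP = 'PAYROLL';

EXPLAIN STABILIZED DYNAMIC QUERY STMTID 179 COPY 'CURRENT';

Note that 1287 in this sketch is the statement's ID in the dynamic statement cache at the time of stabilization, while 179 is the persistent ID recorded in SYSDYNQRY - two different numbers, as explained above.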

And there you have it. The access path information you want, Db2 has (when you want "access path is" information) or can generate (when you want "what would be" information). You just need to know how to get it. And now you do.

Thursday, October 29, 2020

Db2 for z/OS CHAR versus VARCHAR, Part 2 - a DBA's Story

Last month, I posted to this blog an entry in which I provided my thoughts on the CHAR (fixed-length) versus the VARCHAR (varying-length) data type for character string columns in Db2 for z/OS tables. Soon after, I received a note from a longtime Db2 for z/OS DBA whom I've known for years. He had read the aforementioned blog post, and that prompted him to share with me a story of a VARCHAR issue with which he was presently dealing. I found it to be a quite interesting story, and I am sharing it by way of this post.

My DBA friend made an interesting discovery while analyzing indexes to see if space savings could be achieved via ALTER with NOT PADDED, for indexes created with a PADDED specification and defined on a key containing at least one varying-length column (more on NOT PADDED and index space savings at the end of this entry). He saw that there was a table column, defined as VARCHAR(50), that appeared to contain 50 bytes of data in every row of the table. Curious, he investigated further and found that, in fact, most of the bytes in most of those column values were trailing blanks - for some of the table's rows, ALL of the bytes in the column were blanks. The use of VARCHAR(50) instead of CHAR(50) was not saving ANY space for this column - in fact, it was causing a little more space to be used versus CHAR(50), owing to the 2-byte offset indicator for the VARCHAR column (assuming the use of reordered row format - with basic row format there would still be a two-byte indicator for each varying-length column, but it would be a length indicator instead of an offset indicator).

Now, a Db2 for z/OS DBA might be inclined to think, on having made such a discovery, "What were those application developers thinking? Why in the world would they pad a value (even a non-existent value) with blanks to completely fill up a VARCHAR column?" I would tell you, in that case, "First of all, chill out. Application developers who write programs that access Db2 for z/OS data are your best friends - don't rush to accuse them of malfeasance. Second, this odd fill-up-the-VARCHAR-column application behavior may not be - in fact very likely is not - the fault of a development team."

"Not the developers' fault?" you might ask. "What do you mean? How could it not be the fault of developers?" Two words: code generator. My DBA friend whose story I'm retelling here discovered that it was a code-generation tool that was padding to-be-inserted values, with blanks, to the maximum length allowed by a VARCHAR column. Not only that, but he reached into his personal memory bank and recalled a very similar situation he had encountered some years ago, when working for another company: similarly, values inserted into VARCHAR columns of Db2 tables were padded with blanks to the maximum allowable length, and similarly, this behavior was traced to the action of a code generation tool, as opposed to code written by application developers.

The DBA named names with regard to the code generation tool used at his site, and the different code generator product used at his former place of work. I won't name those products here, but I recognized the names and a lot of you would, too. My point: you may have, in your Db2 for z/OS environment, the same issue my DBA friend encountered in his. You, too, may have VARCHAR columns in Db2 tables that appear to be as full as they can be with data values, when in fact the columns largely - perhaps mostly - contain blanks (hexadecimal '40' values, if we're talking about EBCDIC-encoded tables).

How could you make this determination? You could do what the DBA at the center of this story did: issue a query similar to the one below, for VARCHAR column C1 of table T1 (and note that the really important information about trailing blanks is provided via the RTRIM built-in Db2 function - the STRIP function was used, as well, to see how prevalent leading blanks were in column values, in addition to trailing blanks):

SELECT
SUM(BIGINT(LENGTH(C1))) AS LENGTH_AS_IS,
SUM(BIGINT(LENGTH(RTRIM(C1)))) AS LENGTH_RTRIM,
SUM(BIGINT(LENGTH(STRIP(C1)))) AS LENGTH_STRIP,
COUNT(*) AS ROW_CT
FROM T1
WITH UR;

That query yielded this result:

LENGTH_AS_IS    LENGTH_RTRIM    LENGTH_STRIP   ROW_CT
---------+---------+---------+---------+---------+-------
1,493,634,404   844,348,683     844,345,206    89,019,782

Wow. Over 43% of the space occupied by values in the table column was filled with trailing blanks that had been added by the code generation tool to actual values as part of insert processing (1,493,634,404 - 844,348,683 = 649,285,721 trailing blanks, and 649,285,721 / 1,493,634,404 = 43.4%). The DBA did some due diligence to verify that data-retrieving programs would continue to function properly if, say, a SELECT of C1 from T1 for a given row returned 'ABCDEFGHIJ' instead of 'ABCDEFGHIJ' followed by 40 blanks. When it was determined that the functionality of data-retrieving programs would not be adversely impacted by the removal of trailing blanks from values in column C1 of table T1, the DBA initiated a process of removing those trailing blanks. Execution of that process has freed up disk space and improved the efficiency of Db2 utilities operating on the table. It has also resulted in leaner indexes.
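
Conceptually, the clean-up boiled down to something like the statement below (a sketch, using the table and column names from the earlier query - in real life you would do this in committed batches for an 89-million-row table, rather than in one giant unit of work):

UPDATE T1
SET C1 = RTRIM(C1)
WHERE LENGTH(C1) > LENGTH(RTRIM(C1));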

Indexes? Yep, and it was an initial focus on indexes that led to the discovery of the padded-out column values. The DBA noticed that a number of indexes at his site with keys that included varying-length columns were defined with the PADDED option, so that the values would be padded to their maximum length in index entries. He figured that altering these indexes to NOT PADDED would save disk space and potentially lead to a reduction in the number of levels for the indexes (fewer index levels means fewer GETPAGEs for processes that access table rows using the index, and GETPAGEs are a major factor in the CPU cost of SQL statement execution). When the DBA issued an ALTER with NOT PADDED for a PADDED index that had a varying-length column in its key, and saw that the index did NOT get smaller as a result, he investigated to see why that was so, and that investigation led to the discovery of the VARCHAR column whose values were padded to max length with trailing blanks. Thanks to the aforementioned process that is removing those trailing blanks, the change to NOT PADDED for associated indexes is now yielding positive results.
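
The index change itself is simple - just keep in mind that changing an index's PADDED attribute places the index in a rebuild-pending state, so a REBUILD INDEX (or a REORG that rebuilds the index) is part of the deal (the index name below is made up):

ALTER INDEX IX_T1_C1 NOT PADDED;

REBUILD INDEX (IX_T1_C1)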

You might want to do a little investigating at your site, to see if you have Db2 tables with VARCHAR columns that in fact are not varying-length because their values are all padded with trailing blanks to their maximum length (the query, shown above, that my DBA friend used could be helpful to you). If you find one or more such columns, do the due diligence that I mentioned - work with developers to ensure that removing the trailing blanks would do no harm, program functionality-wise. If you get an "all clear" from that due diligence work, get rid of those trailing blanks and watch the tables and associated indexes (if the latter are defined with NOT PADDED) become more space-efficient, and watch application and utility processes that access the tables and indexes become more CPU-efficient. This can be your story, too.

Wednesday, September 30, 2020

Db2 for z/OS Character Columns: Thoughts on CHAR versus VARCHAR

Recently, I received from a Db2 for z/OS DBA a question: for a table column that will hold character string values, are there any recommendations concerning the use of CHAR (fixed-length) versus VARCHAR (variable-length) for the column's data type?

That question has been asked, I think, many times by many different people over the 35+ years since Db2 made the scene. Even so, I find it to be an interesting query to this day. Why? Because various Db2 developments - some relatively recent and some from way back - have caused the answer to change over time. You can read on and see if you agree with me.

Now, it is of course true that in at least one case the question has a very simple answer: if a value to be stored in the column can exceed 255 bytes in length, CHAR cannot be the data type because 255 bytes is the maximum length you can specify for a fixed-length Db2 character string column. Longer than that and it has to be VARCHAR, right? Probably right, but maybe not - you could go with CLOB. Do you think that CLOB is the right choice only when the length of a character string value that will be stored in a column might exceed 32,704 bytes - the maximum length of a VARCHAR column? I'd say that that is not always true - there could be a situation in which you'd choose CLOB over VARCHAR even if no value longer than 32,704 bytes will go in the column. Suppose that the maximum length of a character string value that will be stored in a column is "only" 20,000 bytes, but the column will very rarely be referenced in a query's select-list and maybe never (or almost never) referenced in a query predicate. In that case, CLOB could be a good choice for the column's data type. Why? Because CLOB values (unless in-lined, and you wouldn't inline CLOB values that are rarely retrieved and rarely referenced in predicates) are physically stored in an auxiliary table in a LOB table space. The LOB table space could (should) be assigned to a buffer pool other than the one used for the base table's table space, and THAT would allow a lot more of the base table's pages (sans the LOB values) to be cached in a given amount of buffer pool space. You'd also have more rows in each base table page. More pages in memory and more rows per page means fewer GETPAGEs, fewer read I/Os, better CPU efficiency and better throughput. CLOB can be the right choice for a character string column, even when VARCHAR is technically usable.

How about the other end of the column-length spectrum: super-short character strings? Here again, the best choice is often pretty clear, owing to the two bytes of "baggage" that come along with a variable-length character column. For many years, you knew what that two bytes of baggage was: it was the length indicator for the VARCHAR column. These days, it could be a length indicator, or it could be an offset indicator. It will be one or the other depending on whether basic row format is used for the table's table space or partition (in which case it will be a length indicator) or reordered row format is in effect (in which case it is an offset indicator). I'll have more to say about basic versus reordered row format in a moment, but either way you're looking at two extra bytes for a varying-length value. Because of those two extra bytes, going with VARCHAR for a column that will hold values with a maximum length of one or two bytes does not make the best use of disk or buffer pool space - for optimal efficiency you should use CHAR(1) or CHAR(2) for a column that will hold values that will be, respectively, one byte or not more than two bytes in length.

Having considered values longer than 255 bytes and shorter than 3 bytes, we are left with the byte-length range 3-255 to ponder. Within that range, you could go with CHAR or VARCHAR. What might influence your decision? Here are some thoughts:

  • What is the variability in length of values that will go in the table column? The greater the variability, the more space that could be saved via use of VARCHAR versus CHAR (though table space compression, which is widely used, offsets much of this advantage of VARCHAR).
  • How important is it to make access to the data values in a column absolutely as efficient as possible? Maximum efficiency of access is generally associated with fixed-length columns, though reordered row format (RRF) makes access to VARCHAR columns more efficient than it used to be. How does reordered row format do that? It does that by physically storing variable-length columns at the end of a table's rows (logical column order is not affected by reordered versus basic row format) and preceding those variable-length columns with a series of 2-byte offset indicators - one for each variable-length column. The offset indicator for a given variable-length column shows where in the row the first byte of that column can be found. This is good for performance because to access a value in a variable-length column, Db2 goes straight to that column's offset indicator (easily done, because Db2 knows which of those 2-byte indicators it is going to, and the offset indicators are preceded by whatever fixed-length columns are in the row). Having gotten efficiently to the right offset indicator, Db2 sees exactly where to go to get to the variable-length column's first byte - boom-boom, done. Contrast that with the basic row format (BRF) situation, in which physical column order is the same as logical order in a row. If Db2 wants to access the fourth variable-length column in a BRF row, it can't get there without checking the length of each of the three variable-length columns ahead of the column of interest. Now, does RRF make access to variable-length columns just as efficient as access to fixed length columns? Not quite, but RRF does reduce the access-efficiency gap between fixed-length and variable-length columns.
  • How important is it to minimize the volume of data written to the Db2 log? Where UPDATE operations are concerned, Db2 tends to log fewer bytes when fixed-length character columns are updated versus updating of values in variable-length columns. That said, as reordered row format reduces the access-efficiency penalty associated with variable-length columns (as explained above), so too does RRF reduce the logging-volume penalty that goes along with such columns. How so? Well, when one fixed-length column in a row is updated, just that column change is logged. When a variable-length column is updated and that update changes the row's length (typically the case), data is logged from the first changed byte to the end of the row. Because RRF physically moves variable-length columns to the end of data rows, there will likely be fewer bytes between a variable-length column and the end of its row, and that means fewer bytes written to the log when that variable-length column is updated in a way that changes the column's (and thus, the row's) length. I have to say here that data compression muddies the water a bit, since with compression in effect an update of even a fixed-length column could change a row's length, leading to more bytes logged due to update activity (though that effect can be mitigated by the fact that data associated with changes to compressed tables is logged in compressed form). Still, when all is said and done, it is likely that maximum efficiency from a log-write perspective would be achieved by use of fixed-length versus variable-length columns, particularly for tables in uncompressed table spaces. You might have - or might want to have - heavily updated tables in uncompressed table spaces when those tables are not too large (the value of compression is a little less compelling for tables that are not so big).
  • Do you want to avoid row-location changes that result from UPDATEs that increase a row's length? When an UPDATE of a variable-length column increases a row's length so that the row will no longer fit in its page, Db2 will relocate the row in the table and will put in the old location a pointer to the row's new location. That's called an indirect reference, and those can have a negative impact on data-access efficiency (REORGs clean those indirect references up). Use of fixed-length character columns tends to result in fewer indirect references for a table (with data compression being a bit of a muddying factor, as it is for log-write volumes, and for the same reason: an update of a fixed-length column in a compressed row can cause that row to get longer). That said, the indirect-reference penalty associated with variable-length columns can be mitigated through use of the PCTFREE FOR UPDATE table space specification that was introduced with Db2 11 for z/OS (I posted an entry on PCTFREE FOR UPDATE to this blog a few years ago).
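
On that last point, PCTFREE FOR UPDATE is specified at the table space (or partition) level. A quick sketch, with a made-up table space name and with free space values you would tailor to your own update patterns - the new free space specification takes effect the next time the data sets are reloaded, via REORG or LOAD REPLACE:

ALTER TABLESPACE DB1.TS1 PCTFREE 5 FOR UPDATE 10;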

And with that, I've provided just about everything I can think of on the subject of CHAR versus VARCHAR for character string values. I'd sum it up this way: the CHAR versus VARCHAR choice is most interesting for character string values that will be between 3 and 255 bytes in length. For such columns, there are situations that would favor CHAR versus VARCHAR, and vice versa. Even so, Db2 features and functions such as table space compression, reordered row format, and PCTFREE FOR UPDATE can reduce the negative effects associated with the use of CHAR or VARCHAR in these various situations. My advice to you: take advantage of Db2 functionality, and make the CHAR and VARCHAR decisions that seem reasonable to you given your application and data environment.

Friday, August 28, 2020

Db2 12 for z/OS RPN Table Spaces: New Possibilities for Range-Partitioned Tables

Sometimes, I assume that I've blogged about some Db2 for z/OS-related topic when in fact I have not. RPN table spaces is one such topic. In thinking about something to post this month, I briefly considered RPN table spaces and thought, "Nah. I've probably already blogged on that subject, given that it's a big deal." Lo and behold, I haven't blogged about RPN table spaces. I've spoken of them numerous times in presentations over the past few years, but written nothing about them in this blog (save for a brief mention in a 2017 response to a comment associated with an entry on universal table spaces). Time to rectify that situation.

RPN table spaces (I'll explain the acronym in a moment) were introduced with Db2 12 for z/OS (with function level 500 or higher activated). An RPN table space is essentially a new type of universal range-partitioned table space. Before getting into the details, I'll provide a few examples of gripes and concerns that Db2 DBAs have regarding traditional universal partition-by-range (PBR) table spaces:

  • "I have a table in a PBR table space that is really big and getting bigger. I'm starting to sweat about only being able to put 16 TB of data in the table. Yeah, I could get up to 128 TB of data in the table if I went with a page size of 32K, but we had good reason to go with 4K pages for this table space."
  • "I don't like the fact that my choice for DSSIZE for a PBR table space impacts the maximum number of partitions for the table space. 256G is the DSSIZE we want for a particular PBR table space, but the limit on partitions for the table space would then be 512 - and that only if we go with 32K for the page size (if we go with 4K for the page size and 256G for DSSIZE, I can only have 64 partitions)."
  • "Ugh. I have a big table with 1000 partitions in a PBR table space. DSSIZE is 8G, and that has been just fine for 999 of the partitions. One partition is about to hit the 8G size limit. I'm going to have to go to 16G for DSSIZE for THE WHOLE TABLE SPACE, even though ONLY ONE PARTITION NEEDS THE LARGER DSSIZE VALUE."

The RPN table space feature of Db2 12 addresses all of those issues, and more. It does that by way of a very important change versus traditional PBR table spaces. This change pertains to the numbering of pages in a range-partitioned table space. In a traditional PBR table space, every page of the table space has a unique number. In an RPN table space, we number the pages in partition 1, and then for partition 2 we start over again with regard to numbering pages. For partition 3, same thing - start over again with page numbering. Same for partition 4, partition 5, and so on. What this means: if I have an RPN table space with 1000 partitions, the first page in each and every partition has the same page number (referring to the number stored in the data page itself). You read that right: in the table space, there will be 1000 first-in-the-partition pages that all have the same page number. There will also be 1000 second-in-the-partition pages that all have the same page number (assuming that each partition has at least 2 pages). Does this sound like something that would lead to chaos? In fact there is no chaos because in an RPN table space it is still very much possible to uniquely identify a page. The twist, versus a traditional PBR table space, is that in an RPN table space a page is uniquely identified by the combination of its page number and its partition number. That page numbering scheme is called relative page numbering, and that gives us the acronym RPN (the type of page numbering used for a traditional PBR table space is called absolute page numbering).

OK, so what can I do with an RPN table space that I can't do with a traditional PBR table space? Plenty:

  • You can put up to 4 petabytes (that's 4096 terabytes) of data into one table in an RPN table space, and that's true for any page size you choose for the table space.
  • Row-capacity-wise, you can put up to 280 trillion rows (yes, that's "trillion," with a "t") into one table in an RPN table space, if you use 4K pages for the table space (if you go with 32K pages, you can put "only" 35 trillion rows into a table in an RPN table space).
  • With an RPN table space, you can have up to 4096 partitions of any allowable size - in other words, the limit on the number of table space partitions does not go down as DSSIZE goes up; AND, the maximum DSSIZE is 1024G (versus 256G for a traditional PBR table space); AND, DSSIZE for an RPN table space can be nG, with "n" being any integer between 1 and 1024 (in other words, "n" no longer has to be a power of 2, as is the case for a traditional PBR table space - if you want to go with a DSSIZE of, for example, 131G for an RPN table space, go for it).
  • Different partitions of one RPN table space can have different DSSIZE specifications. Consider the scenario I described previously, in which a DSSIZE of 8G is fine for all but one partition in a 1000-partition table space. Let's say the one partition about to hit the 8G size limit is partition 327, and a DBA wants to take DSSIZE for that partition to 17G. With an RPN table space (which makes 17G a valid value for DSSIZE, as noted above), the DBA alters only partition 327 to have DSSIZE 17G (a sketch of that ALTER appears just after this list); and, get this: that ALTER is an immediate change - partition 327 can immediately get larger than its former 8G DSSIZE, without even a REORG of the partition required to put the DSSIZE change into effect. [Note: when an ALTER TABLESPACE statement is issued for an RPN table space, and that ALTER includes a partition-level DSSIZE specification, the DSSIZE value specified must be equal to or greater than the current DSSIZE value for the partition in question. If a smaller DSSIZE value is to be specified, that has to be done at the table space level, and then it's a pending change and an online REORG of the entire table space will be required to put the smaller DSSIZE value in effect for all of the table space's partitions.]
  • The DSSIZE flexibility provided by an RPN table space extends to partitioned indexes defined on the table in the table space: DSSIZE for partitions of such an index can be nG, with "n" being any integer between 1 and 1024, and different DSSIZE values can be used for different partitions of an index. Speaking of indexes, note that an index on a table in an RPN table space will be slightly larger than an index defined on the same table in a traditional PBR table space. That is so because the size of a RID goes from 5 bytes to 7 bytes when data is in an RPN table space versus a traditional PBR table space.
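
Here is a sketch of the partition-level DSSIZE change described a couple of bullets up (the database and table space names are made up) - for an RPN table space, this is an immediate change, with no REORG of partition 327 required before it can grow past 8G:

ALTER TABLESPACE DBBIG.TSBIG ALTER PARTITION 327 DSSIZE 17 G;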

That's some good stuff, eh? Wanna know how you can have RPN table spaces in your Db2 12 system? I'll tell you. First, for a new table space, in the CREATE TABLESPACE statement you can include the option PAGENUM RELATIVE. You can also create an RPN table space without even specifying PAGENUM RELATIVE. How? By setting the new (with Db2 12) ZPARM parameter PAGESET_PAGENUM to RELATIVE. Doing that will make RPN the default for a new universal PBR table space created in the system (whether PAGESET_PAGENUM is set to RELATIVE or to its default value of ABSOLUTE, that specification can be overridden at the individual table space level via use of the PAGENUM option of CREATE TABLESPACE).
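
A CREATE TABLESPACE statement for an RPN table space might look something like the sketch below (names and values are made up) - note that a combination like 1000 partitions, a 64G DSSIZE and a 4K page size would not even be possible with absolute page numbering:

CREATE TABLESPACE TSBIG2 IN DBBIG
  NUMPARTS 1000
  SEGSIZE 32
  DSSIZE 64 G
  PAGENUM RELATIVE
  BUFFERPOOL BP2;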

How about an existing universal PBR table space of the traditional variety? Can it be changed to an RPN table space? Yep - just ALTER the table space with PAGENUM RELATIVE, and then do an online REORG of the table space to materialize the change (yes, you have to REORG the table space in its entirety to get to RPN, but the good news is that once the conversion is done you should have fewer situations that would necessitate a REORG of the whole table space - that's part of the RPN value proposition).

Can a "classic" range-partitioned table space be converted to RPN ("classic" referring to a table space that is not universal and is associated with a table that uses table-controlled partitioning versus index-controlled partitioning)? Yes. First, you ALTER the table space with a SEGSIZE specification to indicate that you want to convert it from non-universal to universal PBR (a value of 64 for SEGSIZE is generally appropriate). After that, issue a second ALTER for the table space, this time specifying PAGENUM RELATIVE. After both of those ALTER TABLESPACE statements have been executed, a single online REORG will make the table space both universal and RPN.
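
In other words, the conversion sequence would look something like this (made-up names - the two ALTERs are pending changes, and the online REORG at the end materializes both of them):

ALTER TABLESPACE DB1.TSOLD1 SEGSIZE 64;
ALTER TABLESPACE DB1.TSOLD1 PAGENUM RELATIVE;

REORG TABLESPACE DB1.TSOLD1 SHRLEVEL CHANGE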

If I were administering a Db2 12 system, I think I'd want to make extensive use of relative page numbering for my range-partitioned table spaces, for the DSSIZE flexibility (and the ability to immediately increase a partition's maximum size) and for enhanced scalability. I encourage you to put this new Db2 feature to work for the benefit of your organization.

Friday, July 31, 2020

A Case Study - Db2 for z/OS High-Performance DBATs

Over the years, I have posted several entries to this blog that have covered various aspects of Db2 for z/OS high-performance DBATs (examples are entries posted in 2016 and 2013). Over the past couple of weeks, I have been working with a Db2 for z/OS administrator who is engaged in implementing high-performance DBAT functionality for the first time at his site. That implementation effort has involved a good bit of back-and-forth communication between the Db2 administrator and me, and I realized that it would make for a good case study that touches on many of the issues associated with high-performance DBAT set-up, monitoring and usage. Accordingly, I decided to use this case study as the basis for a blog entry.

OK, so first there is the matter of getting high-performance DBAT functionality in the first place. How do such DBATs come into being (and a DBAT, or database access thread, is the kind of thread used for an application process that accesses Db2 data by way of the distributed data facility, or DDF)? There are two requirements here:
  1. A package or packages bound with RELEASE(DEALLOCATE). The CPU efficiency boost provided by high-performance DBATs is attributable to RELEASE(DEALLOCATE) packages that stay allocated to such a thread, and to parent locks (table space- or partition-level locks) that, once acquired, also stay allocated to the thread, until the thread is terminated after having been reused by 200 transactions (parent locks acquired in the execution of dynamic SQL statements issued through a RELEASE(DEALLOCATE) package are retained until thread termination if dynamic statement caching is in effect). If the same packages would be used for all - or at least most - of the 200 transactions that reuse a high-performance DBAT, and if the same table spaces would be accessed by all - or at least most - of the thread-reusing transactions, what we save is the CPU that would be consumed in releasing and reacquiring for the thread the same packages and table space locks, over and over again when one transaction completes and another reuses the thread (keep in mind that parent locks are almost always of the "intent" variety, and intent locks do not conflict with each other, so we do not worry about retaining them for a thread as transactions reusing a thread come and go).
  2. A DDF PKGREL setting of BNDOPT. The PKGREL parameter of a Db2 subsystem's distributed data facility is the "on/off switch" for high-performance DBATs. The parameter's value is set via execution of a -MODIFY DDF command, and when PKGREL = BNDOPT then the high-performance DBAT switch is "on," and when PKGREL = COMMIT the switch is in the "off" position (another PKGREL setting, BNDPOOL, can be appropriate when application servers disconnect from a Db2 system soon after connecting, versus the more common scenario in which servers connect to Db2 and stay connected for a long time). Why this on/off switch? Because it is possible that a RELEASE(DEALLOCATE) package allocated to a high-performance DBAT could get in the way of a database administration task such as an ALTER of a database object on which the package is dependent. If that situation were to present itself, a Db2 DBA could issue -MODIFY DDF PKGREL(COMMIT), wait for the existing high-performance DBATs to go away (that should not take more than about 4 minutes), perform the database administration function while DDF transactions continue to execute using "regular" DBATs, and then turn high-performance DBAT functionality back on by issuing -MODIFY DDF PKGREL(BNDOPT).
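
In command terms, then, the switch looks like this - the first command turns high-performance DBAT functionality on, the second turns it off, and the -DISPLAY DDF DETAIL command shows you (among other things) the PKGREL setting currently in effect:

-MODIFY DDF PKGREL(BNDOPT)
-MODIFY DDF PKGREL(COMMIT)
-DISPLAY DDF DETAIL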

Next question: which package or packages should be bound with RELEASE(DEALLOCATE) for the purpose of getting high-performance DBAT functionality? If you want a certain DDF application to use high-performance DBATs, and that application calls Db2 stored procedures, one option would be to bind the packages of those stored procedures (or at least the ones that are called most frequently) with RELEASE(DEALLOCATE). The DDF application workload at the site I referenced above involves very little in the way of stored procedure calls, so the Db2 administrator decided to get high-performance DBAT functionality by binding the IBM Data Server Driver (formerly Db2 Connect) packages with RELEASE(DEALLOCATE). The default collection for those packages is called NULLID. Did the Db2 administrator bind the packages in the NULLID collection with RELEASE(DEALLOCATE)? No. Why? Because doing that would cause all DDF applications (more specifically, all such applications that are DRDA requesters, as opposed to applications that use the REST interface to DDF) to use high-performance DBATs, and that would likely be a sub-optimal situation. What you want is to utilize high-performance DBATs for selected DDF applications. How do you do that? You do what the Db2 administrator did: you BIND COPY the IBM Data Server Driver packages into another collection, specifying RELEASE(DEALLOCATE) in so doing, and then you point a DDF application for which you want high-performance DBAT usage to the alternate collection of IBM Data Server Driver packages using the Db2 profile tables (this technique is described in an entry I posted to this blog a couple of years ago).
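
For one of the Data Server Driver packages, the BIND COPY action might look something like the sketch below (COLL_HIPERF is a made-up collection name, and SYSLH200 is shown just as an example of a driver package - check what is actually in your NULLID collection, and repeat the BIND COPY for each of those packages). With the alternate collection in place, the profile tables (DSN_PROFILE_TABLE and its companion DSN_PROFILE_ATTRIBUTES) are used to point the selected applications to it, as described in the blog entry referenced above.

BIND PACKAGE(COLL_HIPERF) COPY(NULLID.SYSLH200) ACTION(REPLACE) RELEASE(DEALLOCATE) KEEPDYNAMIC(NO)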

Time for a test! The Db2 administrator verified that PKGREL was set to BNDOPT in a development Db2 environment, and then he got some DDF activity going in that environment, with applications accessing Db2 using the IDs for which profiles had been created to point the applications to the collection in which the IBM Data Server Driver packages had been bound with RELEASE(DEALLOCATE). He used his Db2 monitor to check on high-performance DBAT usage, and saw that there was no high-performance DBAT activity whatsoever on the system - evidenced by the big fat zero in the DDF statistics field labeled by his monitor as "Active DBATs Bound with RELEASE(DEALLOCATE)" (note that different Db2 monitors give this field a slightly different label). What was going on? Why was there no high-performance DBAT activity? There was no such activity because the Db2 administrator had overlooked one thing: after one or more profiles have been defined using the Db2 profile tables, they have to be activated if you want them to be in effect. The Db2 administrator issued the command -START PROFILE to load the contents of the profile tables into memory, thereby activating all profiles for which PROFILE_ENABLED had been set to Y in DSN_PROFILE_TABLE, and after doing that he saw that high-performance DBATs were indeed being used in the development Db2 system (note that profiles can be automatically activated by setting the ZPARM parameter PROFILE_AUTOSTART to YES - the default value for that ZPARM is NO).

OK, on to the production environment. The Db2 administrator BIND COPY-ed the IBM Data Server Driver packages from NULLID into an alternate collection, with a specification of RELEASE(DEALLOCATE); set up the appropriate Db2 profiles and attributes (and activated them!); made sure that the DDF PKGREL setting was BNDOPT (verified in the output of a -DISPLAY DDF DETAIL command); and... saw no use of high-performance DBATs. Huh? What was going on? Why were high-performance DBATs not used? Here's why: when an application connects to a Db2 system via DDF, it picks up its default package collection at connection time, and in the production environment at this site - as at many sites - DDF-using applications connect to Db2 and stay connected for a long time. Those applications were already connected to the Db2 system when the alternate IBM Data Server Driver package collection was established and profiles were set up and activated to point applications to that collection. The applications that the Db2 administrator wanted to direct to the alternate IBM Data Server Driver package collection would not use packages in that collection until they established new connections to the Db2 system. The Db2 administrator knew that the Db2 system was going to be "bounced" (stopped and restarted) during the upcoming weekend, and he figured - rightly - that when applications subsequently reconnected to the Db2 system they would be directed to the alternate IBM Data Server Driver package collection (the one with the packages bound with RELEASE(DEALLOCATE)) thanks to the Db2 profiles he had set up and activated.

One more plot twist. The Db2 administrator had bound the packages in his alternate IBM Data Server Driver package collection with RELEASE(DEALLOCATE) and with KEEPDYNAMIC(YES). I explained to him that I generally view those two specifications as either/or options, versus both-together options. In other words, I would bind the IBM Data Server Driver packages in an alternate collection with either RELEASE(DEALLOCATE) or KEEPDYNAMIC(YES), and with that choice before me I would go with RELEASE(DEALLOCATE) in a large majority of cases (the packages in the NULLID collection should be bound with RELEASE(COMMIT) and KEEPDYNAMIC(NO)). KEEPDYNAMIC(YES) has some baggage associated with it (as explained in an entry I posted to this blog in 2016 - scroll down near the bottom of that entry), and getting maximum performance benefit from KEEPDYNAMIC(YES) requires programs to be coded with KEEPDYNAMIC(YES) in mind - and most are not. On top of that, I really like the on/off switch that we have for high-performance DBATs (provided, as previously mentioned, by the Db2 DDF PKGREL parameter).

The Db2 administrator agreed that the packages in his alternate IBM Data Server Driver package collection should be bound with RELEASE(DEALLOCATE) and with KEEPDYNAMIC(NO), and he wanted to rebind the packages with those specifications but found that he could not do that because the packages were in use (note: the rebind phase-in functionality provided by Db2 12 function level 505 can take care of that situation). This "in-use" situation surprised him, because he had set PKGREL for the Db2 system to COMMIT to turn off high-performance DBAT functionality (he planned to turn high-performance DBAT functionality back on after rebinding the packages in the alternate IBM Data Server Driver package collection), and following the PKGREL change he saw that the number of high-performance DBATs did indeed go to zero for the system. Why, then, were the packages in what he thought of as his high-performance DBAT collection still being used? I told the administrator that PKGREL does NOT affect the package collection that a DDF application uses; rather, it determines whether the RELEASE specification of packages allocated to DBATs for execution will be honored, if that specification is DEALLOCATE. If a DDF application is pointed to collection COLL_XYZ, and the packages in that collection are bound with RELEASE(DEALLOCATE), those packages will be used even if PKGREL is set to COMMIT. In that case, however, the packages will be treated by Db2 as though they had been bound with RELEASE(COMMIT). If the PKGREL setting is changed to BNDOPT, the RELEASE(DEALLOCATE) bind option (for which BNDOPT is short) will be honored.

And there you have it. One high-performance DBAT-related case study that touches on so many different aspects of this Db2 for z/OS feature. I hope that the information provided in this blog entry will be useful for you.

Monday, June 29, 2020

The Various Aspects of Db2 for z/OS Data Encryption

Just last week, a person on the Db2 for z/OS team at an organization asked me for some guidance regarding data encryption in a Db2 context. In responding to that request, I noted that Db2 for z/OS data encryption is a multifaceted topic, and I proceeded to describe various categories of Db2 data encryption and the Db2 features and functions that pertain to those categories. Thinking that this information might be of interest to a larger audience, I'll provide it in this blog entry.

The categories of Db2 for z/OS data encryption I will address herein are as follows:
  • Application-transparent encryption of Db2 data "at rest" (i.e., on disk)
  • Application-transparent encryption of Db2 for z/OS data in memory (i.e., in a Db2 buffer pool) as well as on disk
  • Application-aware encryption of data in a column of a Db2 table, on-disk and in-memory
  • Application-transparent encryption of Db2 data "on the wire" (i.e., in-motion on a TCP/IP link between a client application and a Db2 for z/OS server)

Application-transparent encryption of Db2 for z/OS data "at rest" (i.e., on disk)

One way to get this done is via the data-at-rest encryption capability of your disk subsystem, if the disk subsystem that you use with Db2 for z/OS has that capability.

Another way to get this done is through the use of a z/OS feature called, simply, z/OS data set encryption. This z/OS capability can be used with Db2 11 or Db2 12 for z/OS, but it got quite a bit easier to use with Db2 12, at function level M502 or higher (more on that to come). With z/OS data set encryption, data is encrypted automatically when written to disk, and it is decrypted automatically when read from disk. z/OS data set encryption is enabled by associating an encryption key label with a data set (it is of course very important to secure and protect an encryption key, so instead of referencing the key itself we reference its label, which you can think of as the key's "handle"). Here is the difference between a Db2 environment that is at V12 with function level M502 or higher activated, and one that is not (function levels, by the way, are related to the "continuous delivery" mode by which new Db2 12 functionality is provided to users):
  • Db2 12, at function level M502 or higher: A DBA can issue an ALTER TABLE or ALTER STOGROUP statement with a new option, KEY LABEL, to assign an encryption key label to a set of Db2 data sets (for ALTER TABLE, the specified encryption key label will also apply to index and auxiliary table data sets). Function level M502 also provides a new ZPARM parameter, ENCRYPTION_KEYLABEL, through which a key label can be specified for the Db2 catalog and directory data sets, and for archive log data sets (when the latter are written to disk - these will remain encrypted if they are later HSM-migrated to tape). When a key label has been associated with a database object, actual encryption of that object's data can be accomplished via execution of the Db2 online REORG utility (or any other Db2 utility that involves creation of a new data set or data sets, such as LOAD REPLACE or RECOVER or REBUILD INDEX). Note that the KEY LABEL option is valid for CREATE TABLE and CREATE STOGROUP statements, as well as for ALTER statements. A brief DDL sketch follows this list.
  • Db2 12, at function level M501 or lower, or Db2 11: A key label is associated with a Db2 data set either via a RACF data set profile or by way of an SMS data class specification or through an explicit IDCAMS definition (in the case of a user-managed data set). As noted above, once a key label has been associated with an existing Db2 data set, actual encryption of the data in that data set can be accomplished via execution of a Db2 utility (such as online REORG) that will involve creation of a new data set (or data sets).
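To illustrate the function level M502-or-higher approach, here is a minimal DDL sketch (the table, storage group and key label names are hypothetical, and you should check the exact KEY LABEL syntax for your Db2 level in the SQL Reference):

-- Assign a key label to one table's data sets (index and auxiliary
-- table data sets are covered, too):
ALTER TABLE PRODSCHM.CLAIM_HISTORY KEY LABEL DB2PRODKEY01;

-- Or assign a key label at the storage group level:
ALTER STOGROUP PRODSG01 KEY LABEL DB2PRODKEY01;

-- Existing data is actually encrypted the next time a utility creates
-- new data sets for the object - for example, via online REORG:
--   REORG TABLESPACE PRODDB01.CLAIMTS SHRLEVEL CHANGE

The key label itself has to refer to a key that is defined to ICSF on the z/OS system.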

One more thing about z/OS data set encryption: for data decryption to occur when an encrypted data set is accessed, the ID of the accessing process has to be RACF-permitted to access the data set's key label. For Db2 data sets, the only IDs that need this RACF permission to access the associated key labels are the IDs of the Db2 database services and system services address spaces (i.e., the Db2 DBM1 and MSTR address spaces). This is so because, when user SMITH issues a SELECT for table T1, z/OS does not perceive that SMITH is accessing the table's data set(s) - it perceives that Db2 is accessing the data set(s).
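As a sketch of what that permission set-up might look like (the key label and the started-task user IDs for the hypothetical Db2 subsystem's MSTR and DBM1 address spaces are made-up names, and your security team's approach may differ):

RDEFINE CSFKEYS DB2PRODKEY01 UACC(NONE)
PERMIT DB2PRODKEY01 CLASS(CSFKEYS) ID(DB2PMSTR DB2PDBM1) ACCESS(READ) WHEN(CRITERIA(SMS(DSENCRYPTION)))
SETROPTS RACLIST(CSFKEYS) REFRESH

The WHEN(CRITERIA(SMS(DSENCRYPTION))) condition limits the permission to use of the key for z/OS data set encryption.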


Application-transparent encryption of Db2 for z/OS data in memory (i.e., in a buffer pool) as well as on disk

This can be accomplished using Db2 edit procedures (EDITPROCs), for encryption of entire rows in tables, or via Db2 field procedures (FIELDPROCs) for encryption of data in individual columns.
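By way of illustration, here is roughly what the DDL looks like (the table, database, table space and procedure names are hypothetical - the EDITPROC or FIELDPROC program itself would be the encrypting routine supplied by your encryption tooling):

-- Whole-row encryption via an edit procedure:
CREATE TABLE PRODSCHM.CLAIM
  (CLAIM_ID   INTEGER      NOT NULL,
   CLAIM_TEXT VARCHAR(200) NOT NULL)
  EDITPROC ENCREDIT
  IN PRODDB01.CLAIMTS;

-- Column-level encryption via a field procedure on one column:
CREATE TABLE PRODSCHM.CARDHOLDER
  (ACCT_NUM CHAR(16) NOT NULL FIELDPROC ENCRFLD1,
   CH_NAME  VARCHAR(40) NOT NULL)
  IN PRODDB01.CARDTS;

Note that an edit procedure has to be named when a table is created - it cannot be added to an existing table via ALTER TABLE.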

Note that CPU overhead for this type of Db2 data encryption will likely be higher than for z/OS data set encryption, because with z/OS data set encryption data is decrypted when it is read from disk into memory (and so resides in the clear in a Db2 buffer pool), whereas with EDITPROC- or FIELDPROC-based encryption, data is encrypted in memory and is decrypted every time it is accessed by an application process.


Application-aware encryption of data in a column of a Db2 table, on-disk and in-memory

This has been do-able for some time by way of a built-in Db2 scalar function called ENCRYPT_TDES. Db2 12 function level M505 provided a new built-in scalar function, ENCRYPT_DATAKEY, that is even more robust than ENCRYPT_TDES. Several factors contribute to the stronger column-level data protection provided by ENCRYPT_DATAKEY versus ENCRYPT_TDES (an ENCRYPT_TDES usage sketch, illustrating the password exposure, follows this list):
  • ENCRYPT_DATAKEY uses 256-bit encryption, while ENCRYPT_TDES uses 128-bit encryption.
  • With ENCRYPT_TDES, an application provides a password when data is encrypted, and that same password must be provided when the data is to be decrypted. What if that password becomes known to someone who shouldn't know it? With ENCRYPT_DATAKEY, access to encrypted data requires specification of the right key label, and on top of that the ID of the application process must be RACF-permitted to access that key label. If someone knows the key label that has to be provided to decrypt data encrypted via the ENCRYPT_DATAKEY function, that person will nonetheless be unable to access decrypted data if he or she is using an ID that has not been RACF-permitted to access the key label.
  • ENCRYPT_DATAKEY offers an option whereby duplicate values in a column will have unique encrypted values (in other words, two values in the column that are duplicates when decrypted will be unique when encrypted). This option makes the data encryption even harder to defeat. 
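To make the contrast concrete, here is the older ENCRYPT_TDES approach (the table and column names are hypothetical, and the encrypted column is assumed to be defined as VARCHAR FOR BIT DATA with enough length to accommodate encryption overhead). Note the in-statement password - exactly the exposure that ENCRYPT_DATAKEY's key-label-plus-RACF approach eliminates:

-- Encrypt a value on the way in:
INSERT INTO PRODSCHM.CUSTOMER (CUST_ID, TAX_ID_ENC)
  VALUES (1001, ENCRYPT_TDES('123-45-6789', 'Pa55word'));

-- Decrypt on retrieval - anyone who knows the password can do this:
SELECT DECRYPT_CHAR(TAX_ID_ENC, 'Pa55word')
  FROM PRODSCHM.CUSTOMER
  WHERE CUST_ID = 1001;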

Application-transparent encryption of Db2 data "on the wire" (i.e., in-motion between a client application and a Db2 for z/OS server)

This is done by way of SSL encryption (or, more precisely, via AT-TLS, which is short for application-transparent, transport-layer security). When a client application - either a DRDA requester or a program utilizing the REST interface to Db2 - interfaces with the Db2 distributed data facility (DDF) and specifies the Db2 system's secure SQL listener port, Db2 will require the use of SSL encryption. During the SSL "handshake," the Db2 server presents its digital certificate to the client, and the client has to be able to validate that certificate - which is why a copy of the server certificate (or of the certificate authority certificate used to sign it) has to be made available on the client side, as noted below. The appropriate AT-TLS policy, defined on the z/OS side, will be applied to enable SSL communication between the client and the Db2 server.

The process of initially setting up SSL encryption between a client process and a Db2 for z/OS system can be a bit involved because several pieces of work have to fit together. There is client-side work to be done: the server certificate from the target z/OS system needs to be obtained and placed properly in a key store database, and the right connection string has to be used (and/or the right information has to be specified in the IBM Data Server Driver's db2dsdriver.cfg file) to request an SSL connection to the Db2 server. There is Db2 work to be done: a secure SQL listener port needs to be specified for the Db2 system. There is z/OS work to be done: an AT-TLS policy has to be defined and implemented by way of a policy agent. The best reference I've seen that covers all these pieces of work is an IBM "red paper" titled, DB2 for z/OS: Configuring TLS/SSL for Secure Client/Server Communications (download-able from http://www.redbooks.ibm.com/abstracts/redp4799.html?Open). If you get the right people in your organization involved, with the right skills and knowledge (client-side, Db2 for z/OS, z/OS Communications Server, RACF), you should be successful in getting SSL communications set up between Db2 for z/OS and network-connected applications.
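As an example of the Db2-side piece (the location name and port numbers shown are hypothetical), the secure SQL listener port is typically defined via the DDF statement processed by the DSNJU003 (change log inventory) utility:

DDF LOCATION=DB2PLOC,PORT=5021,RESPORT=5022,SECPORT=5023

Once a non-zero SECPORT has been defined, a client that connects to that port is required to use SSL.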

That concludes this overview of Db2 for z/OS data encryption in its various forms. I hope that this information will be useful for you.

Thursday, May 28, 2020

Db2 12 for z/OS Buffer Pools - Recommendations for PGSTEAL, PGFIX and FRAMESIZE

Not too long ago, I received a note from the leader of a Db2 for z/OS database administration team, asking for my take on a Db2 12 buffer pool configuration set-up that his team had devised. An interesting characteristic of the configuration was the use of the options PGSTEAL(NONE) and FRAMESIZE(2G) for most of the Db2 subsystem's buffer pools. The team's reasoning regarding this choice was as follows:

  • The z/OS LPAR in which the Db2 subsystem runs has a very large real storage resource, and most of the Db2 subsystem's buffer pools are very large, so why not use 2 GB real storage page frames?
  • Most of the Db2 subsystem's buffer pools are so large that the total read I/O rate for each pool is very low. A low read I/O rate for a pool suggests a low rate of buffer stealing. When that is the case, why not use PGSTEAL(NONE) for the pool, since PGSTEAL(NONE) does not preclude buffer stealing when stealing is necessary (as it likely will be if the pool is not large enough to hold all pages of all objects assigned to the pool)?

I explained to the DBA team leader that the buffer pool configuration he'd shared with me was likely sub-optimal, owing largely to over-use of the PGSTEAL(NONE) option. I provided some buffer pool configuration recommendations for his consideration. Those recommendations are shown below, in "thumbnail" (i.e., highly condensed) form, together with a few illustrative -ALTER BUFFERPOOL commands; following that is the rationale for each recommendation. I hope that this information will be helpful to you for optimizing the buffer pool configuration in your Db2 12 for z/OS environment.


The list

  • If ALL of the pages of ALL the objects assigned to a buffer pool will fit into 90% of the pool's buffers*, use PGSTEAL(NONE), PGFIX(YES) and FRAMESIZE(1M) for the pool (* for a pool that has a VPSIZE greater than or equal to 64000, this recommendation applies if all pages of all objects assigned to the pool will fit into VPSIZE minus 6400 buffers).
  • If it is expected that there will be some buffer stealing for a pool, and the size of the pool is at least 20 GB, use PGSTEAL(LRU) and PGFIX(YES) and FRAMESIZE(2G) for the pool.
  • If it is expected that there will be some buffer stealing for a pool, and the size of the pool is less than 2 GB, use PGSTEAL(LRU) and PGFIX(YES) and FRAMESIZE(1M) for the pool.
  • If it is expected that there will be some buffer stealing for a pool, and the size of the pool is between 2 GB and 20 GB, use PGSTEAL(LRU) and PGFIX(YES) for the pool. For the pool's real storage frame size, use either FRAMESIZE(1M) or FRAMESIZE(2G) - your choice.
  • If it is expected that the rate of buffer stealing for a pool will be non-zero but VERY LOW (maybe less than 10 buffer steal actions per minute), you MIGHT consider using PGSTEAL(FIFO) for the pool.
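Expressed as commands (the buffer pool names and VPSIZE values below are made up for illustration), those recommendations look like this:

-ALTER BUFFERPOOL(BP10) VPSIZE(40000) PGSTEAL(NONE) PGFIX(YES) FRAMESIZE(1M)
-ALTER BUFFERPOOL(BP11) VPSIZE(6000000) PGSTEAL(LRU) PGFIX(YES) FRAMESIZE(2G)
-ALTER BUFFERPOOL(BP12) VPSIZE(400000) PGSTEAL(LRU) PGFIX(YES) FRAMESIZE(1M)

BP11, at roughly 24 GB (6,000,000 4K buffers), is large enough to benefit from 2 GB frames; BP12, at roughly 1.6 GB, is not (and FRAMESIZE(2G) would not be honored for it anyway); and for a pool between 2 GB and 20 GB, either FRAMESIZE(1M) or FRAMESIZE(2G) is fine, per the fourth recommendation above.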

The rationale

Recommendation: If ALL of the pages of ALL the objects assigned to a buffer pool will fit into 90% of the pool's buffers*, use PGSTEAL(NONE), PGFIX(YES) and FRAMESIZE(1M) for the pool (* for a pool that has a VPSIZE greater than or equal to 64000, this recommendation applies if all pages of all objects assigned to the pool will fit into VPSIZE minus 6400 buffers).

Rationale: Why recommend that PGSTEAL(NONE) be used only for Db2 12 buffer pools for which not more than 90% of the buffers (or not more than VPSIZE minus 6400 buffers, when VPSIZE is greater than 64000) will be filled by pages of objects assigned to the pool, given that buffer stealing for such a pool can be done when required? Here's why: Db2 12 manages a PGSTEAL(NONE) buffer pool differently than Db2 11 does. In a Db2 12 environment, when the pages of an object assigned to a PGSTEAL(NONE) buffer pool are read into that pool (as they will be when the object is first accessed after the pool has been allocated), they will effectively be arranged in memory as they are arranged on disk. This is why a PGSTEAL(NONE) pool is also referred to as a "contiguous buffer pool" in a Db2 12 system. The in-memory-as-on-disk arrangement of pages reduces the CPU cost of accessing the pages in the pool, because every access to a page in the contiguous part of a PGSTEAL(NONE) buffer pool (there is also a non-contiguous part, as explained below) is a direct access - Db2 knows exactly where a given page in the contiguous part of the pool is located, without having to deal with the hash and LRU chains that are associated with buffer management in a standard pool. Now, a PGSTEAL(NONE) buffer pool still has to allow for buffer stealing in case that is required. How can you enable buffer stealing and still have pages of table spaces or indexes assigned to the pool arranged in memory as they are arranged on disk?

The fact of the matter is, you can't have it both ways - not for the whole pool, at least. You can't both allow for buffer stealing and maintain an in-memory-as-on-disk arrangement of pages in all of the pool's buffers. Here's how Db2 12 addresses this issue: it preserves an in-memory-as-on-disk arrangement of pages for 90% of the buffers of a PGSTEAL(NONE) pool (or for VPSIZE minus 6400 buffers, when VPSIZE is greater than 64000). That is the contiguous part of the pool. Any buffer stealing that has to happen will involve buffers in the remainder of the pool, which is called the overflow area. The number of buffers in the overflow area will be 10% of the pool's VPSIZE if VPSIZE is 64000 or less, and will be 6400 buffers if the pool's VPSIZE is greater than 64000 (for a very small pool, with a VPSIZE smaller than 500, the number of buffers in the overflow area will be 50). What this means is that the maximum CPU efficiency benefit of a Db2 12 PGSTEAL(NONE) buffer pool is achieved not just when there is no buffer stealing for the pool, but when the pool will not get more than 90% full of pages (or when not more than VPSIZE minus 6400 buffers will be occupied by pages, when the pool's VPSIZE is greater than 64000). Why? Because when Db2 has to put pages in the overflow area of a PGSTEAL(NONE) pool, it is no longer arranging all pages in the pool as they are arranged on disk - pages in the overflow area are not managed that way (they can't be, because that's where buffer stealing, if required, will happen). Suppose that a Db2 12 PGSTEAL(NONE) pool has 10,000 buffers. The maximum CPU efficiency benefit for that pool will be realized if not more than 9000 of the pool's buffers are used (i.e., when the pool's overflow area is not used). Another example: if VPSIZE for a PGSTEAL(NONE) pool is 100000, the maximum CPU efficiency benefit for the pool will be realized if not more than 93,600 of the pool's buffers (100,000 minus 6400) are used. To the extent that the overflow area is used, there will be a slight reduction in the CPU efficiency delivered via the pool, because access to pages in the overflow area will not be quite as CPU-efficient as access to pages in the contiguous part of the pool (where pages are arranged in memory as they are arranged on disk). If the overflow area fills up and some buffer stealing happens, there will be a further reduction in the CPU efficiency delivered by way of the PGSTEAL(NONE) pool. [Note that buffer stealing - if required - in the overflow area of a PGSTEAL(NONE) buffer pool will be handled via the FIFO (first in, first out) algorithm.]

What about the FRAMESIZE(1M) recommendation for a PGSTEAL(NONE) buffer pool in a Db2 12 system? Why not go with FRAMESIZE(2G) if a PGSTEAL(NONE) pool will be at least 2 GB in size? Here's why: in a Db2 12 environment, FRAMESIZE(2G) will not be honored for a PGSTEAL(NONE) pool. The reason for this: if pages are going to be arranged in the contiguous part of a PGSTEAL(NONE) pool as they are on disk for maximum efficiency of page access (and that's the case, as noted above), we need to restrict a given real storage page frame in the contiguous part of the pool to pages belonging to one and only one database object (i.e., one table space or index). That being the case, if we honored FRAMESIZE(2G) for a PGSTEAL(NONE) pool, there would be the potential for a large amount of wasted space in some of those very large page frames. If you specify FRAMESIZE(2G) for a Db2 12 PGSTEAL(NONE) buffer pool, the pool will end up being backed by 4K page frames. Large page frames can definitely provide a CPU efficiency benefit for Db2 buffer pools. How can you get large page frames for a PGSTEAL(NONE) pool? By specifying FRAMESIZE(1M) for the pool. If a page frame used for the contiguous part of a Db2 12 PGSTEAL(NONE) buffer pool can hold pages belonging to only one database object (as mentioned), wouldn't we potentially waste space in some 1 MB page frames if we were to use them for a PGSTEAL(NONE) pool? Yes, but some wasted space in some 1 MB frames is generally not going to be a big deal in a z/OS LPAR that has (as is more and more often the case) hundreds of thousands of MB of real storage. [Note, by the way, that a 1 MB page frame used for the overflow area of a Db2 12 PGSTEAL(NONE) buffer pool can hold pages belonging to more than one database object - that is one reason for the drop-off in CPU efficiency for page access we get when the page in question is in the overflow area of a Db2 12 PGSTEAL(NONE) pool versus being in the contiguous part of the pool.]

Recommendation: If it is expected that there will be some buffer stealing for a pool, and the size of the pool is at least 20 GB, use PGSTEAL(LRU) and PGFIX(YES) and FRAMESIZE(2G) for the pool.

Rationale: The CPU efficiency advantage provided by using 2 GB page frames for a buffer pool is generally not very significant (versus the use of 1 MB frames) unless the buffer pool is really large (say, 20 GB or larger in size).

Recommendation: If it is expected that there will be some buffer stealing for a pool, and the size of the pool is less than 2 GB, use PGSTEAL(LRU) and PGFIX(YES) and FRAMESIZE(1M) for the pool.

Rationale: As you might expect, FRAMESIZE(2G) is not going to be honored for a pool that is smaller than 2 GB in size (though we will use a 2 GB frame for a pool if FRAMESIZE(2G) is specified and the pool's size is only a little bit smaller than 2 GB).

Recommendation: If it is expected that there will be some buffer stealing for a pool, and the size of the pool is between 2 GB and 20 GB, use PGSTEAL(LRU) and PGFIX(YES) for the pool. For the pool's real storage frame size, use either FRAMESIZE(1M) or FRAMESIZE(2G) - your choice.

Rationale: For a pool whose size is between 2 GB and 20 GB, use of 2 GB page frames will likely not deliver performance that is significantly better than what you'd see with FRAMESIZE(1M). With that said, use of 2 GB frames for a pool that is smaller than 20 GB will not have a negative impact on performance versus 1 MB frames; so, I say that this is your choice, and you'll be fine either way.

Recommendation: If it is expected that the rate of buffer stealing for a pool will be non-zero but VERY LOW (maybe less than 10 buffer steal actions per minute), you MIGHT consider using PGSTEAL(FIFO) for the pool.

Rationale: I see PGSTEAL(FIFO) as being very much a niche-type specification. Yes, it means that we won't consume CPU cycles in tracking least-recently-used information for buffers, but it could negatively impact performance if the buffer stolen via FIFO (the one that, of all the buffers in the pool, holds the page read into the pool the longest time ago) happens to hold a frequently referenced page - in that case, that page will likely have to be read right back into the pool. With this in mind, I think that PGSTEAL(FIFO) might deliver a bit of a performance advantage over PGSTEAL(LRU) if the rate of buffer stealing is a really low non-zero value (like, maybe single digits of buffer steals per minute). If the buffer stealing rate is zero, then as previously mentioned the pool could be a good candidate for PGSTEAL(NONE).

Wednesday, April 29, 2020

Clearing the Air Regarding Db2 12 for z/OS and "Deprecated Objects"

On a number of occasions over the past several months, I have seen that there is some misunderstanding in the Db2 for z/OS community regarding support for certain types of database objects in a Db2 12 environment. Via this blog entry, I want to clear up these misconceptions.

The misunderstandings to which I alluded are, most often, related to traditional segmented table spaces. By "traditional segmented," I mean table spaces that are not simple, not partitioned and not universal (I say "traditional segmented" instead of just "segmented" because universal table spaces are themselves segmented). In its most extreme form, the misconception is expressed in this way (and in the quoted phrase below, "segmented" refers to what I call a "traditional segmented" table space):

"I hear that segmented table spaces are not supported in a Db2 12 environment."

Wrong, wrong, wrong.

Where did this misconception come from? Most likely, from a change introduced with Db2 12 function level V12R1M504 (or just M504, for short). OK, quick shot of background information: you are probably aware that with Version 12, Db2 for z/OS has gone to a "continuous delivery" mechanism for getting new functionality out to users. What does that mean? It means that, several times per year, new functionality for Db2 12 is introduced by way of the product's maintenance stream. In other words, Db2 12 fixes come out, and applying one of these fixes takes the code in a Db2 12 subsystem's load library to a new code level (fixes are tied to what IBM calls APARs, and indeed every Db2 12 function level from M501 upwards is associated with an APAR number, as indicated on the "What's new in Db2 12 function levels" page on IBM's Web site). Does that mean you need to be careful about applying Db2 12 fixes, out of concern that you might inadvertently take your system's Db2 code to a function level you're not ready to support? No, you don't need to worry about that. The reason: applying a fix that takes a Db2 12 subsystem's code to a new level doesn't change anything with regard to the subsystem's functionality. A new code level's functionality is activated only when - at a time of your choosing - someone issues an -ACTIVATE FUNCTION LEVEL command that specifies the function level in question.

Back to function level M504. A good bit of the information that went out about that function level - in presentations and such - made mention of "deprecated objects." That, in turn, caused a lot of people to think that (in particular) traditional segmented table spaces could not be used in a Db2 12 system with function level M504 (or higher) activated. As mentioned previously, some people even thought that traditional segmented table spaces could not be used at all in a Db2 12 environment, regardless of the activated function level.

Part of this confusion likely results from misinterpretation of that term, "deprecated." "Deprecated" does not mean gone; instead, it means that the deprecated thing (a feature, a ZPARM parameter, a type of database object, whatever) is a) not going to be enhanced going forward, and b) is likely to go away eventually. That "eventually" can be a long time, in some cases. Example: you lost the ability to create so-called simple table spaces (not segmented, not universal, not partitioned) back in the Db2 9 time frame, but existing simple table spaces can still be used, even in a Db2 12 system.

Here's the straight skinny regarding Db2 12 function level M504 and deprecated objects: when SQL DDL statements are issued via a package for which application compatibility level V12R1M504 or higher is in effect, non-universal table spaces cannot be created. [When a package's application compatibility level is V12R1M504 or higher, that also precludes creation of synonyms (we've been telling you for years to use aliases, instead) and hash-organized tables (those haven't been used by many organizations, and their benefits in a modern Db2 system tend to be marginal, at best).] Boom. That's it.

A package's application compatibility level is specified via the package BIND (or REBIND) parameter APPLCOMPAT, about which I wrote in an entry I posted to this blog last year. When you activate Db2 12 function level V12R1M504, does that mean you can no longer create non-universal table spaces (such as traditional segmented table spaces)? No. Remember, what matters here is the APPLCOMPAT value of the package through which a CREATE TABLESPACE statement is issued (that could be, for example, a SPUFI package or a DSNTEP2 package - common avenues for the issuance of Db2 DDL statements). You could have a Db2 12 system on which function level V12R1M506 is activated (that's the latest function level available as of the posting of this blog entry), but if you're issuing CREATE TABLESPACE statements via a package bound with, for example, APPLCOMPAT(V12R1M503), you can create traditional segmented table spaces. When the package (or packages) through which you typically issue Db2 DDL statements is bound with APPLCOMPAT(V12R1M504) or higher (possible only if function level V12R1M504 or higher has been activated, as the APPLCOMPAT value for a package cannot be higher than the Db2 12 system's currently activated function level), does that mean that the package can no longer be used to create traditional segmented table spaces? No, that is not what it means. If a package (say a DSNTEP2 package) is bound with APPLCOMPAT(V12R1M504) or higher, and you need to use that package to create a traditional segmented table space, you can do that. How? Simple: prior to issuing the CREATE TABLESPACE statement, issue this statement:

SET CURRENT APPLICATION COMPATIBILITY = 'V12R1M503';

Then, issue the CREATE TABLESPACE statement that creates the traditional segmented table space (a dynamic SQL-issuing program can use the SET CURRENT APPLICATION COMPATIBILITY statement to take its application compatibility level to a value below - but not above - the APPLCOMPAT value of the associated package).
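For completeness, here is the whole sequence, end to end, as it might be issued through SPUFI or DSNTEP2 (the database, table space and buffer pool names are made-up examples):

SET CURRENT APPLICATION COMPATIBILITY = 'V12R1M503';

CREATE TABLESPACE SEGTS01
  IN TESTDB01
  SEGSIZE 32
  LOCKSIZE ANY
  BUFFERPOOL BP1;

Because the CREATE TABLESPACE statement specifies SEGSIZE with no MAXPARTITIONS or NUMPARTS clause, and the application compatibility level in effect is below V12R1M504, the result is a traditional segmented table space.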

And there you have it. Existing traditional segmented table spaces can still be used in a Db2 12 system, and they can be created in a Db2 12 system via a package with an APPLCOMPAT value of V12R1M503 or lower, or via a package with an APPLCOMPAT value of V12R1M504 or higher if, prior to issuance of the CREATE TABLESPACE statement, the package's application compatibility level has been taken below V12R1M504 via a SET CURRENT APPLICATION COMPATIBILITY statement.

Hope this helps.