Thursday, August 31, 2023

Db2 for z/OS: An Important Difference Between Data in Memory and Data on Disk

For the past several years, I've tried to post an entry per month to this blog. Sometimes, it will be very near the end of a month, and I haven't posted anything since the previous month, and I don't have any good ideas for a new entry. Then, I'll have an exchange with someone - could be via email, a phone call, a face-to-face discussion, whatever - and BOOM! Right there I'll find the inspiration for a blog entry. That happened again yesterday - the day before the last day of the month. Whew!

Here's what happened: in an email message, an IT professional recounted to me a situation that had her perplexed. The scenario: 11,000 rows of data were loaded into a table using the IBM Db2 for z/OS LOAD utility. Immediately after the completion of that load job, a program that updated 8 of those just-loaded rows executed and ran successfully to completion. Right after that, an unload job for the table in question was executed. This unload was performed using an IBM software product called Db2 High Performance Unload for z/OS, or HPU, for short (see https://www.ibm.com/docs/en/dhpufz/5.1.0?topic=documentation-db2-high-performance-unload-overview). HPU has two modes of operation: it can perform an unload by operating directly on the VSAM data set(s) associated with the target table, or it can do the unload through Db2, in which case the data is accessed in memory (i.e., in the buffer pool to which the table's table space is assigned). This unload was done in the former of these modes - operating directly on the VSAM data set(s) associated with the table's table space.

The result of the unload surprised the person who emailed me. How so? Well, the unload was done using a predicate (the WHERE clause that you might see in a query), and the update program that ran between the load (of the 11,000 rows) and the unload changed values in a way that should have caused 8 of the 11,000 loaded rows to be filtered out by the unload process's predicate (the other 10,992 rows would be qualified by the predicate). The person who emailed me expected 10,992 records in the unload data set, but there were in fact 11,000 records in that data set. The updates that should have caused 8 rows to be disqualified by the unload process's predicate were committed before the unload job ran, so why was this update action not reflected in the contents of the unload data set? Consternation increased when another unload of the table, executed a few hours later (again, with the unload process using a predicate and operating directly on the table's associated VSAM data set(s)), generated an unload data set that did contain the expected 10,992 rows.

What in the world was going on here?

Here's what was going on: this all has to do with a big difference between a committed data change (which has relevance for Db2 data-in-memory) and an externalized data change (which relates to Db2 data-on-disk). What's important to know is that Db2 for z/OS does not externalize data changes (i.e., does not write changed data to the associated VSAM data set) as part of commit processing. Instead, database write I/O operations (to externalize data changes to VSAM data sets on disk) are done in a deferred way (and usually asynchronously, at that). This aspect of Db2's operation is critically important to scalability when it comes to data-change operations (e.g., INSERT, UPDATE and DELETE). If Db2 had to write changed pages to disk at commit time, data-change throughput would take a major hit.

In the scenario described above, the first unload generated by HPU (done right after the programmatic update of 8 of the 11,000 rows previously LOAD-ed into the table), operating directly on the table space's underlying VSAM data set(s), did not reflect the post-LOAD update of the 8 rows because the page(s) changed by the updating program were not written to the underlying VSAM data set(s) at commit time. The changed page(s) were externalized later by Db2 via deferred write processing, and that is why the second unload process, also operating directly on the table space's VSAM data set(s), reflected the aforementioned updates of 8 of the 11,000 table rows.

If Db2 deferred write action did eventually get the changed pages (associated with the updating of 8 of the table's rows) written to the associated VSAM data sets on disk - and it did - then what caused that deferred write action to happen? Usually Db2 deferred write operations for a given buffer pool are driven by one of two deferred write thresholds being reached for the pool. The deferred write queue threshold (abbreviated as DWQT) is reached when changed-but-not-externalized pages occupy more than a certain percentage of the pool's buffers (the default threshold value is 30), and the vertical deferred write queue threshold (VDWQT) is reached when changed-but-not-externalized pages belonging to a single data set occupy more than a certain percentage of the pool's buffers (the default threshold value is 5). Whenever either of those thresholds is reached (and it's usually the VDWQT threshold), deferred write activity is triggered. The deferred write I/Os, by the way, are generally multi-page in nature (multiple pages written to disk in one I/O operation), and that is good for CPU-efficiency on a per-page basis. The CPU time associated with database writes (usually not much - I/Os don't require many CPU cycles) is charged to the Db2 database services address space (aka the DBM1 address space).
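
For reference, those thresholds are buffer pool attributes that a DBA can view and change with Db2 commands. A minimal sketch (the buffer pool name and threshold values here are purely illustrative):

-DISPLAY BUFFERPOOL(BP2) DETAIL
-ALTER BUFFERPOOL(BP2) DWQT(30) VDWQT(5)

The DISPLAY form shows, among other things, the pool's current threshold settings and counters indicating how often deferred write activity was triggered; the ALTER form changes the thresholds.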

What about synchronous database write actions? Are those, too, writes of changed pages whose externalization was deferred past commit time? Yes, they are - they're just triggered by something besides deferred write queue thresholds being reached. In the case of synchronous writes, the trigger is Db2 checkpoint processing. How that works: when Db2 executes a system checkpoint (which it does, by default, every 3 minutes), it notes all pages in each buffer pool that are in changed-but-not-yet-externalized status. When the next system checkpoint rolls around, Db2 checks to see whether any of the changed-but-not-yet-externalized pages noted at the last checkpoint have still not been written to disk. If there are any such pages, they will be synchronously written to disk as part of checkpoint processing. Here, "synchronous" means that Db2 will immediately start writing those pages to disk, and it will continue to do that until they are all externalized.

OK, back to the story that prompted this blog entry. Is there a way that the initial HPU unload (the one executed very shortly after the programmatic update of 8 of the 11,000 rows LOAD-ed into the table) could have generated an unload data set with the desired 10,992 rows? Yes. In fact, there were at least two options for getting that done. One option would be to execute the Db2 QUIESCE utility for the table's table space prior to running the HPU unload. This would cause Db2 to write all changed-but-not-yet-externalized pages of the table's table space to disk, and then an HPU unload operating directly on the table space's VSAM data sets would reflect the update of the 8 rows.
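
To make that first option concrete, the utility control statement is about as simple as they come - something like the following, with the database and table space names made up for illustration (the default WRITE YES behavior of QUIESCE is what forces the changed-but-not-yet-externalized pages out to disk):

QUIESCE TABLESPACE MYDB.MYTS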

The second option would be to have HPU do the unload through Db2, as opposed to operating directly on the table space's underlying VSAM data sets - this is something that can be done through an HPU keyword. That, in turn, would have caused the HPU unload to be accomplished using data in memory (i.e., in the table space's assigned buffer pool) - any of the to-be-unloaded pages that were not already in memory would have been read into memory as part of the unload process. This approach would have reflected the programmatic update of the 8 rows because those updates had been committed, and Db2 data-in-memory is always in a transactionally consistent state (any in-memory data that is not transactionally consistent because of an in-flight - that is, not-yet-completed - data change operation is blocked from access by X-type locks, taken at a page or a row level, that are not released until the data changes in question are committed).
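
I won't try to reproduce full HPU syntax here, but conceptually the choice of mode comes down to a keyword on the HPU unload control statement - my understanding is that the keyword is DB2, with values along the lines of NO, YES and FORCE (check the documentation for the HPU level you have installed). A rough sketch, with made-up object names, of an unload directed to go through Db2:

UNLOAD TABLESPACE MYDB.MYTS DB2 FORCE
  SELECT * FROM MYSCHEMA.MYTABLE WHERE STATUS = 'A';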

Which of these options would you choose? It would depend on what is most important for you. The QUIESCE option would allow the HPU unload to operate directly on the VSAM data set(s) associated with the table space, and that would yield a CPU efficiency benefit, but the QUIESCE itself could be at least somewhat disruptive for applications accessing the target table. The "through Db2" option would not disrupt any application processes that were accessing the table at the time, but it would cause the HPU unload operation to consume some additional CPU time.

By the way, you might be wondering: if committed data changes are written to disk in a deferred way, how is loss of committed data changes prevented in the event of an abnormal termination (i.e., a "crash") of the Db2 subsystem that happens when there are changed-but-not-yet-externalized pages in memory? Worry not - data recorded in the Db2 transaction log is used to process those "pending writes" as part of the "roll-forward" phase of Db2 restart processing following a subsystem failure.

One more thing: the mechanics of all this are different in a Db2 data sharing environment (involving group buffer pool writes and associated castout operations to eventually get changed pages written to VSAM data sets on disk), but the net effect is the same.

And there you have it. I'm thankful for your visiting this blog, and I'm thankful for interesting questions that come in when I'm trying to figure out what I'm going to blog about.

Friday, July 28, 2023

Db2 for z/OS: What I Would Say to Application Developers (Part 2)

In part 1 of this two-part entry (posted last month), I emphasized what I consider to be job one for a developer coding a Db2 for z/OS-targeted query (write a SQL statement that will retrieve the data your program requires, and don't worry too much about the statement's performance - that's mostly taken care of by Db2), while also noting ways in which a developer can effectively work in partnership with a DBA to enhance the performance of a Db2-based application. In this part 2 entry I will focus on leveraging application-enabling features of Db2 for z/OS.


First, what interface is right for a Db2-accessing application?

For a long time, this was not a very meaningful question, as there was essentially one application interface to Db2 for z/OS: the SQL interface. Sure, there could be discussions around dynamic versus static SQL (i.e., SQL statements prepared by Db2 for execution when initially issued by a program, versus statements that are pre-prepared for execution via a Db2 process known as "bind"), or about the use of "generic" (i.e., non-DBMS-specific) SQL forms such as JDBC and ODBC, but in any case you were talking about SQL, period. That changed with Db2 12 for z/OS, which introduced Db2's REST interface. Using Db2's REST interface does not involve compromising performance, security or scalability, so I'd say use it when it makes sense. When might it make sense? Here are some considerations:

  • Client-side programming language flexibility - The IBM Data Server Driver, which supports the use of SQL forms such as JDBC and ODBC with Db2 for z/OS, can be used with applications written in a number of programming languages (generally speaking, programs coded with embedded static SQL statements don't require a driver), but suppose your team wants to use a language for which the IBM Data Server Driver is not applicable? Well, does the language allow a program to issue a REST request? If the answer to that question is, "Yes" (often the case), programs written in that language can access Db2 via its REST interface.
  • Total abstraction of the particulars of the service-providing system - If you're using a generic SQL form such as JDBC or ODBC, you don't have to know the specifics of the relational database management system being accessed, but you still know that your program is accessing a relational DBMS (or something that through virtualization software appears to be a relational DBMS). Maybe you don't want to have to know that (even if you have strong SQL skills) - you just want to request some service and have it performed as expected by some system, and you don't care a whit about what that system is and how it does what it does. In that case, the REST interface to Db2 looks really good.
  • Separation of programming duties - When your client-side program accesses Db2 using REST requests, your program isn't issuing SQL statements - you're instead coding REST requests that invoke server-side SQL statements that were likely written by someone else. That separation of programming duties - client-side developers code programs that issue REST requests and process any returned results, and server-side developers write the Db2 SQL statements and/or stored procedures that will be REST-invoked - might suit you (and your organization's IT leadership) just fine.
So, think this over for a new application that will access Db2 for z/OS, and make the appropriate choice.


Let Db2 for z/OS do work so you don't have to

There are a number of Db2 features that can provide useful functionality for an application, and when you leverage one of these features that's functionality that you don't have to provide via program code. Here are some of the features in this category, delivered through recent versions of Db2 for z/OS:
  • Temporal data support (introduced with Db2 10 for z/OS) - This capability, through which a time dimension can be added to data in a table, comes in two forms (both can be implemented for a given table, if desired, or one or the other can be used; sample statements for both forms appear after this list):
    • System-time temporal (also known as row versioning) - The way this works: when a row in table T1 is deleted or updated, the "before" image of that row (the row as it existed prior to the update or delete operation) is automatically inserted by Db2 into the "history" table associated with T1, and Db2 also updates timestamp values in the history table row indicating when that "version" of the row first became the current version (i.e., when the row was first inserted into T1, or when it was updated to that version) and when the row stopped being the current version (i.e., when it was deleted from T1, or when it was updated to a new version). Here's what this means: using pretty simple Db2 temporal query syntax, your application can easily find out from Db2 what the row for a given entity (a bank account, an insurance policy, a customer record, whatever) looked like at a previous point in time. For example, when it comes to insurance claim adjudication, what's important is not what the policy holder's coverage is now - what's important is the coverage that was in effect when the event prompting the claim occurred. That's easy to determine with Db2 row versioning functionality. Also easy is seeing how a given entity's row in a table changed over a period of time, and who made changes to the row.
    • Business-time temporal - This form of Db2 temporal functionality lets you add future data changes to a table with an indication of when that change will go into effect and how long it will be in effect (if not indefinitely). For example, you could through a business-time temporal UPDATE statement indicate (and this will be reflected in the target table) that the price of product XYZ is going to change from $10 to $12 on May 1 of next year. Updates of this nature will not impact programs that, by default, are accessing rows that, from a business perspective, are currently in effect. Having future prices (for example) in a product table provides at least a couple of benefits: 1) it ensures that price changes will actually go into effect when scheduled, and 2) it allows financial analysts to issue queries that will show what revenue and profits will be with prices that will be in effect at a future date.
  • Db2 transparent archiving (introduced with Db2 11 for z/OS) - This feature can be helpful, especially for performance, in this scenario: table T1 has 20 years (for example) of historical transaction data (maybe related to account activity), but the vast majority of accesses to the table target rows that have been added in the past 90 days; further, because the table's row-clustering key is not continuously-ascending and because row inserts far outnumber row deletes, the "popular" rows in the table (the ones not more than 90 days old) are physically separated from each other by ever-larger numbers of "old and cold" rows (the ones rarely accessed by programs). In that case the performance for access to popular rows will get progressively worse, and the cost of administering the table (e.g., backing it up, periodically reorganizing it, etc.) will steadily increase. When Db2 transparent archiving is activated for the table T1 (easily done by a DBA), T1 will end up holding only the most recent 90 days of data (the popular rows), while all of the "old and cold" rows are physically stored in the "archive" table that - in an "under the covers" way - is associated with T1 (result: substantially better performance for access to the popular rows, because they are concentrated in a smaller table); and, for query purposes Db2 makes the base table and its associated archive table appear logically as a single table, so a query referencing only the base table can retrieve archived rows as needed; and, when a row is deleted from the base table (after it's been there for - in this example - 90 days), that row is automatically inserted by Db2 in the associated archive table.
  • Result set pagination (introduced with Db2 12 for z/OS) - Db2 12 made a new clause, OFFSET, available for a SELECT statement. OFFSET, used in combination with the FETCH FIRST n ROWS clause, makes it programmatically easier to return a multi-row result set in "pages" that a user can scroll through (a sketch appears after this list, along with one for the piece-wise DELETE described next).
  • "Piece-wise" DELETE (introduced with Db2 12 for z/OS) - Starting with Db2 12, you can use the FETCH FIRST n ROWS clause with a DELETE statement, and that makes it really easy to write a program that will remove a large number of rows in a table in a series of small units of work (so that your data-purge program will not acquire too many locks at one time - nice for concurrency of access with other programs targeting the table).
  • Newer built-in functions - Db2 12 for z/OS added some nice built-in functions (brief examples appear after this list), including:
    • PERCENTILE_CONT and PERCENTILE_DISC - These functions (the former treats values in a column as points in a continuous distribution of values, while the latter treats column values as discrete data values) are useful for writing a query that answers a question such as, "Show me the value that is the 90th percentile with regard to salaries of employees in department A01."
    • LISTAGG - This function makes it easy to have a comma-separated list of values (e.g., last names of employees who have more than 10 years of service with the company, for a given department) as a column of a query result set.
    • HASH_MD5 - With this function, you can use a SQL statement to get an MD5 hash of a value before inserting that value into a table (and there are three related built-in functions associated with other popular hashing algorithms).
  • Application-specific lock timeout value (introduced with Db2 13 for z/OS) - Db2 13 provided the new CURRENT LOCK TIMEOUT special register, through which an application can set a lock timeout value that is different from the default lock timeout value for the Db2 system. Suppose, for example, that the system's default lock timeout value is 30 seconds. Maybe the development team for a mobile app that will drive accesses to a Db2 for z/OS database wants to make sure that a user will never have to look for 30 seconds at a spinning colored wheel on a mobile phone screen if a lock required by the app can't be readily obtained. The application team might decide (probably rightly) that it would be better to have a lock timeout value of 3 seconds for this app, have a condition handler in the Db2-accessing program for a lock timeout error, and in the event of a lock timeout (noticed in 3 seconds, versus 30 seconds) send an "Oops! Something went wrong - please try again" message to the user. I as a user would prefer that to looking at the spinning colored wheel for 30 seconds. Similarly, the development team for a long-running, mission-critical batch application might not want their job to time out unless a lock wait exceeds 10 minutes. Easily done with a SET CURRENT LOCK TIMEOUT statement.
  • SQL Data Insights (introduced with Db2 13 for z/OS) - This feature represents the first embedding of advanced machine learning technology in the Db2 for z/OS "engine." It's easy for a DBA to set up and easy for a developer (or a user utilizing a query tool) to use, because the data science work was done by IBM development teams. SQL Data Insights is made usable in the form of three new built-in Db2 functions (more will be delivered via future Db2 13 function levels): AI_SIMILARITY, AI_SEMANTIC_CLUSTER and AI_ANALOGY. These new functions allow for the asking of what I like to call "fuzzy" queries. Here's one example: suppose the fraud analysis team at an insurance company finally caught someone who had been submitting fraudulent claims (and had been very good at covering his tracks). Among the company's several million other policy holders, who else might be engaging in hard-to-detect fraudulent activity? Via the AI_SIMILARITY built-in function of Db2, you can (using standard query syntax for a built-in function) easily code a query that will do this: "Hey, Db2. Here is the ID of a policy holder. Show me the IDs of the 20 other policy holders who are most like this one." And here's the kicker: in coding that query, you don't have to tell Db2 what you mean by "like this one." Db2 will detect patterns of similarity in the data in the specified table - patterns that a human being might be challenged to discern - and return the rows with the highest "similarity scores" in relation to the policy holder ID provided as input to the function. You can turn that list over to the fraud detection team and say, "Hey, guys. Do a deep-dive analysis of activity for the policy holders associated with these 20 IDs - they are the ones most similar to the fraudster that we recently caught."
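
To give a flavor of the temporal capabilities described above, here are a couple of sketches. The table and column names are made up, and the tables are assumed to have already been set up with the appropriate SYSTEM_TIME and BUSINESS_TIME period definitions. First, a system-time query that asks what a policy holder's coverage looked like this past March 1:

SELECT COVERAGE_AMT
  FROM POLICY FOR SYSTEM_TIME AS OF TIMESTAMP '2023-03-01-00.00.00'
  WHERE POLICY_ID = 'ABC123';

And a business-time UPDATE that schedules a price change for May 1 of next year:

UPDATE PRODUCT
  FOR PORTION OF BUSINESS_TIME FROM '2024-05-01' TO '9999-12-31'
  SET PRICE = 12.00
  WHERE PRODUCT_ID = 'XYZ';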
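
For the two Db2 12 SQL additions mentioned above - OFFSET and piece-wise DELETE - minimal sketches look like this (table and column names are again made up). Getting the third "page" of a result set, 20 rows at a time:

SELECT ORDER_ID, ORDER_DATE
  FROM ORDERS
  ORDER BY ORDER_DATE DESC
  OFFSET 40 ROWS
  FETCH FIRST 20 ROWS ONLY;

And removing old rows in small units of work - the program would issue this DELETE, commit, and repeat until no more rows qualify:

DELETE FROM ORDERS
  WHERE ORDER_DATE < CURRENT DATE - 7 YEARS
  FETCH FIRST 500 ROWS ONLY;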
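
And brief examples for a couple of the built-in functions listed above (again, illustrative names): the 90th-percentile salary in department A01, and a comma-separated list of long-tenured employees per department:

SELECT PERCENTILE_CONT(0.9) WITHIN GROUP (ORDER BY SALARY)
  FROM EMP
  WHERE WORKDEPT = 'A01';

SELECT WORKDEPT,
       LISTAGG(LASTNAME, ', ') WITHIN GROUP (ORDER BY LASTNAME) AS EMP_LIST
  FROM EMP
  WHERE YEARS_OF_SERVICE > 10
  GROUP BY WORKDEPT;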
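
Finally, for the two Db2 13 items above: the application-specific lock timeout is set with a single SQL statement, and an AI_SIMILARITY query might look roughly like the sketch below. The table and column names are made up, the referenced column would first have to be AI-enabled through SQL Data Insights set-up work, and you should check the Db2 13 documentation for the full AI_SIMILARITY syntax - this is just meant to convey the shape of such a query.

SET CURRENT LOCK TIMEOUT = 3;

SELECT POLICY_HOLDER_ID,
       AI_SIMILARITY(POLICY_HOLDER_ID, 'ID-OF-KNOWN-FRAUDSTER') AS SCORE
  FROM POLICY_HOLDERS
  ORDER BY SCORE DESC
  FETCH FIRST 20 ROWS ONLY;
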
And there are more application-enabling Db2 features where these came from. Again, let Db2 do work that you might otherwise have to do programmatically. Not only will that save you time and effort - it's likely that Db2 will get the work done more CPU-efficiently than program code would.


Take advantage of Db2 global variable functionality

Db2 11 for z/OS introduced global variables. Unlike a traditional host variable, which you have to define in your program code, a Db2 global variable is created by a Db2 DBA. Once a global variable has been created, it can be used by an application process (a Db2 DBA just has to grant to the ID of the process the privilege to use the global variable). When an application references a Db2 global variable in a SQL statement, it gets its own instance of that global variable (in other words, if there is a Db2 global variable called GLOBVAR, and application process A puts 'cat' in the global variable and application process B puts 'dog' in the global variable, when the two processes look at the value of GLOBVAR then A will see 'cat' and B will see 'dog'). A Db2 global variable makes it really easy to get a value from a Db2 table and pass it via the global variable to a subsequent SQL statement (when the subsequent SQL statement references the global variable in, say, a query predicate, it is as though the predicate were referencing the value previously placed in the global variable).
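
A minimal sketch of the mechanics (the variable name, data type and tables are made up for illustration): a DBA creates the variable once and grants the needed privileges, and an application can then set the variable from one SQL statement and reference it in another, with each application process seeing only its own instance of the value.

CREATE VARIABLE MYSCHEMA.GLOBVAR VARCHAR(30);
GRANT READ ON VARIABLE MYSCHEMA.GLOBVAR TO PUBLIC;
GRANT WRITE ON VARIABLE MYSCHEMA.GLOBVAR TO PUBLIC;

SELECT REGION_CODE INTO MYSCHEMA.GLOBVAR
  FROM CUSTOMER
  WHERE CUST_ID = 1234;

SELECT ORDER_ID
  FROM ORDERS
  WHERE REGION_CODE = MYSCHEMA.GLOBVAR;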

Another Db2 global variable use case: a global variable makes it really easy to get a value back from a Db2 advanced trigger (that's a trigger that has, as part of its definition, a SQL procedure language routine - more on SQL PL below). The trigger just puts the value in a global variable, and when the trigger's processing is done and control is returned to the program that caused the trigger to fire, the program looks in the global variable and, voila - there's the value placed by the trigger.


Leverage SQL procedure language, and manage and deploy Db2 for z/OS SQL routines in an agile way

SQL procedure language (SQL PL) lets you code Db2 routines (stored procedures, user-defined functions and advanced triggers) using only SQL statements. This is do-able thanks to a class of Db2 SQL statements called control statements (a reference to logic flow control). These statements have names such as LOOP, ITERATE, WHILE and GOTO - you get the picture.

SQL PL has become really popular over the past 10-15 years (it was introduced with Db2 9 for z/OS). When it comes to Db2 data processing routines, those written in SQL PL can have both functional and performance advantages over routines written in other languages (autonomous procedures are just one example of a functional benefit of stored procedures written in SQL PL). If you decide to use SQL PL routines for an application, I encourage you to manage and deploy these routines in an agile way. In terms of SQL PL routine management, consider how associated source code will be managed. The source code of a Db2 for z/OS "native SQL procedure" (i.e., a stored procedure written in SQL PL) is the CREATE PROCEDURE statement that defines the stored procedure. How should you manage this source code? The same way you'd manage any other source code - don't get thrown off by the fact that this source code has a CREATE in it. Does your organization use an open-source source code management (SCM) tool such as GitLab? OK, fine: use GitLab to manage the source for your Db2 native SQL procedures - you wouldn't be the first to do that.

How about deployment of Db2 SQL PL routines - especially native SQL procedures? To do that in the most agile way possible, use CREATE OR REPLACE syntax when coding these routines. This is the best fit for a unified DevOps pipeline (i.e., an application deployment pipeline used for all of your organization's applications, regardless of the platform(s) on which application programs will run).
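
To tie those last couple of points together, here is a minimal sketch of a native SQL procedure coded with CREATE OR REPLACE syntax - the names and logic are purely illustrative, and the statement text below is also the source you would keep in your SCM tool:

CREATE OR REPLACE PROCEDURE MYSCHEMA.FLAG_DORMANT_ACCOUNTS
  (IN P_CUTOFF_DATE DATE, OUT P_ROWS_FLAGGED INTEGER)
  LANGUAGE SQL
BEGIN
  UPDATE ACCOUNTS
    SET STATUS = 'DORMANT'
    WHERE LAST_ACTIVITY_DATE < P_CUTOFF_DATE
      AND STATUS = 'ACTIVE';
  GET DIAGNOSTICS P_ROWS_FLAGGED = ROW_COUNT;
  IF P_ROWS_FLAGGED > 1000 THEN
    SIGNAL SQLSTATE '75001' SET MESSAGE_TEXT = 'Unexpectedly large number of accounts flagged';
  END IF;
END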


The bottom line

In the minds of the IBM folks who develop Db2 for z/OS, application developers are a tremendously important constituency. There are all kinds of Db2 features and functions that were expressly designed to make life easier for application developers working on programs that will access Db2 data servers. Learn about this stuff and take advantage of it. And work with your organization's Db2 for z/OS DBAs. They can help you leverage Db2's application-enabling capabilities.

Code on!

Friday, June 30, 2023

Db2 for z/OS: What I Would Say to Application Developers (Part 1)

Not long ago, I received a request to deliver a couple of Db2 for z/OS-focused webcasts for an organization's application developers. The person who asked about this initially gave me the impression that the purpose of the webcasts would be to help developers write "efficient SQL." This did not have much appeal for me (as I'll explain below), and I communicated as much back to the requester. Subsequently, this individual rephrased the request, indicating that the aim of the webcasts would be to provide "insights for developers to increase their confidence and skills around Db2 [for z/OS] in both development and problem analysis." "OK," I thought to myself, "Now we're talking." This ask gave me an opportunity to think about what I'd like to say to people who write (or might write in the future) application programs that involve accessing Db2 for z/OS-managed data. I'll share these thoughts of mine in a two-part blog entry. In this first part I'll talk about application performance - but maybe not in the way you'd expect. In the part two entry, which I'll likely post in the next 2-3 weeks, I'll focus on application enablement from a Db2 for z/OS perspective.

OK, why will a request to talk to developers about "writing efficient SQL" generally rub me the wrong way? Two reasons: first, something I heard a few years ago. I was at a big Db2-related conference, sitting in the audience for a session delivered by the person who was at that time the leader of the optimizer team in the IBM Db2 for z/OS development organization (Db2's optimizer parses a query and generates for that query the access plan that it estimates will produce the requested result set at the lowest cost and with the best performance). The presenter said (in words to this effect, and with emphasis added by me), "As the leader of the Db2 for z/OS optimizer team, my message for application developers is this: job one for you is to write the query that will retrieve the data that your program needs. If that query could be written differently so as to retrieve the same result faster, we'll take care of that." What he was specifically referring to is the Db2 optimizer's ability to re-write a query under the covers so that the same result will be generated faster (more on that re-write capability momentarily). That statement by the optimizer team leader made a huge impression on me, and I think his words were absolutely spot-on.

I feel that it's very important for an application developer, when writing SQL targeting a Db2 for z/OS database, to focus on a query's objective, versus its form. Why? For one thing, job one really is to get the right data. If a query returns incorrect or incomplete data to a program, who cares if the query runs quickly? A bad result that is returned in a short time is still a bad result. Nothing is more important than retrieving the data that a program requires. Secondly, I believe it's very important for a developer writing Db2-targeting SQL to not have to think about the fact that the target DBMS is Db2 for z/OS. All you as an application developer should really have to think about is that the target DBMS is relational in nature. If you have to stop and think, "Oh, let's see - the data I'm going after is in a Db2 for z/OS database. That means I have to do X, Y and Z in order to get good performance," that's going to negatively impact your productivity, assuming that you're also called on to write SQL that targets other relational DBMSs. As far as I'm concerned, when Db2 for z/OS is the target DBMS you should just think, "relational DBMS," and go from there.

Here's another reason that a request to "tell developers how to write efficient Db2 for z/OS SQL" raises my hackles: too many Db2 for z/OS DBAs, in my opinion, just assume that the average application developer writes inefficient SQL. It's kind of like complaining about the food at college just because it's college food, regardless of whether or not it's actually tasty. That's not a helpful attitude. I've advised Db2 for z/OS DBAs that they should think of themselves as partners with developers when it comes to getting new applications and new application functionality into production. Similarly, I would advise developers to be partners with Db2 for z/OS DBAs when it comes to analyzing and addressing performance issues related to Db2 for z/OS-accessing applications.

How can a developer be a partner when it comes to taking action to resolve performance issues related to Db2 for z/OS-targeted queries? Some thoughts on that matter:

  • Learn some of the lingo. Sometimes, Db2 for z/OS DBAs will say things like, "This SQL statement isn't performing well because it has this stage 2 predicate." Huh? OK, here's what that means: predicates (the result set row-qualifying parts of a query, such as WHERE ACCOUNT_NUM = 1234) in Db2 for z/OS SQL statements can be either stage 1 or stage 2 in nature. These terms refer, respectively, to two components of Db2 for z/OS: the data manager (stage 1) and the relational data system (stage 2). A stage 1 predicate can be evaluated by the Db2 data manager, while a stage 2 predicate has to be processed by the Db2 relational data system. Stage 2 predicates generally require more CPU time for processing than stage 1 predicates; furthermore, stage 1 predicates are usually index-able, while stage 2 predicates are almost never index-able. If a query's predicates are not index-able, the result can be a table space scan when the query is processed, and that could really slow down query execution, especially when the table in question is really large. On the other hand, a stage 2 predicate in a query might not be a big deal if the query has another predicate or predicates that are highly filtering (i.e., that are evaluated as "true" for only a small number of a table's rows) and index-able.

If you're interested, you can read about stage 1 and stage 2 and index-able and non-index-able predicates on this page of the Db2 for z/OS online documentation. Do you need to sweat a lot about stage 1 versus stage 2 predicates when writing SQL statements for an application? I'd say, not necessarily. Remember that job one is to write a query that returns the data that your program needs. On top of that, Db2 for z/OS, especially over the most recent several versions, has gained more and more query re-write capabilities (as I mentioned previously). Suppose, for example, that you need to get from a Db2 for z/OS table all customers whose accounts were opened in 2010, and you write a predicate like this one to get those rows: 

WHERE YEAR(DATE_OPENED) = 2010

That predicate is stage 2 and non-index-able; however, Db2 for z/OS, in preparing your query for execution, can automatically re-write that predicate in this form, which is stage 1 and index-able:

WHERE DATE_OPENED BETWEEN '2010-01-01' AND '2010-12-31'

  • Know something about what's possible for improving a query's performance. If a query processed by Db2 for z/OS is not performing as it needs to, re-writing the query in some way is one possible solution, but there may be other performance-boosting actions that could be taken instead. One possible solution could be a Db2 for z/OS index created on an expression - something do-able since Db2 Version 9 (as of the date of this blog post, the current Db2 for z/OS version is 13). Suppose, for example, that your program needs rows selected from a Db2 for z/OS table based on an upper-case comparison of values in column COL1 with a provided character string constant. Your query might have a predicate that looks like this:

WHERE UPPER(COL1, 'EN_US') = 'ABCDE'

That predicate is stage 2 and non-index-able; however, it will become index-able if an index is defined on that expression, as shown below (assume that COL1 is a column of table T1):

CREATE INDEX UPPER_VAL ON T1
(UPPER(COL1, 'EN_US'))

Another possible query tuning action is to provide the Db2 optimizer with the catalog statistics that it needs to choose a better-performing access plan for the query. In fact, colleagues of mine who are part of the team in IBM Support that works on cases (i.e., problem tickets) related to Db2 for z/OS query performance have said that the large majority of query performance issues on which they work are ultimately resolved in this manner. Here's the deal: the primary input to Db2 for z/OS access path selection is statistics about objects related to the query - e.g., tables, and indexes on those tables, and columns in tables - that are periodically collected (often by a Db2 for z/OS utility called RUNSTATS) and stored in the Db2 catalog. The richer and more complete the catalog statistics are, the better the Db2 optimizer can do in generating a well-performing access plan for a query. The tricky part is this: what statistics should be gathered for tables, columns, indexes, etc. to enable the optimizer to choose a well-performing access path for a particular query? Would histogram statistics on a given column help? How about frequent-value percentage information for a column? How about correlation statistics for a certain pair of table columns? Telling RUNSTATS to generate every possible statistic on everything would make that utility too costly to execute, so the utility is often executed with a specification that generates what you might call a good "base" of statistics (TABLE(ALL) INDEX(ALL) is typically the specification used for this purpose). How do you know when the optimizer needs additional statistics - and which additional statistics - in order to generate a well-performing access plan for a query that is currently performing poorly? Fortunately, starting with Db2 12 for z/OS the optimizer answers this question for you in the form of statistics profiles, as described in an entry I posted to this blog a few years ago. I'd say, if a query you wrote is not performing as it needs to then before trying to re-write the query or asking a DBA to add or alter an index to address the situation, see if Db2 has inserted a statistics profile in the SYSTABLES_PROFILES catalog table for one or more of the tables accessed by your query (as described in the aforementioned blog entry). If there is such a statistics profile or profiles, work with a DBA to get RUNSTATS executed using the profile(s) and then let Db2 re-optimize the query using the statistics added to the catalog by that RUNSTATS job (for a so-called static query, re-optimization would be accomplished via a REBIND of the associated Db2 package; for a query that is dynamic in the Db2 sense of that word, re-optimization is triggered by invalidating the previous prepared form of the query in Db2's dynamic statement cache). In plenty of cases this will resolve a query performance issue.
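
To make that sequence a little more concrete, here's a sketch of the two steps (the database, table space, table, collection and package names are all made up):

RUNSTATS TABLESPACE MYDB.MYTS TABLE(MYSCHEMA.MYTABLE) USE PROFILE

REBIND PACKAGE(MYCOLL.MYPKG)

The USE PROFILE specification tells RUNSTATS to collect the statistics recorded in the table's statistics profile, and the REBIND (for a static query) prompts Db2 to re-optimize the package's SQL statements with the new catalog statistics in place.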

  • Know something about EXPLAIN. EXPLAIN is a Db2 SQL statement (also an option of the BIND and REBIND PACKAGE commands, for static SQL statements) through which you can get information about the access path selected by the optimizer for a query (you can read about the EXPLAIN statement in the online Db2 for z/OS documentation). EXPLAIN-generated access path information, in its traditional form, is written to the EXPLAIN tables (these are Db2 tables, as you might expect). The most important of these tables is the one called PLAN_TABLE. Information in this table shows, among other things, the order in which the parts of a query are executed (for example, the order of table access when a statement involves a multi-table join), how data in a table is accessed (e.g., through an index and, if so, which index), the number of columns in an index key that are a match for columns referenced in a query predicate (MATCHCOLS - a higher number is generally a good thing), and the type of join method used when tables are joined (e.g., nested loop or merge scan). If you know something about the information in PLAN_TABLE, you'll be better equipped to partner with a Db2 for z/OS DBA to see how execution of a query that is not performing as desired could potentially be sped up.
A lot of veteran Db2 for z/OS DBAs are very familiar with EXPLAIN information in its PLAN_TABLE form. If you're interested in viewing EXPLAIN information in a different form, consider using the Visual Explain feature of the IBM Db2 for z/OS Developer Extension for Visual Studio Code (a no-charge IBM software tool designed to facilitate development of applications that access Db2 for z/OS data). Visual Explain (as the feature's name implies) provides a visual representation of the access path selected by the Db2 optimizer for a query; and, it's not just pretty pictures - hovering over or clicking on a part of the displayed access path provides very useful related information. Among other things, you can see the number of result set rows that Db2 thinks there will be following execution of that part of the access plan. For a query that is not performing as desired, you might see such information and think, "That's not right. After accessing that table the result set should have way more (or way fewer) rows than indicated by this estimate." The implication here is that you know something about the data that Db2 doesn't know (thus the Db2 optimizer's off-the-mark estimate concerning refinement of the result set as the query's access plan is processed). That, in turn, could suggest that catalog statistics might need to be augmented to provide Db2 with a clearer view of the characteristics of data in a target table (as mentioned in the reference, above, to Db2's SYSTABLES_PROFILES catalog table); or, that might prompt you to consider adding or modifying a query predicate to provide Db2 with a different form of the data request - one that might generate the required result set more quickly.
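
If you want to try this yourself, a minimal sketch looks like the statements below (the QUERYNO value is arbitrary, the query and its tables are made up, and your DBA team may have the EXPLAIN tables created under a particular schema):

EXPLAIN PLAN SET QUERYNO = 101 FOR
  SELECT C.CUST_NAME, O.ORDER_DATE
    FROM CUSTOMER C, ORDERS O
    WHERE C.CUST_ID = O.CUST_ID
      AND O.ORDER_DATE > CURRENT DATE - 30 DAYS;

SELECT QBLOCKNO, PLANNO, TNAME, ACCESSTYPE, ACCESSNAME, MATCHCOLS, METHOD
  FROM PLAN_TABLE
  WHERE QUERYNO = 101
  ORDER BY QBLOCKNO, PLANNO;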

The bottom line is this: while your primary focus in writing Db2 for z/OS SQL statements should be on retrieving the data your program requires (and I'm focusing on queries because INSERT, UPDATE and DELETE operations are usually more straightforward from a performance perspective), there could be a situation in which a query you coded needs to execute with greater speed and efficiency. Query performance tuning is something to which many Db2 for z/OS DBAs are accustomed, but success in that endeavor can be accelerated and made more likely when developers and DBAs work on this as a team. Don't worry about knowing as much about Db2 for z/OS as your DBA teammate - that's not your job; but, realize that your understanding of your application's data requirements, and of the data the application is accessing in Db2, can be a big help when it comes to tuning a query's performance. It's definitely a case in which 1 (your specialized knowledge as a developer) plus 1 (the DBA's specialized knowledge of Db2 for z/OS) is greater than 2.

In part 2 of this blog entry I'll have some things to say about application enablement in a Db2 for z/OS context - that is, about ways that you can leverage Db2 functionality to get more feature-rich applications developed more quickly.

Thursday, May 25, 2023

OUTBUFF: A Db2 for z/OS ZPARM You Really Ought to Check

Over the past year or so, I've seen more and more situations in which a too-small Db2 for z/OS log output buffer is negatively impacting system and application performance. The Db2 development team took aggressive action to remedy that situation via a change to the default value of OUTBUFF (the relevant ZPARM parameter) with Db2 13, but if you are not yet on Db2 13 you should make this change yourself in your Db2 12 environment (and, if you are on Db2 13, you should definitely be using the new OUTBUFF default, or an even higher value). In this blog entry I'll provide information that I hope will make all of this clear and meaningful for you.


The Db2 for z/OS log output buffer

Db2, of course, logs changes made to database objects (aside from the situation in which a table is defined with the NOT LOGGED attribute - unusual, in my experience). This is data integrity 101 - data changes have to be logged so that they can be rolled back if a unit of work fails before completing, and so that database objects can be recovered when that is required, and so that a Db2 subsystem can be restarted and restored to a consistent state after an abnormal termination, etc., etc.

Db2 data changes are physically written to the active log data sets, which are made reusable (i.e. made available for further use after being filled) via the log archive process. Prior to being written to the current pair of active log data sets (you ALWAYS want to do dual-logging, at least in a production Db2 environment), data changes are written to the log output buffer in memory. Information in the log output buffer is externalized (i.e., written to the disk subsystem) when the log output buffer is full, and also when a data-changing unit of work commits.

The size of the log output buffer is specified by way of the OUTBUFF parameter in the Db2 ZPARM module (think of the ZPARM module as the configuration parameter settings for a Db2 subsystem). For Db2 12, the default value for OUTBUFF is 4000 KB (that became the default value for OUTBUFF starting with Db2 10 for z/OS). With Db2 13, the OUTBUFF default value was changed to 102400 KB. Yeah, that's a 25X increase (when I communicated that in writing to the Db2 for z/OS team at a certain site recently, the Db2 systems programmer on the team asked me, "Is that a typo?"). Why this major increase in the OUTBUFF default value? Two reasons:

  1. It's eminently do-able for the vast majority of production Db2 subsystems that I have seen. Yes, in a relative sense a 25X increase in a ZPARM parameter's default value may seem to be a really aggressive move, but in absolute terms the increase - about 98 MB - is a drop in the bucket for a z/OS LPAR with a large real storage resource. Many production z/OS LPARs these days are generously configured with memory, because mainframe memory keeps getting cheaper on a per-gigabyte basis, and because leveraging that memory can be very good for system performance. It's increasingly common for production z/OS LPARs to have multiple hundreds of GB - or more - of central storage.
  2. It can be very helpful for Db2 system and application performance, as noted below.
From a performance perspective, a larger Db2 log output buffer has two main benefits. They are...


Larger OUTBUFF benefit 1: reduced application log write wait time

Db2 accounting trace data - specifically, data provided by accounting trace class 3 (which, along with accounting trace classes 1 and 2, is almost always active at all times for a production Db2 subsystem) - records (among other things) the time that Db2-accessing applications wait for log write operations to complete. Average wait-for-log-write-I/O time (available via a Db2 monitor-generated accounting long report) is generally quite small - often less than 1% of total in-Db2 wait time (i.e., class 3 time) for a Db2 application workload. In some cases, however, this can be a significantly larger percentage of in-Db2 wait time. Now, there are multiple factors that can contribute to elevated wait-for-log-write-I/O time, but one of these factors can be a too-small log output buffer. If you see higher levels of wait-for-log-write-I/O time for your Db2 application workload, check the value of the field labeled UNAVAILABLE OUTPUT LOG BUFF (or something similar to that - different Db2 monitor products sometimes label the same field in slightly different ways) in a Db2 monitor-generated statistics long report (the field will be in a section of the report with the heading LOG ACTIVITY, or something similar to that). In my experience the value of this field is usually 0, but if the field has a non-zero value then it could be a good idea to set OUTBUFF to a larger value for the Db2 subsystem in question. Even if the value of UNAVAILABLE OUTPUT LOG BUFF is 0, if your Db2 subsystem has a log output buffer that's on the small side then making it larger via an increase in the OUTBUFF value could help to make Db2 log write operations more efficient, thereby potentially contributing to a decrease in wait-for-log-write-I/O time for your Db2-accessing applications.


Larger OUTBUFF benefit 2: better log read performance

The importance of good Db2 log write performance should be obvious: Db2 is writing to its log all the time, so getting that work done quickly and efficiently is good for any Db2 data-changing application. Can log read performance be important for a Db2-accessing process? YES - and that's especially true for a Db2 data-change-replication process.

It's not unusual for Db2 for z/OS-managed data to be replicated to some other location for some purpose. The data replication tools, from IBM and other vendors, that capture Db2 for z/OS data changes and send them in near-real time to another location generally do their data change capture work by issuing requests to the log manager component of Db2 to retrieve data change information (this is done using a Db2 trace record, IFCID 306, that can be requested synchronously by a process such as a data replication tool). Especially when the volume of changes made to data in a replicated Db2 table is high, you REALLY want the Db2 log manager to be able to retrieve the requested data change information from the log output buffer in memory, versus having to read the information from the log data sets on disk. If the log output buffer is too small, you can see a high percentage of log read requests that require access to the log data sets on disk. The volume of such log data set read I/Os can be very high - like, thousands per second. That chews up CPU cycles and adds to data replication latency (this latency refers to the time between a change being made to data on the source Db2 for z/OS system and that change being reflected in the corresponding data at the replication target location) - both things you'd rather avoid.

How can you check on this? Again, go to a statistics long report generated by your Db2 monitor, and again go to the section under the heading, LOG ACTIVITY (or something similar to that). Check the fields labeled READS SATISFIED-OUTP.BUF(%) and READS SATISFIED-ACTV.LOG(%). What you want to see (what I'd certainly like to see) is a value for READS SATISFIED-OUTP.BUF(%) that is well north of 90, and - conversely - a value for READS SATISFIED-ACTV.LOG(%) that is in the single digits (ideally, low single digits). If you see a lower than desired value for the percentage of log reads satisfied from the log output buffer, make OUTBUFF larger if you can.


Can you make OUTBUFF larger, and if so, how high should you go?

The answer to the first part of that question (assuming that the value of OUTBUFF is not already at the maximum of 400000 KB) depends on the pressure (or lack thereof) on the real storage resource of the z/OS LPAR in which the Db2 subsystem of interest is running. My favorite indicator of real storage constraint is the LPAR's demand paging rate, available from a z/OS monitor-generated summary report. If the LPAR's demand paging rate is less than 1 per second, the real storage resource is not at all constrained, and you have (as far as I'm concerned) a green light for increasing the OUTBUFF value. If the LPAR's demand paging rate is over 1 per second, you might want to see if more memory can be configured for the system, or if some memory can be freed up (perhaps by shrinking a Db2 buffer pool that is larger than it needs to be), prior to making the value of OUTBUFF significantly larger than its existing value.

If the z/OS LPAR's real storage is not constrained (as described above), and you want to make a Db2 subsystem's OUTBUFF value larger, how high should you go? First of all, I would highly recommend setting OUTBUFF at least to the new-with-Db2-13 default value of 102400 KB. Should you go higher than that? Well, I would if the value of READS SATISFIED-OUTP.BUF(%) is less than 90. One thing to keep in mind here: OUTBUFF is not an online-updatable ZPARM. That means you have to recycle a Db2 subsystem (i.e., stop and restart it) in order to put a new OUTBUFF value into effect. In a Db2 data sharing system, that may not be a big deal (application work can continue to process on other members of the data sharing group as the one member is recycled), and even for some standalone Db2 subsystems there are regular opportunities to "bounce" the subsystem. On the other hand, at some sites where Db2 runs in standalone mode there are only a few times per year when a production Db2 subsystem can be recycled. If that's your situation, you might want to consider going to the maximum OUTBUFF value of 400000 KB (again, if the LPAR's memory is not constrained - and it's not if the LPAR's demand paging rate is less than 1 per second).
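
For reference, OUTBUFF is not changed with a command - it's set in the ZPARM assembly job (DSNTIJUZ). My understanding is that the parameter lives on the DSN6LOGP macro, so the relevant line would look something like the sketch below (verify against your own DSNTIJUZ job before changing anything); after reassembling the ZPARM module, the recycle of the Db2 subsystem puts the new value into effect.

         DSN6LOGP OUTBUFF=102400, ...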

And that's what I have to say about OUTBUFF. Check yours, and check the relevant information in Db2 monitor-generated accounting and statistics reports to see if an OUTBUFF increase would be good for your system.

Friday, April 28, 2023

Migrating to Db2 13 for z/OS When You Have Old (pre-11.1) Db2 Client Code on Your App Servers

Not long ago, I had a talk with a Db2 for z/OS systems programmer who works at a pretty big site. In a somewhat dramatized form, our conversation went something like this:

Me: "When are you guys going to migrate your production Db2 for z/OS systems to Db2 13?"

Db2 sysprog: "Later than I'd like."

Me: "Why's that?"

Db2 sysprog: "We have some old Db2 client code on some of our application servers."

Me: "So?"

Db2 sysprog: "So, I can't take APPLCOMPAT for our NULLID packages above V12R1M500."

Me: "No prob. Just leave the APPLCOMPAT value for the NULLID packages at V12R1M500, and go ahead and activate function level V12R1M510, and then migrate the systems to Db2 13."

Db2 sysprog: "I can do that?"

Me: "YES."

The very next week, I had a very similar exchange with another Db2 for z/OS administrator at a different site. It seems clear to me that there's some misunderstanding in this area out there, with people thinking that way-old Db2 client code represents a roadblock on the way from Db2 12 for z/OS to Db2 13. NOT TRUE, as I hope to make clear in this blog entry.


Terminology: "Db2 client code"

This term refers to the piece of IBM code that runs on a remote (from the Db2 for z/OS perspective) server and enables an application on that server to be a DRDA requester (DRDA is short for Distributed Relational Database Architecture - the protocol used for Db2 distributed database processing). A DRDA requester application is one that sends SQL statements to Db2 by way of a driver such as IBM's JDBC or ODBC driver. Most often, the Db2 client code is the IBM Data Server Driver Package (for which entitlement is related to an organization's license for IBM Db2 Connect). Sometimes, it's something like the IBM Db2 Connect Runtime Client. In any case, the Db2 client code is considered to be part of the Db2 for Linux/UNIX/Windows (LUW) product family, and it will have a version that corresponds to a Db2 for LUW version.


What is "old Db2 client code" in this context?

Short answer: any version prior to 11.1. Now, to explain that short answer: some would say (understandably) that "old code" means out-of-support code. The 11.1 version of Db2 client code is out of support (and has been since April of 2022 - see https://www.ibm.com/support/pages/db2-distributed-end-support-eos-dates). Why, then, do I refer to pre-11.1 Db2 client code as being "old," implying that 11.1 Db2 client code, though out of support, is not "old?" It all has to do with context, and the context in this case is a Db2 12 for z/OS system that is the DRDA server for DRDA requester applications.

APPLCOMPAT is a Db2 for z/OS package bind parameter that specifies the Db2 application compatibility level that will be in effect when the package is executed (for more information about APPLCOMPAT, see the part 1 and part 2 blog entries I posted on that topic a few years ago). With a few possible exceptions, every Db2 for z/OS package will have an APPLCOMPAT value, and that is true for the packages in the package collection called NULLID. NULLID is the "home" collection for the Db2 for z/OS packages that are executed when a DRDA requester application accesses the Db2 for z/OS system.

Here's the crux of the matter at hand: if the APPLCOMPAT value for the NULLID packages is taken above V12R1M500, DRDA requester applications will get an error when trying to connect to the Db2 for z/OS system if they are using pre-11.1 Db2 client code.


Why the preceding sentence does not amount to a Db2 13 migration roadblock

Before you can migrate to Db2 13 for z/OS from Db2 12, you have to activate Db2 12 function level V12R1M510 (the last of the Db2 12 function levels). What the Db2 for z/OS systems programmer I referenced at the beginning of this blog entry thought, and what apparently a number of other Db2 for z/OS people think, is that the APPLCOMPAT value for the NULLID packages (and maybe for other Db2 for z/OS packages, as well) has to be V12R1M510 before you can migrate a Db2 12 system to Db2 13. THAT IS NOT TRUE. Can you have, in a Db2 13 system, packages in the NULLID collection (and in other collections) that have an APPLCOMPAT value of V12R1M500? YES, YOU CAN. In fact, APPLCOMPAT values as low as V10R1 are supported in a Db2 13 environment. So, if your NULLID packages are at APPLCOMPAT(V12R1M500), and old (as defined above) Db2 client code is keeping you from upping that APPLCOMPAT value for your NULLID packages, leave the NULLID packages at APPLCOMPAT(V12R1M500), and activate function level V12R1M510 (when your Db2 code and catalog are at the right level), and then migrate the Db2 12 system to Db2 13. There is NOTHING about having NULLID packages at APPLCOMPAT(V12R1M500) that gets in the way of your doing this.


But what if you really want to take APPLCOMPAT for your NULLID packages to a higher level?

First, why might you want to do this? Best answer, I'd say: because you want developers of DRDA applications in your environment to be able to use the latest Db2 for z/OS SQL syntax and functionality (one particularly noteworthy example: the new built-in AI functions of Db2 13 for z/OS, part of that version's SQL Data Insights feature, which can be used via packages with an APPLCOMPAT value of V13R1M500 or higher). If there's pre-11.1 Db2 client code on some of your application servers, and you really want to take APPLCOMPAT higher than V12R1M500 for your NULLID packages (I would), you have a couple of options:

  • One option: update your Db2 client code. For many organizations, this would be the ideal approach. Get the Db2 client code to the current level, which is 11.5, and you get two benefits: 1) you're actually using Db2 client code that is supported by IBM (always nice), and 2) you can take APPLCOMPAT for your Db2 for z/OS NULLID packages as high as you want. Of course, updating the Db2 client code will likely require working in concert with the application server administrators in your organization who can perform the code update.
  • Another option: leave the old Db2 client code out there, and raise the APPLCOMPAT value for your NULLID packages anyway. This might be the required approach, at least in the near term, if your application server administrators are not presently able to help update old versions of Db2 client code within your IT infrastructure. How can you do this without causing connection errors for DRDA requester applications that are using old Db2 client code? You do that with the Db2 profile tables, together with an "alternate" collection for the IBM Data Server Driver / Db2 Connect packages, as explained below.

Creating (and, more importantly, using) an alternate collection for the IBM Data Server Driver / Db2 Connect packages

Step 1 for this approach is to create the alternate collection for the packages whose "home" collection is NULLID. This is pretty easy to do: just BIND COPY the packages in the NULLID collection into a collection with some other name (I'll go with OLD_COLL for this example), and in doing that specify APPLCOMPAT(V12R1M500). DRDA requester applications using pre-11.1 Db2 client code will not get connection errors when they use the packages in that OLD_COLL collection. Ah, but how do you get those applications to use the OLD_COLL collection when they will, by default, be looking to use packages in the NULLID collection (NULLID is the default Db2 for z/OS package collection for DRDA requester applications)? This is where the Db2 profile tables come in.
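
Before getting to the profile tables, though, here's a rough sketch of what one of those Step 1 BIND COPY subcommands could look like (SYSSH200 is just an example of the IBM driver packages found in NULLID - you would issue a BIND along these lines for each package in the collection):

BIND PACKAGE(OLD_COLL) COPY(NULLID.SYSSH200) -
  APPLCOMPAT(V12R1M500) ACTION(REPLACE)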

You can use SYSIBM.DSN_PROFILE_TABLE to identify a component of your DDF workload for which you want Db2 to take some action. The component of the DDF workload of interest here is the DRDA requester applications that are using pre-11.1 Db2 client code. How can you identify that DDF workload component as a profile? Easy: use the PRDID (short for product identifier) column of DSN_PROFILE_TABLE (see https://www.ibm.com/docs/en/db2-for-zos/12?topic=tables-dsn-profile-table). How do you know which product ID(s) to use? You can get that information via output of the Db2 command -DISPLAY LOCATION (see https://www.ibm.com/docs/en/db2-for-zos/12?topic=work-displaying-information-about-connections-other-locations). In the PRDID column of the command output, you'll see the product IDs associated with requesters, and there you'll see the version and release of the Db2 client code that a requester is using (see https://www.ibm.com/docs/en/db2-for-zos/12?topic=work-product-identifier-prdid-values-in-db2-zos). Using the PRDID information provided via -DISPLAY LOCATION, insert a row (or rows) into DSN_PROFILE_TABLE for the pre-11.1 Db2 client code that is used in your environment. Having done that, for that row (or rows) in DSN_PROFILE_TABLE, insert a corresponding row (or rows) into SYSIBM.DSN_PROFILE_ATTRIBUTES to tell Db2 what you want it to do when one of the DRDA requesters using pre-11.1 Db2 client code requests a connection to the Db2 for z/OS system. And what do you want Db2 to do? You want Db2 to issue SET CURRENT PACKAGE PATH = OLD_COLL (using my example name for the collection into which you BIND COPY-ed the NULLID packages with a specification of APPLCOMPAT(V12R1M500)). This will make OLD_COLL the default collection for the DRDA requester applications using pre-11.1 Db2 client code. Having done this, you can take APPLCOMPAT for the NULLID packages higher than V12R1M500, to the benefit of DRDA requester applications that are using 11.1-or-higher versions of the Db2 client code (note that the SET CURRENT PACKAGE PATH = OLD_COLL will happen at application connection time, so after doing the BIND COPY and profile table work you may need to have someone recycle the application servers on which old Db2 client code is running, so they'll get new connections to the Db2 for z/OS system and will be pointed to the OLD_COLL package collection). There is additional information on this approach in an entry I posted to this blog a few years ago (that entry concerns an alternate collection of IBM Data Server Driver / Db2 Connect packages used to get high-performance DBAT functionality, but the collection redirection technique is the same).
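
To make the profile table piece of this a bit more concrete, here's a minimal sketch of the two inserts (the PROFILEID value of 101 and the product ID 'SQL10057' are just illustrative examples - use a PROFILEID of your choosing and the PRDID value(s) you actually see in -DISPLAY LOCATION output):

INSERT INTO SYSIBM.DSN_PROFILE_TABLE
  (PROFILEID, PRDID, PROFILE_ENABLED, REMARKS)
  VALUES (101, 'SQL10057', 'Y', 'Requesters using pre-11.1 Db2 client code');

INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
  (PROFILEID, KEYWORDS, ATTRIBUTE1)
  VALUES (101, 'SPECIAL_REGISTER', 'SET CURRENT PACKAGE PATH = OLD_COLL');

Keep in mind that profile table rows take effect when profiles are started - that's done with the Db2 command -START PROFILE (and you'd issue -START PROFILE again to pick up subsequent changes to the profile table rows).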

And there you have it. I hope you don't have old Db2 client code on your application servers, but if you do, don't worry about that being an impediment to getting to Db2 13, because it isn't.

Thursday, March 30, 2023

Db2 for z/OS: If Index FTB Functionality is Disabled at Your Site, It's Time to Reconsider That

Over the course of the past three weeks, I reviewed ZPARM settings (i.e., configuration parameter values) for three different production Db2 12 for z/OS environments at three different sites, and I noticed that index FTB (fast traverse block) functionality had been disabled in all three cases. I recommended to all three associated Db2-using organizations that they change the relevant ZPARM setting to re-enable FTB functionality, after first validating that the fixes for a set of related Db2 APARs have been applied to their Db2 12 code (the changes made by the fixes are part of the base Db2 13 code). My recommendation for you, if the FTB feature of Db2 has been "turned off" at your site, is to do the same: turn it on, after doing the aforementioned check of Db2 software maintenance if you're using Db2 12. In this blog entry, I'll explain what FTB functionality is, why it was disabled at some sites, and why it's time to go from "disabled" to "enabled" where feature deactivation has happened. I'll also provide information about the fixes (PTFs) that should be on your system to ensure the robust functioning of FTB-supporting Db2 code (again, if we're talking about Db2 12 - the base Db2 13 code has the solidified FTB functionality provided by the Db2 12 fixes).


The FTB raison d'être: efficient use of non-leaf information in Db2 indexes

Db2 for z/OS indexes serve various purposes, such as assisting with maintenance of a desired ordering of rows in a table and ensuring uniqueness of key values for which duplicates would be problematic, but for the most part indexes in a Db2 system are there to speed the execution of queries (and of non-query SQL statements that contain predicates, aka search clauses). Indexes deliver this performance boost by enabling identification of query result set rows without a laborious row-by-row examination of values. It can be said that Db2 indexes provide shortcuts that get you to your destination (a query's result set) faster than would otherwise be possible.

The information in a Db2 index is arranged in what is known as a B-tree structure. The logical representation of this structure has something of the appearance of an upside-down tree: you have the root page at the top, and the leaf pages at the bottom. In-between the root page and the leaf pages of an index (unless the underlying Db2 table is quite small), you will have one or more levels of non-leaf pages. Finding a row in a table by way of an index on the table involves what is known as an index probe operation: Db2 starts at the root page and navigates down through the other non-leaf levels to reach the leaf page that contains the searched-for key value and the ID of the row (i.e., the row ID, or RID in Db2 parlance) or rows in which the key value can be found.

OK, so what is the value of index fast traverse blocks? Well, an index probe involves GETPAGE activity. A GETPAGE is a Db2 request to examine the contents of a page in an index or a table space (when the page in question is not already in a Db2 buffer pool in memory, the GETPAGE drives a read I/O request). The more rows a table has, the more levels an index on the table can have. More index levels means more GETPAGE activity associated with use of the index, and that matters because GETPAGE activity is one of the main determinants of the CPU cost of executing a query. Index fast traverse block functionality, introduced by Db2 12 for z/OS, improves CPU efficiency for query execution by reducing index-related GETPAGE activity.

An FTB reduces index GETPAGE activity by providing Db2 with a way to get to the leaf page of an index in which a query-predicate-matching key value is found without having to perform a top-to-bottom index probe. How that works: when Db2 builds an FTB structure in memory that is based on a given index, Db2 puts in that FTB structure the information in the non-leaf pages of the index (note that this is NOT just a matter of caching the index's non-leaf pages in memory - the FTB structure has a space requirement that is considerably smaller than what would be required to cache the index's non-leaf pages in an as-is manner); furthermore, navigation through an FTB structure does not require GETPAGE activity. Yes, FTB navigation does involve some instruction path length, but less than would be needed for the index GETPAGEs that would otherwise be required to get to a target leaf page. Let's say that an index on a large table has five levels. Retrieving a table row via the index will require six GETPAGEs - five for the index and one for the table space. If, on the other hand, Db2 has built an FTB structure from the index, then when a query with a predicate that matches on the index's key is executed, Db2 can go to the FTB structure with the key value referenced in the predicate, and the FTB will tell Db2, "This is the leaf page in which you'll find that key value." Db2 then does one GETPAGE to examine that leaf page's contents, finds the key value and the associated RID, and does one more GETPAGE to access the row in the table space. Thanks to the FTB, we've gone from six GETPAGEs (five for the index and one for the table space) to two GETPAGEs (one for the index leaf page, one for the table space). Pretty good.


How is FTB functionality turned off, and why would anyone do that?

The FTB "on/off switch" is the ZPARM parameter INDEX_MEMORY_CONTROL. The default value for that parameter is AUTO. When INDEX_MEMORY_CONTROL is set to AUTO, Db2 notes the size of the subsystem's buffer pool configuration (i.e., the aggregate size of the subsystem's buffer pools) and says (figuratively speaking), "OK, I can create FTB structures from indexes, and the maximum amount of in-memory space I'll use for those FTB structures is equivalent to 20% of the size of the buffer pool configuration." Note that this is not space taken away from the buffer pools - it's net additional use of the z/OS LPAR's real storage by Db2. Consider an example: Db2 subsystem DB2P has 50 GB of buffer pools. If INDEX_MEMORY_CONTROL for DB2P is set to AUTO, DB2P can use up to 10 GB (20% times 50 GB) of memory for index FTBs. The size of the DB2P buffer pool configuration is not affected by FTBs - it remains at 50 GB. Got it?

Besides AUTO, another acceptable value for INDEX_MEMORY_CONTROL is an integer between 10 and 200,000. That would set the FTB memory usage limit in terms of megabytes. Using the previous example, if the z/OS LPAR in which subsystem DB2P is running is generously configured with memory, the organization might decide to set INDEX_MEMORY_CONTROL to 20000 if they want Db2 to be able to use up to about 20 GB of memory for index FTBs, versus the 10 GB limit established via the AUTO setting (20% of the 50 GB size of the buffer pool configuration assumed for the example). If, on the other hand, the z/OS LPAR's memory resource is quite limited, the organization might opt to set INDEX_MEMORY_CONTROL to 1000, to restrict DB2P's use of memory for index FTBs to about 1 GB (I say, "about," because 1 GB is actually 1024 MB).

INDEX_MEMORY_CONTROL can also be set to DISABLE. That has the effect of turning FTB functionality off. Why would someone disable a CPU efficiency-boosting Db2 feature? Well, relatively early on in the lifecycle of Db2 12 for z/OS (which became generally available in October of 2016), a few sites encountered some issues related to index FTB functionality. In some cases, use of an FTB was seen to cause a query to return incorrect output. These situations were pretty uncommon (recall that index FTB functionality is on by default, and most Db2 12 sites with INDEX_MEMORY_CONTROL set to AUTO encountered no problems in leveraging the technology), but they were real. Some organizations heard that other organizations had had some problems related to FTB usage, so they disabled the feature as a preemptive measure. I get it.


Why using FTB functionality makes sense now

In response to the FTB-related issues mentioned above, the IBM Db2 for z/OS development team created a number of code fixes that addressed the problems reported by Db2-using organizations. These fixes and their associated APARs (an APAR is an official description of a software problem for which IBM commits to providing corrective service) are noted in a blog entry, written by members of the Db2 development organization, that can be viewed at https://community.ibm.com/community/user/datamanagement/blogs/paul-mcwilliams1/2020/10/08/new-look-ftb-db2-12. If INDEX_MEMORY_CONTROL is set to DISABLE at your site, and if you are using Db2 12 for z/OS, check to see if the PTFs listed in this blog entry have been applied to your Db2 code. If they have been applied (or if you are using Db2 13), you can use index FTB functionality with confidence. If you are using Db2 12 and the fixes have not been applied in your environment, my recommendation is to get them applied, perhaps as part of a roll-out of a new and more-current level of z/OS software maintenance at your site.

The confidence that the IBM Db2 for z/OS development team has in FTB functionality, with the corrective maintenance applied, is evidenced by a couple of things. First, Db2 12 function level 508 extended FTB functionality to non-unique indexes (it had originally been limited to unique indexes). Second, Db2 13 for z/OS makes FTB functionality available for a larger set of indexes by doubling the key-length limit for FTB-qualifying indexes - from 64 bytes to 128 bytes for unique indexes, and from 56 bytes to 120 bytes for non-unique indexes (as previously mentioned, the code corrections made for Db2 12 by the FTB-related fixes listed in the above-referenced blog entry are part of the Db2 13 base code). The Db2 development team would not have made FTB functionality available for a wider range of indexes if they were anything less than highly confident in the quality of the FTB-supporting code.

Note that if you have INDEX_MEMORY_CONTROL set to DISABLE, and you're interested in turning FTB functionality on but would like to do so in a more-controlled and more-limited way before going to a setting of AUTO, that option is available to you. As noted in the blog entry for which I provided the link, above, and in the Db2 12 and Db2 13 online documentation, you can tell Db2, via a specification of (SELECTED, AUTO) or (SELECTED, n) for INDEX_MEMORY_CONTROL (where n would be a user-designated limit, in MB, on the memory that Db2 can use for FTB structures), that FTB structures can be built only for indexes that you have identified as FTB candidates by way of the SYSINDEXCONTROL table in the Db2 catalog.
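
One more FTB-related item that can be helpful: Db2 12 introduced a command that shows the indexes for which FTB structures currently exist, along with the associated memory usage - handy for verifying that FTBs are being built as you expect once the functionality is enabled. A quick sketch of that command (INDEXMEMORYUSAGE can be abbreviated as IMU):

-DISPLAY STATS(INDEXMEMORYUSAGE)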

In summary, if you have the FTB-solidifying fixes applied in your Db2 12 environment, or if you are running with Db2 13, and you have INDEX_MEMORY_CONTROL set to DISABLE, you should rethink that. The current FTB code is very robust, and if you don't leverage the functionality then you're leaving CPU savings on the table. I'd prefer to see you realize those CPU savings.

Thursday, February 23, 2023

Two RMF (z/OS Monitor) Reports with which Db2 for z/OS People Should be Familiar

When it comes to analyzing the performance of a Db2 for z/OS system and associated applications, I think of a set of concentric circles. The outermost circle represents the operational environment in which Db2 is processing work - that would be the z/OS LPAR (logical partition) in which the Db2 subsystem runs. The next circle within that outer one represents the Db2 subsystem itself - its buffer pool configuration, EDM pool, RID pool, lock manager, recovery log, etc. The innermost of these concentric circles relates to the applications that access Db2-managed data. With the three circles in mind, I take an "outside-in" approach to Db2 system and application tuning. In other words, I begin with a look at the z/OS system within which Db2 is running, then I turn to the Db2 subsystem itself and lastly I analyze application-centric information. The reason for this approach? If the z/OS system in which Db2 is running is constrained in some way, there's a good chance that Db2 subsystem and application tuning actions will yield little positive impact. Similarly, if the Db2 subsystem is operating in a constrained fashion then application tuning actions may not do much good.

So, if assessing the operation of a z/OS system is important prior to turning to Db2 subsystem and/or application performance analysis, how do you determine whether the z/OS LPAR in question is running in a constrained or an unconstrained way? I do this based on examination of information in two RMF reports: the Summary report and the CPU Activity report. If you support a Db2 for z/OS system, you should be familiar with the content of these reports - in particular, some key metrics provided by the reports. In this blog entry I'll point out those key metrics and explain how I use them.

[Note: I am referring to reports generated by IBM's RMF z/OS monitor because RMF is the z/OS monitor with which I am most familiar. If your organization uses another vendor's z/OS monitor, that monitor might be able to generate reports similar to the RMF reports that are the subject of this blog entry - if need be, check with the vendor on that.]


The RMF CPU Activity report

RMF, by default (you can change this), carves the time period covered by a CPU Activity report into 15-minute intervals (so, if you requested a report for a one-hour time period you will see within that report four sub-reports, each providing information for a 15-minute part of the overall one-hour time period). Within a given 15-minute interval you will see, for the z/OS LPAR for which the report was requested, information that looks something like this (I highlighted two important values in red):


---CPU---    ---------------- TIME % ----------------
NUM  TYPE    ONLINE    LPAR BUSY    MVS BUSY   PARKED
 0    CP     100.00    87.03        86.85        0.00
 1    CP     100.00    77.76        77.68        0.00
 2    CP     100.00    83.88        83.78        0.00
 3    CP     100.00    87.07        86.91        0.00
 4    CP     100.00    76.23        76.14        0.00
 5    CP     100.00    76.79        76.71        0.00
 6    CP     100.00    80.45        80.35        0.00
 7    CP     100.00    73.29        73.24        0.00
 8    CP     100.00    63.83        69.22        0.00
 9    CP     100.00    57.78        62.95        0.00
 A    CP     100.00    35.28        48.33       17.01
TOTAL/AVERAGE          72.67        75.16
12    IIP    100.00    66.63        58.68        0.00
                                    46.30        0.00
13    IIP    100.00    26.70        23.42        0.00
                                    18.24        0.00
14    IIP    100.00     9.21         8.07        0.00
                                     6.42        0.00
3E    IIP    100.00     0.00        -----      100.00
                                    -----      100.00
TOTAL/AVERAGE          25.64        26.86


Here is an explanation of what you see in the report snippet above:
  • NUM - This is the ID of a given "engine" (processor core) configured for the LPAR.
  • TYPE - CP is short for central processor (typically referred to as a "general-purpose engine"); IIP is short for integrated information processor (typically referred to as a "zIIP engine").
  • LPAR BUSY - Engine utilization from the LPAR perspective.
  • MVS BUSY - I think of this as the busy-ness of the physical engine - if the engine is used exclusively (or almost exclusively) by the LPAR in question, the LPAR busy and MVS busy numbers should be very similar.
  • PARKED - This is the extent to which an engine's capacity is NOT available to the LPAR during the 15-minute interval (so, if the engine is seen to be 75% parked then the LPAR has access to 25% of that engine's processing capacity). When engines in a mainframe "box" (sometimes called a CEC - short for central electronic complex) are shared between several LPARs, it's not unusual to see a non-zero parked value for at least some of an LPAR's engines.
Note that for this LPAR, there are two MVS BUSY values for each zIIP engine. Why is that? Well, it indicates that the zIIP engines are running in SMT2 mode. SMT2 is short for simultaneous multi-threading 2, with the "2" meaning that z/OS can dispatch two pieces of work simultaneously to the one zIIP core. Running a zIIP engine in SMT2 mode does not double the engine's capacity (each of the two pieces of work dispatched to the one zIIP core will not run as fast as would be the case if the zIIP engine were running in "uni-thread" mode), but for a transactional workload SMT2 can enable a zIIP engine to deliver around 25-40% more throughput versus uni-thread mode (think of a one-way, single-lane road with a speed limit of 60 miles per hour versus a one-way, 2-lane road with a speed limit of 45 miles per hour - the latter will get more cars from A to B in a given time period if there's enough traffic to take advantage of the two lanes). For more information on SMT2 mode for zIIPs, see the entry on that topic that I posted to this blog a few years ago.

OK, to the values highlighted in red in the report snippet:
  • TOTAL/AVERAGE MVS BUSY for the general-purpose engines (75.16 in the report snippet) - As a general rule, application performance - especially for transactional applications (e.g., CICS-Db2, IMS TM-Db2, Db2 DDF) - will be optimal if average MVS busy for an LPAR's general-purpose engines does not exceed 80%. When average MVS busy for the general-purpose engines exceeds 90%, you can see a negative impact on the performance of Db2-accessing applications in the form of what is labeled "not accounted for" time in a Db2 monitor-generated accounting long report or an online monitor display of Db2 thread detail information. Not-accounted-for time is in-Db2 elapsed time that is not CPU time and not one of the "known" Db2 wait times (those are so-called class 3 wait times, such as wait for database synchronous read, wait for lock/latch, wait for update/commit processing, etc.). It's literally elapsed time, related to SQL statement execution, for which Db2 cannot account. In my experience, in-Db2 not-accounted-for time is most often a reflection of wait-for-dispatch time, which itself is indicative of CPU contention. I'm generally not too concerned about not-accounted-for time as long as it's less than 10% of in-Db2 elapsed time for an application workload - particularly when it's a higher-priority transactional workload (you might tolerate a higher percentage of not-accounted-for time for a lower-priority batch workload). If not-accounted-for time exceeds 10% of in-Db2 elapsed time (again, especially for a higher-priority transactional workload), that would be a matter of concern for me, indicating that CPU contention is negatively impacting application throughput.
  • TOTAL/AVERAGE MVS BUSY for the zIIP engines (26.86 in the report snippet) - How "hot" can you run zIIP engines before zIIP engine contention becomes a concern? That depends on how many zIIP engines the LPAR has (and, to a lesser extent, whether or not the zIIPs are running in SMT2 mode). The more zIIP engines an LPAR has, the higher the average MVS busy figure for those engines can go before zIIP contention becomes an issue (in the example shown above, the LPAR has three zIIP engines that are running in SMT2 mode - in such a situation average MVS busy for the zIIP engines could probably go to 40-50% without zIIP contention becoming an issue). And when does zIIP contention become an issue? When the zIIP spill-over percentage gets too high, as explained in an entry I posted a few years ago to this blog. [Note: the report snippet shows four zIIP engines, but the fourth - the one identified as processor number 3E - is 100% parked from the LPAR's perspective. That means the LPAR had no access to zIIP processor 3E's capacity, so in effect the LPAR had three zIIP engines during the time interval.]
Below the information shown in the report snippet above, you'll see a sideways bar chart that looks something like this (again, I've highlighted some key information in red):


-----------------------DISTRIBUTION OF IN-READY WORK UNIT QUEUE-
 NUMBER OF              0    10   20   30   40   50   60   70   
 WORK UNITS     (%)     |....|....|....|....|....|....|....|....

<=  N          55.9     >>>>>>>>>>>>>>>>>>>>>>>>>>>>
 =  N +   1     3.5     >>
 =  N +   2     3.1     >>
 =  N +   3     3.5     >>
<=  N +   5     5.5     >>>
<=  N +  10    10.9     >>>>>>
<=  N +  15     5.7     >>>
<=  N +  20     4.2     >>>
<=  N +  30     3.1     >>
<=  N +  40     1.5     >
<=  N +  60     1.3     >
<=  N +  80     0.4     >
<=  N + 100     0.2     >
<=  N + 120     0.1     >
<=  N + 150     0.2     >
>   N + 150     0.2     >

N = NUMBER OF PROCESSORS ONLINE UNPARKED (16.8 ON AVG)


With regard to the report snippet above, the first thing to which I want to draw your attention is the bottom-line information about "N". We see that, for this LPAR during this 15-minute interval, N = 16.8. What does that mean? It means that the LPAR had "16.8 processor targets to which pieces of work could be dispatched." Why do I use the phrase "processor targets" instead of "processors?" It's because we tend to think of "mainframe processors" as meaning "mainframe engines," and that's not quite the case here. This report snippet goes with the first one we looked at (the second snippet appears right after the first one in the source RMF CPU Activity report), and you might recall that the first snippet showed that the LPAR's three zIIP engines are running in SMT2 mode. For that reason, those three zIIP engines are counted as six processor targets to which pieces of work can be dispatched. Thus, when the report shows that N = 16.8, we can say that 6 of the 16.8 relate to the LPAR's zIIP engines. That leaves 10.8 (16.8 - 6 = 10.8). We've accounted for the zIIP engines, so the 10.8 number relates to general-purpose engines. Where does that 10.8 come from? Refer again to the first report snippet. You'll see that the LPAR had 10 general-purpose processors that were not at all parked (i.e. that were 0% parked from the LPAR's perspective). An 11th general-purpose engine, identified as processor number A, was 17.01% parked during the interval. That means that 83% of the capacity of general-purpose processor number A (that's a hexadecimal A) was available to the LPAR during the time interval. That 83% is equivalent to 0.83, and RMF rounds 0.83 down to 0.8, and that's where the ".8" of N = 16.8 comes from. So, then, the LPAR had 6 zIIP "targets" to which work could be dispatched (3 engines, each running in SMT2 mode), and 10.8 general-purpose targets to which work could be dispatched, and that's why we have N = 16.8.

With N now understood, we can turn our attention to the other bit of information I highlighted in red: <= N 55.9. What does that mean? It means that for 55.9% of the time in the 15-minute report interval, the number of "in and ready" tasks (i.e., the number of tasks ready for dispatch) was less than or equal to the number of processor targets to which pieces of work in the LPAR could be dispatched. When that is true - when the number of in-and-ready tasks is <= N - there is essentially nothing in the way of CPU constraint, because an in-and-ready task won't have to wait in line to get dispatched to a processor. In my experience, when the <= N figure is above 80%, the LPAR is very unconstrained in terms of processing capacity. A figure between 50% and 80% is indicative of what I'd call moderate CPU constraint, and performance (particularly in terms of throughput) is likely not impacted much by a lack of processing capacity. When the figure is below 50%, I'd say that CPU constraint could be impacting throughput in a noticeable way, and if it's below 10% the performance impact of CPU constraint for the LPAR could be severe. As previously mentioned, the Db2 performance impact of a CPU-constrained system is typically apparent in elevated levels of in-Db2 not-accounted-for time, as seen in a Db2 monitor-generated accounting long report or an online monitor display of Db2 thread detail information.

One more thing about an RMF CPU Activity report: the number of engines configured for an LPAR - something that is shown in the report - should be balanced by an adequate amount of memory (aka real storage) so that the LPAR's processing power can be fully exploited to maximize application performance. For a z/OS LPAR in which a production Db2 workload runs, my rule of thumb, based on years of analyzing system and application performance data, is this: the LPAR should have at least 20 GB of memory per engine with which it is configured. The first report snippet included above shows that the LPAR has 13.8 engines: 10.8 general-purpose engines (as previously mentioned, the ".8" relates to an engine that is about 20% parked from the LPAR's perspective) and 3 zIIP engines (and for balanced-configuration purposes, I count physical zIIP cores - I don't double-count a zIIP engine because it is running in SMT2 mode). I'd round the 13.8 to 14 (the nearest integer) and say that on that basis the LPAR should have at least 14 X 20 GB = 280 GB of memory. If that seems like a lot to you, it shouldn't - mainframe memory sizes are getting bigger all the time, and real storage resources in the hundreds of GB are no longer unusual for production z/OS LPARs, especially those in which Db2 workloads run (the biggest real storage size I've personally seen for a z/OS LPAR is about 1100 GB).


The RMF Summary report

An RMF Summary report is smaller than a CPU Activity report - typically, one line of information for each 15-minute interval within the report time period. An RMF Summary report for a one-hour period could look something like what you see below (I removed some columns so that I could use a big-enough-to-read font size - the really important column is the one on the far right, with the numbers highlighted in green):

 
NUMBER OF INTERVALS 4      TOTAL LENGTH OF INTERVALS 00.59.58
-DATE   TIME     INT  ... JOB  JOB  TSO  TSO  STC ... SWAP DEMAND
 MM/DD HH.MM.SS MM.SS ... MAX  AVE  MAX  AVE  MAX ... RATE PAGING
 11/03 09.15.00 15.00 ...  83   72   96   92  371 ... 0.00   0.00
 11/03 09.30.00 14.59 ...  85   68   98   95  369 ... 0.00   0.00
 11/03 09.45.00 15.00 ...  75   68   95   92  363 ... 0.00   0.00
 11/03 10.00.00 14.59 ...  82   70   94   91  365 ... 0.00   0.00
-TOTAL/AVERAGE        ...  85   69   98   93  371 ... 0.00   0.00


So, what's the LPAR's demand paging rate? It's the rate, per second, at which pages that had been moved by z/OS from real to auxiliary storage (to make room for other pages that needed to be brought into real storage) were brought back into real storage on-demand (i.e., because some process needs to access the page). Why is the demand paging rate important? Here's why: it is, in my opinion, the best indicator of whether or not memory usage can be expanded without putting too much pressure on the LPAR's real storage resource. Here's what I mean by that: suppose you have a Db2 buffer pool that has a total read I/O rate (synchronous reads + sequential prefetch reads + list prefetch reads + dynamic prefetch reads, per second) that's higher than you'd like - maybe the total read I/O rate for the pool is north of 1000 per second, and you want to bring that down substantially to boost application performance and CPU efficiency (every read I/O eliminated saves CPU and elapsed time). The best way to lower a buffer pool's total read I/O rate is to make the pool larger. Can you do that without putting too much pressure on the LPAR's real storage resource? Here's what I'd say: If the LPAR's demand paging rate is consistently less than 1 per second, you have a green light for using more memory to get a performance boost. If the LPAR's demand paging rate is consistently greater than 1 per second, I'd hold off on using more memory until the LPAR is configured with additional real storage. This goes for any action that would increase memory usage by Db2 - besides enlarging a buffer pool, that could be a RID pool or a sort pool or an EDM pool size increase, or increasing the use of RELEASE(DEALLOCATE) packages with threads that persist through commits, or whatever. Before doing something that will increase memory usage, check the LPAR's demand paging rate.
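
And if the demand paging rate gives you that green light, acting on it is straightforward - enlarging a buffer pool is a single command. A quick sketch, with a made-up pool name and size (VPSIZE is expressed in buffers, so 400000 buffers for a pool with 4 KB pages is roughly 1.6 GB):

-ALTER BUFFERPOOL(BP8) VPSIZE(400000)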

That's it for now. If you haven't had a look at these two RMF reports before, get them for an LPAR of interest to you and give them a look-see - a z/OS systems programmer at your site would probably be happy to generate the reports for you. Knowing the key utilization and configuration characteristics of the z/OS LPAR in which a Db2 subsystem runs is an important part of effective performance management of the Db2 environment.