Thursday, August 31, 2023

Db2 for z/OS: An Important Difference Between Data in Memory and Data on Disk

For the past several years, I've tried to post an entry per month to this blog. Sometimes it will be very near the end of a month, I haven't posted anything since the previous month, and I don't have any good ideas for a new entry. Then I'll have an exchange with someone - could be via email, a phone call, a face-to-face discussion, whatever - and BOOM! Right there I'll find the inspiration for a blog entry. That happened again yesterday - the day before the last day of the month. Whew!

Here's what happened: in an email message, an IT professional recounted to me a situation that had her perplexed. The scenario: 11,000 rows of data were loaded into a table using the IBM Db2 for z/OS LOAD utility. Immediately after the completion of that load job, a program that updated 8 of those just-loaded rows executed and ran successfully to completion. Right after that, an unload job for the table in question was executed. This unload was performed using an IBM software product called Db2 High Performance Unload for z/OS, or HPU, for short (see https://www.ibm.com/docs/en/dhpufz/5.1.0?topic=documentation-db2-high-performance-unload-overview). HPU has two modes of operation: it can perform an unload by operating directly on the VSAM data set(s) associated with the target table, or it can do the unload through Db2, in which case the data is accessed in memory (i.e., in the buffer pool to which the table's table space is assigned). This unload was done in the former of these modes - operating directly on the VSAM data set(s) associated with the table's table space.

The result of the unload surprised the person who emailed me. How so? Well, the unload was done using a predicate (the kind of WHERE clause you might see in a query), and the update program that ran between the load (of the 11,000 rows) and the unload changed values in a way that should have caused 8 of the 11,000 loaded rows to be filtered out by the unload process's predicate (the other 10,992 rows would be qualified by the predicate). The person who emailed me expected 10,992 records in the unload data set, but there were in fact 11,000 records in that data set. The updates that should have caused 8 rows to be disqualified by the unload process's predicate were committed before the unload job ran, so why was this update action not reflected in the contents of the unload data set? Consternation increased when another unload of the table, executed a few hours later (again, with the unload process using a predicate and operating directly on the table's associated VSAM data set(s)), generated an unload data set that did contain the expected 10,992 records.
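To make the sequence of events concrete, here's a rough sketch of the three steps, using made-up object names (MYDB.MYTS for the table space, MYSCHEMA.MYTAB for the table) and an illustrative predicate (the STATUS and ACCT_ID columns are likewise invented). Exact control statement syntax - especially on the HPU side - varies by product version, so take this as an outline rather than ready-to-run input:

   -- Step 1: Db2 LOAD utility puts 11,000 rows into the table
   LOAD DATA REPLACE INTO TABLE MYSCHEMA.MYTAB

   -- Step 2: a program updates 8 of the just-loaded rows and commits
   UPDATE MYSCHEMA.MYTAB SET STATUS = 'CLOSED' WHERE ACCT_ID BETWEEN 1 AND 8;
   COMMIT;

   -- Step 3: HPU unload, directly from the VSAM data set(s), with a predicate
   -- that should disqualify those 8 rows
   UNLOAD TABLESPACE MYDB.MYTS
     SELECT * FROM MYSCHEMA.MYTAB
      WHERE STATUS <> 'CLOSED';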

What in the world was going on here?

Here's what was going on: this all has to do with a big difference between a committed data change (which has relevance for Db2 data-in-memory) and an externalized data change (which relates to Db2 data-on-disk). What's important to know is that Db2 for z/OS does not externalize data changes (i.e., does not write changed data to the associated VSAM data set) as part of commit processing. Instead, database write I/O operations (to externalize data changes to VSAM data sets on disk) are done in a deferred way (and usually asynchronously, at that). This aspect of Db2's operation is critically important to scalability when it comes to data-change operations (e.g., INSERT, UPDATE and DELETE): if Db2 had to write changed pages to disk at commit time, data-change throughput would take a major hit.

In the scenario described above, the first unload generated by HPU (done right after the programmatic update of 8 of the 11,000 rows previously LOAD-ed into the table), operating directly on the table space's underlying VSAM data set(s), did not reflect the post-LOAD update of the 8 rows because the page(s) changed by the updating program had not been written to the underlying VSAM data set(s) at commit time. The changed page(s) were externalized later by Db2 via deferred write processing, and that is why the second unload process, also operating directly on the table space's VSAM data set(s), reflected the aforementioned updates of 8 of the 11,000 table rows.

If Db2 deferred write action did eventually get the changed pages (those associated with the updating of 8 of the table's rows) written to the associated VSAM data sets on disk - and it did - then what caused that deferred write action to happen? Usually, Db2 deferred write operations for a given buffer pool are driven by one of two deferred write thresholds being reached for the pool. The deferred write queue threshold (abbreviated as DWQT) is expressed as a percentage of the total number of buffers allocated for a pool that are occupied by changed-but-not-externalized pages (the default value is 30), and the vertical deferred write queue threshold (VDWQT) is expressed as a percentage of the pool's buffers that are occupied by changed-but-not-externalized pages belonging to a particular data set (the default value is 5). Whenever either of those limits is reached (and it's usually the VDWQT limit), deferred write activity is triggered. The deferred write I/Os, by the way, are generally multi-page in nature (multiple pages written to disk in one I/O operation), and that is good for CPU efficiency on a per-page basis. The CPU time associated with database writes (usually not much - I/Os don't require many CPU cycles) is charged to the Db2 database services address space (aka the DBM1 address space).
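If you want to check or adjust these thresholds for a given pool, Db2 commands will do it. A quick sketch, with BP2 as a stand-in buffer pool name (note that VDWQT also accepts an optional second value - an absolute number of buffers - that I've omitted here):

   -DISPLAY BUFFERPOOL(BP2) DETAIL
   -ALTER BUFFERPOOL(BP2) DWQT(30) VDWQT(5)

The DETAIL form of DISPLAY BUFFERPOOL shows, among other things, how many times each of the deferred write thresholds has been reached - useful information if you're thinking about tuning those values.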

What about synchronous database write actions? Are they also deferred? Yes, they are - they're just triggered by something besides a deferred write queue threshold being reached. In the case of synchronous writes, the trigger is Db2 checkpoint processing. Here's how that works: when Db2 executes a system checkpoint (which it does, by default, every 3 minutes), it notes all pages in each buffer pool that are in changed-but-not-yet-externalized status. When the next system checkpoint rolls around, Db2 checks to see whether any of the changed-but-not-yet-externalized pages noted at the last checkpoint have still not been written to disk. If there are any such pages, they will be synchronously written to disk as part of checkpoint processing. Here, "synchronous" means that Db2 will immediately start writing those pages to disk, and it will continue doing so until they are all externalized.
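You can see the checkpoint frequency currently in effect, and change it dynamically, with Db2 commands. A sketch (CHKTIME sets a time-based interval in minutes; there is also a log-record-based alternative, LOGLOAD):

   -DISPLAY LOG
   -SET LOG CHKTIME(3)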

OK, back to the story that prompted this blog entry. Is there a way that the initial HPU unload (the one executed very shortly after the programmatic update of 8 of the 11,000 rows LOAD-ed into the table) could have generated an unload data set with the desired 10,992 rows? Yes. In fact, there were at least two options for getting that done. One option would be to execute the Db2 QUIESCE utility for the table's table space prior to running the HPU unload. This would cause Db2 to write all changed-but-not-yet-externalized pages of the table's table space to disk, and then an HPU unload operating directly on the table space's VSAM data sets would have reflected the update of the 8 rows.
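A sketch of that first option, again using made-up names (WRITE YES - the QUIESCE option that tells Db2 to externalize the changed pages - is the utility's default, so it's spelled out here just for clarity):

   QUIESCE TABLESPACE MYDB.MYTS WRITE YES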

The second option would be to have HPU do the unload through Db2, as opposed to operating directly on the table space's underlying VSAM data sets - this is something that can be done via an HPU keyword. That, in turn, would cause the HPU unload to be accomplished using data in memory (i.e., in the table space's assigned buffer pool) - any to-be-unloaded pages not already in memory would be read into memory as part of the unload process. This approach would have reflected the programmatic update of the 8 rows because those updates had been committed, and Db2 data-in-memory is always in a transactionally consistent state (any in-memory data that is not transactionally consistent because of an in-flight - that is, not-yet-completed - data change operation is blocked from access by X-type locks, taken at a page or row level, that are not released until the data changes in question are committed).
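A sketch of that second option - if I'm remembering HPU's syntax correctly, the relevant keyword is DB2, with a value of FORCE telling HPU to perform the unload through Db2 (check the HPU documentation for your product version to confirm):

   UNLOAD TABLESPACE MYDB.MYTS
     DB2 FORCE
     SELECT * FROM MYSCHEMA.MYTAB
      WHERE STATUS <> 'CLOSED';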

Which of these options would you choose? That would depend on what is most important to you. The QUIESCE option would allow the HPU unload to operate directly on the VSAM data set(s) associated with the table space, and that would yield a CPU efficiency benefit, but the QUIESCE itself could be at least somewhat disruptive for applications accessing the target table. The "through Db2" option would not disrupt any application processes accessing the table at the time, but it would cause the HPU unload operation to consume some additional CPU time.

By the way, you may be wondering: "If committed data changes are written to disk in a deferred way, how is loss of committed data changes prevented in the event of an abnormal termination (i.e., a 'crash') of the Db2 subsystem that happens when there are changed-but-not-yet-externalized pages in memory?" Worry not - data recorded in the Db2 transaction log is used to process those "pending writes" as part of the "roll-forward" phase of Db2 restart processing following a subsystem failure.

One more thing: the mechanics of all this are different in a Db2 data sharing environment (involving group buffer pool writes and associated castout operations to eventually get changed pages written to VSAM data sets on disk), but the net effect is the same.

And there you have it. I'm thankful for your visiting this blog, and I'm thankful for interesting questions that come in when I'm trying to figure out what I'm going to blog about.

4 comments:

  1. Thanks, Robert. Things are always explained so simply and concisely. I can read your blogs once and gain an understanding, as opposed to reading something on the same subject multiple times and still not knowing what it's on about. Thank you, man!

  2. I appreciate the kind words.

    Robert

  3. I'm a developer working with Db2 applications, and I wasn't aware of what you described until now. Great explanation, Rob.
