Wednesday, December 29, 2021

Db2 for z/OS Data Sharing: Is Your Lock Structure the Right Size?

Recently I've encountered some situations in which organizations running Db2 for z/OS in data sharing mode had lock structures that were not sized for maximum benefit. In this blog entry, my aim is to shed some light on lock structure sizing, and to suggest actions that you might take to assess it in your environment and to make appropriate adjustments.

First, as is my custom, I'll provide some background information.


One structure, two purposes

A lot of you are probably familiar with Db2 for z/OS data sharing. That is a technology, leveraging an IBM Z (i.e., mainframe) cluster configuration called a Parallel Sysplex, that allows multiple Db2 subsystems (referred to as members of the data sharing group) to share read/write access to a single instance of a database. Because the members of a Db2 data sharing group can (and typically do) run in several z/OS LPARs (logical partitions) that themselves run (usually) in several different IBM Z servers, Db2 data sharing can provide tremendous scalability (up to 32 Db2 subsystems can be members of a data sharing group) and tremendous availability (the need for planned downtime can be virtually eliminated, and the impact of unplanned outages can be greatly reduced).

One of the things that enables Db2 data sharing technology to work is what's called global locking. The concept is pretty simple: if an application process connected to member DBP1 of a 4-way (for example) Db2 data sharing group changes data on page P1 of table space TS1, a "local" X-lock (the usual kind of Db2 lock associated with a data-change action) on the page keeps other application processes connected to DBP1 from accessing data on the page until the local X-lock is released by way of the application commit that "hardens" the data-change action. All well and good and normal, but what about application processes connected to the other members of the 4-way data sharing group? How do they know that data on page P1 of table space TS1 is not to be accessed until the application process connected to member DBP1 commits its data-changing action? Here's how: the applications connected to the other members of the data sharing group know that they have to wait on a commit by the DBP1-connected application because in addition to the local X-lock on the page in question there is also a global lock on the page, and that global lock is visible to all application processes connected to other members of the data sharing group.

Where does this global X-lock on page P1 of table space TS1 go? It goes in what's called the lock structure of the data sharing group. That structure - one of several that make Db2 data sharing work, others being the shared communications area and group buffer pools - is located in a shared-memory LPAR called a coupling facility, and the contents of the structure are visible to all members of the data sharing group because all the members are connected to the coupling facility LPAR (and, almost certainly, to at least one other CF LPAR - a Parallel Sysplex will typically have more than one coupling facility LPAR so as to preclude a single-point-of-failure situation).
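By the way, if you want to see where your group's lock structure is allocated, along with its current and maximum sizes, the z/OS DISPLAY XCF command will show you. A hedged example: a Db2 data sharing group's lock structure is named groupname_LOCK1, and the group name DSNDB0G below is hypothetical:

  D XCF,STR,STRNAME=DSNDB0G_LOCK1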

Here's something kind of interesting: a global X-lock (the kind associated with a data-change action, as opposed to an S-lock, which is associated with a data-read request) actually goes to two places in the lock structure. Those two places are the two parts of the lock structure: the lock table and the lock list:

  • The lock table can be thought of as a super-fast global lock contention detector. How it works: when a global X-lock is requested on a page (or on a row, if the table space in question is defined with row-level locking), a component of the z/OS operating system for the LPAR in which the member Db2 subsystem runs takes the identifier of the resource to be locked (a page, in this example) and runs it through a hashing algorithm. The output of this hashing algorithm relates to a particular entry in the lock table - basically, the hashing algorithm says, "To see if an incompatible global lock is already held by a member Db2 on this resource, check this entry in the lock table." The lock table entry is checked, and in a few microseconds the requesting Db2 member gets its answer - the global lock it wants on the page can be acquired, or it can't (at least not right away - see the description of false contention near the end of this blog entry). This global lock contention check is also performed for S-lock requests that are associated with data-read actions.
  • The lock list is, indeed (in essence), a list of locks - specifically, of currently-held global X-locks, associated with data-change actions. What is this list for? Well, suppose that member DBP1 of a 4-way data sharing group terminates abnormally (i.e., fails - whether because the Db2 subsystem itself failed or because the associated z/OS LPAR or IBM Z server went down). It's likely that some application processes connected to DBP1 were in the midst of changing data at the time of the failure, and that means that some data pages (or maybe rows) were X-locked at that time. Those outstanding X-locks prevent access to data that is in an uncommitted state (the associated units of work were in-flight when DBP1 failed), but that blocking of access to uncommitted data is only effective if the other members of the data sharing group are aware of the retained page (or row) X-locks (they are called "retained locks" because they will be held until the failed Db2 subsystem can be restarted to release them - restart of a failed Db2 subsystem is usually automatic and usually completes quite quickly). The other members of the data sharing group are aware of DBP1's retained X-locks thanks to the information in the lock list (a hedged command example for viewing retained locks follows this list).
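If you ever want to see retained locks for yourself following a member failure, the Db2 -DISPLAY DATABASE command with the LOCKS option will show them. A hedged example, assuming that DBP1's command prefix is -DBP1 and that table space TS1 lives in a (hypothetical) database named TESTDB:

  -DBP1 DISPLAY DATABASE(TESTDB) SPACENAM(TS1) LOCKS

Retained locks are identified as such in the command's output.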


For the lock structure, size matters - but how?

If you're implementing a new data sharing group, 128 MB is often a good initial size for a lock structure in a production environment (assuming that coupling facility LPAR memory is sufficient). Suppose that you have an existing lock structure. Is it the right size? To answer that question, you have to consider the two aforementioned parts of the lock structure: the lock list and the lock table. If the lock list is too small, the effect will often be quite apparent: data-changing programs will fail because the lock list is full and they can't get the global X-locks they need (the SQL error code in that case would be -904, indicating "resource unavailable," and the accompanying reason code would be 00C900BF). Certainly, you'd like to make the lock list part of the lock structure larger before it fills up and programs start abending. To stay ahead of the game in this regard, you can look for instances of the Db2 (actually, IRLM) message DXR142E, which shows that the lock list is X% full, with "X" being 80, 90, or 100. Needless to say, 100% full is not good, and 90% full will also likely result in at least some program failures. Suffice it to say that if this message is issued on your system, 80% is the only "in-use" value that you want to see, and if you do see "80% in-use" you'll want to make the lock list bigger (you can also issue, periodically, the Db2 command -DISPLAY GROUP, or generate and review an RMF Coupling Facility Activity report - in either case, see what percentage of the list entries in the lock structure are in use).
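Here's what that periodic -DISPLAY GROUP check might look like from the console (the command prefix -DBP1 is assumed to be that of member DBP1; the DETAIL option adds member-level information):

  -DBP1 DISPLAY GROUP DETAIL

In the DSN7100I output of that command, look for the LOCK1 structure information - in particular, the NUMBER LIST ENTRIES and LIST ENTRIES IN USE figures, from which the lock list's in-use percentage is easily calculated (the NUMBER LOCK ENTRIES figure in that same output, by the way, is the lock table entry count that I'll get to momentarily).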

If you want or need to make the lock list part of a lock structure bigger, how can you do that? Really easily, if the maximum size of the lock structure (indicated by the value of the SIZE specification for the structure in the coupling facility resource management - aka CFRM - policy) is larger than the structure's currently allocated size (as seen in the output of the Db2 -DISPLAY GROUP command, or in an RMF Coupling Facility Activity Report). When that is true, a z/OS SETXCF command can be issued to dynamically increase the lock structure size, and all of the space so added will go towards enlarging the lock list part of the structure. If the lock structure's current size is the same as the maximum size for the structure, lock list enlargement will require a change in the structure's specifications in the CFRM policy, followed by a rebuild of the structure (that rebuild will typically complete in a short time, but it can impact Db2-accessing application processes, so take the action at a time when system activity is at a relatively low level). More information on lock list size adjustments can be found in an entry I posted to this blog back in 2013.
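To make the dynamic-increase option concrete, here's a hedged example of that SETXCF command, using the hypothetical DSNDB0G_LOCK1 structure name from the earlier example. The SIZE value, like structure sizes in the CFRM policy, is specified in units of 1 KB, so 196608 means 192 MB:

  SETXCF START,ALTER,STRNAME=DSNDB0G_LOCK1,SIZE=196608

If the structure had been allocated at 128 MB, the 64 MB added by this command would all go to the lock list - the lock table's size is fixed when the structure is allocated.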

How about the lock table part of the lock structure? Is yours large enough? In contrast to the lock list, the lock table can't "run out of space" and cause programs to fail. How, then, do you know if a lock table size increase would be advisable? The key here is the "false contention" rate for the data sharing group (or for a member or members of the group - both a Db2 monitor statistics report and an RMF Coupling Facility Activity report can be used to see false contention at either the group or a member level). What is "false contention?" It's global lock contention that is initially perceived but later found to be not real. How does that happen? Recall the previous reference to the lock table and the associated hashing algorithm. A lock table has a certain number of entries, determined by the size of the table and the size of the lock entries therein. The size of a lock table entry will vary according to the number of members in a data sharing group, but for my example of a 4-way group the lock entry size would be 2 bytes. If the lock structure size is 128 MB, the lock table size will be 64 MB (the lock table part of the lock structure will always have a size that is a power of 2). That 64 MB lock table will accommodate a little over 33 million two-byte lock entries. 33 million may sound like a lot, but there are probably way more than 33 million lock-able things in the data sharing group's database (think about the possibility of row-level locking for a 100 million-row table, and you're already way past 33 million lock-able things).
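The arithmetic behind that entry count, for the record: 64 MB is 67,108,864 bytes, and 67,108,864 / 2 bytes per entry = 33,554,432 lock table entries. (Regarding lock entry width: 2-byte entries accommodate a group of up to 7 members; with 8 to 23 members the entries are 4 bytes apiece, and with 24 to 32 members they are 8 bytes apiece - so a given lock table size yields fewer entries as a group grows past those thresholds.)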

How do you manage global lock contention detection with a 33 million-entry lock table when there are more (probably WAY more) than 33 million lock-able things in the database? That's where the hashing algorithm comes in. That algorithm will cause several different lock-able things to hash to a single entry in the lock table, and that reality makes false contention a possibility. Suppose an application process connected to member DBP1 of the 4-way data sharing group needs a global X-lock on page P1 of table space TS1. DBP1 propagates that request to the lock table, and receives in response an indication that contention is detected. Is it real contention? The z/OS LPARs in the Sysplex can communicate with each other to make that determination, and it may be found that the initial indication of contention was in fact false, meaning that incompatible lock requests (i.e., an X and an X, or an X and an S) for two different resources hashed to the same entry in the lock table. When the initially indicated contention is seen to be false, the application that requested the global lock (the request for which contention was initially indicated) gets the lock and keeps on trucking. That's a good thing, but resolving false contention involves some overhead that you'd like to minimize. Here's where lock table sizing comes in. As a general rule, you like for false contention to account for less than half of the global lock contention in the data sharing group (the total rate of global lock contention and the rate of false contention can both be seen via a Db2 monitor statistics report or an RMF Coupling Facility Activity report).
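To put numbers to that rule of thumb, here's a hedged example with made-up figures. Suppose that, for the data sharing group, a Db2 monitor statistics report shows these rates:

  global lock requests:          500 per second
  total global lock contention:   10 per second (2% of requests)
  false contention:                6 per second

False contention is then 60% of total contention (6 / 10) - above the less-than-half guideline, and a sign that a larger lock table would be worth considering - even though it amounts to only 1.2% of all global lock requests. It's the false share of total contention, not of total requests, that matters for the lock table sizing decision.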

What if the rate of false contention in your system is higher than you want it to be? In that case, consider doubling the size of the lock table (remember, the lock table size will always be a power of 2) - or even quadrupling it, if the rate of false contention is way higher than you want - assuming that the CF LPAR holding the structure has sufficient memory to accommodate the larger lock structure size. If you double the lock table size, you'll double the number of lock entries; that will mean that fewer lock-able things will hash to the same lock table entry, and THAT should result in a lower false contention rate, boosting the CPU efficiency of the data sharing group's operation.

Here's something you need to keep in mind with regard to increasing the size of a Db2 data sharing group's lock table: this cannot be done without a rebuild of the associated lock structure. As previously mentioned, a lock structure's size can be dynamically increased via a SETXCF command (assuming the structure is not already at its maximum size, per the CFRM policy), but all of that space dynamically added will go towards increasing lock list space, not lock table space. A bigger lock table will require a change in the lock structure's specifications in the CFRM policy, followed by a rebuild of the structure to put the change in effect. If the lock structure's current size is 128 MB, you might change its INITSIZE in the CFRM policy (the initial structure size) to 256 MB, and rebuild the structure to take the lock table size from 64 MB to 128 MB. A lock structure size that's a power of 2 is often reasonable - it will cause lock structure space to be divided 50-50 between the lock table and the lock list. Note that if the lock structure INITSIZE is not a power of 2, as a general rule the lock table size will be the power of 2 that will result in the space division between lock table and lock list being as equal as possible.
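Here's a hedged sketch of that CFRM policy change, again using the hypothetical structure name DSNDB0G_LOCK1. This is just an excerpt - a CFRM policy defines all of a Parallel Sysplex's structures and coupling facilities, and it is updated by running the IXCMIAPU utility against the full policy source - and the CF names in the PREFLIST are made up. SIZE and INITSIZE are in units of 1 KB, so the values below specify a 256 MB initial size (and thus a 128 MB lock table) and a 384 MB maximum size:

  STRUCTURE NAME(DSNDB0G_LOCK1)
            INITSIZE(262144)
            SIZE(393216)
            PREFLIST(CF01,CF02)

With the updated policy defined, a couple of SETXCF commands - the policy name CFRMPOL1 is also hypothetical - activate the policy and rebuild the structure to put the larger lock table into effect:

  SETXCF START,POLICY,TYPE=CFRM,POLNAME=CFRMPOL1
  SETXCF START,REBUILD,STRNAME=DSNDB0G_LOCK1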

And that wraps up this overview of Db2 lock structure sizing. I hope that the information will be useful for you.