The z/OS Workload Manager, for application priority control
WLM is nothing new -- it's been around for a long time. Still, there are people who seem to feel that WLM cannot be used for precise control of a DB2 client-server workload. What really galls me is the belief -- still, apparently, held by some individuals -- that all work processed through DDF executes at the priority of the DDF address space. That was once the case, long ago. What's true now, and has been for years, is that DDF's dispatching priority applies only to work done by what you might call the address space's "main" or "system" tasks -- and that work is likely to account for only a very small percentage of the total CPU consumption associated with a DB2 client-server workload (as I pointed out in a blog entry on this topic that I posted last year). The vast majority of DDF-related CPU consumption is directly tied to the execution of SQL statements via database access threads (DBATs), and THAT work can most definitely be controlled, in a dispatching priority sense, by way of a WLM policy.
In fact, it's important to explicitly assign priorities to components of your DDF workload, because if you don't, your DB2 client-server transactions will execute, by default, not at the priority of the DDF address space but as "discretionary" work. That's a low priority, and not at all good for performance in a busy system (and z/OS LPARs are commonly quite busy, typically running with relatively high levels of CPU utilization).
If you have a "mixed" DDF application environment, with some high-volume, structured transactions, some lower-volume, less-structured decision-support queries, some long-running report-generating processes, etc., can you distinguish these different types of work from a z/OS dispatching priority perspective? Of course you can. Through a WLM policy you have the ability to set up multiple service classes for your DB2 client-server workload, and to map various components of the workload to appropriate service classes using a variety of identifiers. These identifiers, which can be provided by client information APIs or through client-side data source specifications, include -- but are not limited to -- the following:
- Accounting information
- Collection name (i.e., the name of the collection for the first package used by a DRDA requester in a unit of work)
- Stored procedure name (the name of the stored procedure initially called in a unit of work)
- Application process name
- User ID (the primary DB2 authorization ID associated with an application)
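Where do these identifiers come from on the client side? One option (alongside driver-level data source properties) is the SYSPROC.WLM_SET_CLIENT_INFO stored procedure. Here's a minimal sketch -- all of the values shown are made up for illustration:

    -- Illustrative values: set client information for this connection,
    -- which WLM classification rules can then use to assign the work
    -- to an appropriate service class
    CALL SYSPROC.WLM_SET_CLIENT_INFO(
      'PAYROLL01',           -- client user ID
      'APPSRV07',            -- client workstation name
      'PAYROLLBATCH',        -- client application name
      'ACCT4711');           -- accounting information

Because the same information can often be supplied through client-side data source properties, an approach like this may require no change to the application programs themselves.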
A broad range of WLM options is available for managing DDF workload prioritization, including "period aging" for longer-running processes. A really good write-up of DB2 client-server application priority control using WLM can be found in the IBM "red book" titled "DB2 9 for z/OS: Distributed Functions" (the information is applicable to DB2 10 systems, as well). Check out section 3.3.2 in that document: "Managing DDF work with WLM."
Profiles for application-level control of DB2 connection resources
Some mainframe DB2 people have wished for a while for a more granular, server-side means of managing client connections to a DB2 for z/OS system -- something more refined than the ZPARM parameters CONDBAT (maximum connections to a DB2 subsystem from network-attached clients), MAXDBAT (maximum number of concurrently active database access threads for a DB2 subsystem), and IDTHTOIN (the maximum amount of time that an active DBAT can be idle before being cancelled). Those folks got what they wanted when, with DB2 10, the two tables SYSIBM.DSN_PROFILE_TABLE and SYSIBM.DSN_PROFILE_ATTRIBUTES, originally introduced with DB2 9 for SQL statement monitoring purposes, were made the enablers of an application-level connection management capability. Using these tables, one can define multiple scopes of DB2 connection control, based on criteria such as the IP address of an application server, the DB2 authorization ID or role associated with an application, or the name of a collection or a package used by an application. For a client-server workload component identified by one of these criteria, one can define and manage controls in relation to several thresholds:
- The maximum number of connections (including both active and inactive connections) to the DB2 subsystem from an IP address or domain name.
- The maximum number of concurrently active DBATs used for a DDF workload component identified by IP address, authorization ID, role, collection, or client application name, among other filtering criteria.
- The active DBAT idle timeout limit for a DDF workload component (as with the active DBAT limits, multiple criteria are available for defining a particular workload component). Note that the specified active DBAT idle timeout limit can be greater than or less than the subsystem-wide limit indicated by the value of IDTHTOIN in ZPARM.
For any of these connection control profiles, you can instruct DB2 to just inform you (via messages) when thresholds have been exceeded, or to take action when limits are reached (e.g., reject new incoming remote connection requests from an IP address when the connection limit for that address has been reached). A minimal set-up sketch follows below. You can find a lot more information about managing DDF connections and threads using profile tables in the DB2 10 for z/OS Information Center.
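By way of illustration (the profile ID, IP address, and threshold shown are made up), here's what it might look like to cap the number of DBATs concurrently used by requests from one application server, with EXCEPTION indicating that DB2 should enforce the limit rather than merely issue warning messages:

    -- Illustrative values: define the profile's filtering scope
    -- (here, one application server's IP address)
    INSERT INTO SYSIBM.DSN_PROFILE_TABLE
      (PROFILEID, LOCATION, PROFILE_ENABLED)
    VALUES (101, '192.0.2.10', 'Y');

    -- Cap concurrently active DBATs for that scope at 50; 'EXCEPTION'
    -- tells DB2 to act on the limit ('WARNING' would generate messages only)
    INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
      (PROFILEID, KEYWORDS, ATTRIBUTE1, ATTRIBUTE2)
    VALUES (101, 'MONITOR THREADS', 'EXCEPTION', 50);

    -- Then activate (or refresh) profiles with the DB2 command:
    -- -START PROFILE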
The query repository, for access path and execution option control at the SQL statement level
As I mentioned in the part one entry that preceded this part two, DB2 client-server applications very often involve the execution of dynamic SQL statements. That fact presented some control challenges, formerly particularly acute in situations in which the dynamic SQL statements in question could not be modified (think of client-server applications purchased from vendors, or queries generated by business intelligence tools). I say "formerly" because DB2 10 delivered a significant enhancement in the area of dynamic SQL statement control: the query repository (more formally known as the access path repository). That term refers to a set of new catalog tables that can be used for two important purposes: access path stabilization and statement execution option selectivity. In both cases, one begins by binding the individual SQL statement in question, using a DSN_USERQUERY_TABLE (the schema of which would be your DB2 authorization ID) and the new BIND QUERY command. With that action, information is inserted into the new SYSIBM.SYSQUERY catalog table. Two other new catalog tables, SYSIBM.SYSQUERYPLAN and SYSIBM.SYSQUERYOPTS, can also be populated with information when you issue the BIND QUERY command, depending on what you want to do:
- Influence a query's access path using a hint, WITHOUT having to modify the query itself. I refer to this as "hintless hints," to distinguish the capability from the more traditional DB2 hint. The problem with the latter is that it often requires modification of the SQL statement in question to add a QUERYNO clause, and, as I've noted, that is frequently not possible. With DB2 10, a hint can be applied to a query based on a matching of that query's text with the text of a query in the SYSQUERY catalog table; thus, no modification of the query to which you want the hint to apply is required. What's more, you can specify that a hint of this type is to apply based on query text-matching alone, or on a combination of text-matching and a particular schema, collection, or package.
- Specify certain execution options for a particular SQL statement. Sometimes you want a particular dynamic SQL statement to be a candidate for query parallelization, without making ALL dynamic queries candidates for parallelization through a specification of ANY for the CDSSRDEF parameter in ZPARM. Similarly, you might want a REOPT option (such as ONCE or AUTO) to apply to a certain statement, versus all statements associated with a given package (a useful capability when a whole lot of dynamic statements are associated with, say, one of the IBM Data Server Driver or DB2 Connect packages). The query repository lets you exert that kind of statement-specific control for query parallelization, reoptimization, and other execution options (such as those pertaining to star join processing) -- see the sketch after this list.
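Here's a minimal sketch of the execution option case. The authorization ID, schema, statement text, and query number are all made up, and the sketch assumes a DSN_USERQUERY_TABLE created under your ID (MYID below) with DB2 10's statement-level option columns:

    -- Illustrative values: make ONE dynamic query, matched on its text,
    -- a candidate for parallelization (HINT_SCOPE 0 = system-level matching)
    INSERT INTO MYID.DSN_USERQUERY_TABLE
      (QUERYNO, SCHEMA, HINT_SCOPE, QUERY_TEXT, DEF_CURR_DEGREE)
    VALUES
      (1, 'PRODSCHM', 0,
       'SELECT SUM(SALE_AMT) FROM PRODSCHM.SALES WHERE SALE_DATE > ?',
       'ANY');

    -- Then issue the DB2 command that inserts the information into the
    -- SYSIBM.SYSQUERY (and, in this case, SYSIBM.SYSQUERYOPTS) tables:
    -- BIND QUERY LOOKUP(NO)

For the "hintless hint" case, the flow is similar, except that you'd pair the DSN_USERQUERY_TABLE row with PLAN_TABLE rows (under the same query number) describing the desired access path before issuing BIND QUERY.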
The DB2 10 Information Center is a great source of additional information on creating statement-level optimization hints and parameters.
And there's more...
How about control over retention of certain resources needed for execution of statements associated with a given package? Prior to DB2 10, RELEASE(COMMIT) was effectively the only option for packages executed through DDF (a package bound with RELEASE(DEALLOCATE) would be treated as though bound with RELEASE(COMMIT) if executed via a DBAT). DB2 10 high-performance DBATs changed that. Do you run DB2 in data sharing mode? If so, do you want to control the data sharing group members on which requests from certain client-server applications can be processed? If you do, member subsetting is your ticket. Want even more control over the execution of a client-server workload in a DB2 data sharing system? Check out IBM's InfoSphere Optim Configuration Manager.
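On the high-performance DBAT point, a quick sketch (the collection and package names are made up): a DBAT becomes a high-performance DBAT when it executes a package bound with RELEASE(DEALLOCATE), and the behavior can be turned on and off at the DDF level:

    -- Bind the package so that its resources (e.g., table space locks)
    -- are retained across commits (names are illustrative)
    REBIND PACKAGE(MYCOLL.MYPKG) RELEASE(DEALLOCATE)

    -- Have DDF honor packages' RELEASE(DEALLOCATE) specification:
    -- -MODIFY DDF PKGREL(BNDOPT)

    -- Or revert to RELEASE(COMMIT) behavior for DDF-executed packages:
    -- -MODIFY DDF PKGREL(COMMIT)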
Client-server applications are a major driver -- perhaps the major driver -- of DB2 for z/OS workload growth these days. Be a part of that story, and know that this is work that you CAN control.