I recently provided some assistance to an organization that was experiencing throughput issues in their CICS-DB2 online transaction environment. This company was migrating a large application from a non-IBM, non-relational DBMS to DB2 for z/OS, and the initial cut-over to DB2 placed a significant load on a mainframe system that didn't have a whole lot of spare cycles to begin with. The transaction slowdowns were most pronounced during periods of very high CPU utilization. In such situations, there are some mitigating actions that one can take if one has the time to see them through. For example, a business can add processing capacity to a CPU-constrained system, but the procurement cycle might be rather drawn out in the case of a large-scale, enterprise-class server. In the particular case I've described, application re-architecture work would likely yield major CPU savings, as phase one of the database migration project did not exploit the set-oriented nature of SQL (what we'd think of as a multi-row result set is presently being retrieved via a series of singleton SELECTs -- much more costly than OPEN CURSOR/FETCH/FETCH/FETCH...). The application development leads at the company of which I'm writing are aware of this, but the SQL-exploiting phase of the application migration (as opposed to the "compatibility" phase) will take a while to implement.
In the meantime, there's this response time problem that needs attention now. What can be done quickly? A clue as to the appropriate performance tuning action came from side-by-side analysis of CICS and DB2 monitor information. The CICS monitor was reporting elevated wait-for-DB2 times during periods of degraded throughput, while at the same time the DB2 monitor was showing very good performance. Such a situation is often indicative of a problem in between CICS and DB2, and that is likely to have something to do with CICS transactions waiting to get the threads that are necessary for DB2 data access. The CICS monitor sees wait-for-thread time as wait-for-DB2 time, whereas a DB2 monitor doesn't "see" the CICS transaction until it gets a thread (and things might move along lickety-split once a thread is acquired -- thus the picture of good performance presented by the DB2 monitor).
Thankfully, for a problem of this nature there is a corrective action that both cuts down on DB2 thread create/terminate activity AND boosts CPU efficiency. It's called thread reuse. That's a relatively old concept, but it gets lost in the shuffle sometimes and isn't given much thought until a situation arises in which it's needed. Thread reuse has relevance in a transaction processing context, as it involves multiple transactions (i.e., multiple units of work, with the ratio of transactions to DB2 units of work typically being 1:1) accessing DB2 data, one after another, using the same DB2 thread; so, that thread which is reused persists across commits, as opposed to being terminated at end-of-transaction (the thread used by a batch job to access DB2 data is by its nature persistent until end-of-job: the batch job might commit ten thousand times in the course of its execution, but it will keep using the same thread).
When the DB2-accessing transactions are managed by CICS, there are basically two ways to drive up the rate of thread reuse. The first of these, which I don't much like, requires restricting the number of threads available between a CICS region and a DB2 subsystem. That restriction on the supply of threads is needed because this approach depends on transaction queuing to drive thread reuse. Here's what I mean by that: typically, when a CICS-DB2 transaction associated with DB2 plan ABC completes, the thread used for the execution of the transaction will be immediately terminated unless it is reused. The thread will be reused IF a transaction associated with the same plan is queued, waiting for a thread (that queued transaction will also have to be associated with the same CICS DB2ENTRY -- if any -- as the transaction that last used the thread). Wait-for-thread transaction queuing will only happen if the number of threads between the CICS region and the DB2 subsystem has reached its user-specified limit. In that case, new threads can't be created, so incoming transactions wait until in-use threads are freed up for reuse. [Note: I am using CICS-DB2 interface specification terms, such as DB2ENTRY, that are associated with CICS's Resource Definition Online (aka RDO) functionality. Some years ago, RDO replaced the old way of defining the interface between a CICS application-owning region (AOR) and a DB2 subsystem: a macro called the RCT (short for Resource Control Table). Information on DB2ENTRY, DB2CONN, and other DB2-related CICS resource definitions can be found in a manual called the CICS Transaction Server for z/OS DB2 Guide.]
Because I don't like to see CICS transactions queued up waiting for DB2 threads to come free, I favor an alternative means of promoting thread reuse: exploiting protected entry threads. I like this approach because it doesn't depend on forcing wait-for-thread transaction queuing. Unlike non-protected threads (those being pool threads or non-protected entry threads), a protected thread (a type of entry thread) will stick around for an average of 45 seconds following completion of the transaction that last used the thread (this time period is based on the setting of the CICS-DB2 thread purge cycle, which has a default value of 30 seconds -- a protected thread will be terminated if it is not reused within two of these purge cycles). If a transaction associated with the same DB2ENTRY and the same DB2 plan comes along in that 45-second period, the protected thread will be reused (keep in mind that several transactions can be defined for one DB2ENTRY by way of a wildcard character in the transaction name, and additional transactions can be associated with a DB2ENTRY via a DB2TRAN resource definition).
The approach to protected thread utilization that I recommend involves starting with the most frequently executed CICS-DB2 transactions. Set up a DB2ENTRY (or DB2ENTRYs) for these, with a THREADLIMIT value that is greater than zero and a PROTECTNUM value that is equal to the THREADLIMIT value (you could make PROTECTNUM smaller than THREADLIMIT, but I don't see much value in having entry threads that aren't protected). Also specify THREADWAIT(POOL) for the DB2ENTRY(s), so that transactions will overflow to the pool instead of waiting if the thread limit for the DB2ENTRY has been reached. Start with a moderate number of protected threads for an entry, and monitor the impact on thread reuse. If the thread reuse rate is not what you want it to be, increase the number of protected threads for one or more DB2ENTRYs. Note that in adding entry threads (whether protected or not) you may need to increase the value of TCBLIMIT for the CICS region's DB2CONN resource definition -- this to help ensure that you don't unintentionally hit the TCBLIMIT on the number of tasks using DB2 threads between the CICS region and the target DB2 subsystem (example: specifying 100 entry threads for a CICS region won't help you if the value of TCBLIMIT for the region is 50).
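To make this a bit more concrete, here is a rough sketch of the sort of RDO definition I have in mind (the DB2ENTRY name, group name, plan name, and transaction ID below are made up, and attribute specifics should be verified against the CICS Transaction Server for z/OS DB2 Guide):
CEDA DEFINE DB2ENTRY(PAYENT) GROUP(PAYGRP)
     TRANSID(PAY*) PLAN(PAYPLAN)
     THREADLIMIT(10) PROTECTNUM(10) THREADWAIT(POOL)
With a definition along those lines, up to 10 protected threads would be available to transactions whose IDs match PAY*, and transactions arriving when all 10 threads are busy would overflow to the pool rather than queue.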
So, how do you monitor CICS-DB2 thread reuse in your DB2 environment? I like to use the accounting detail report generated by a DB2 monitor, with the data in the report grouped either by connection type (so that you'll see a report reflecting activity for your overall CICS-DB2 workload) or by connection ID (this would show activity at the CICS AOR level). [Be aware that, based on the particular DB2 monitor in use at your site, what I call an accounting detail report might be referred to as an accounting long report, and what I call data grouping (a SYSIN-specified option in the batch job that generates the report) may be termed data ordering.]
In the accounting detail report, look for a set of fields under the heading NORMAL TERM (or something like that -- headings and field names differ somewhat from one monitor to another). This is where thread reuse data is found. The field DEALLOCATION under the heading NORMAL TERM shows the number of times that a thread was deallocated (i.e., not reused). NEW USER indicates the number of times that a CICS-DB2 thread was reused with a different authorization ID. The RESIGNON field under the NORMAL TERM heading shows the number of times that a thread was reused without an authorization ID change (DB2CONN and DB2ENTRY resource definitions specify the DB2 authorization ID to be used for transactions, and this can optionally be set to a non-changing value, such as a character string or the ID of the CICS region). The rate of thread reuse is:
(NEW USER + RESIGNON) / (NEW USER + RESIGNON + DEALLOCATION)
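To put some made-up numbers on that: if NEW USER were 0, RESIGNON were 950,000, and DEALLOCATION were 50,000, the thread reuse rate would be 950,000 / 1,000,000, or 95%.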
At one DB2 for z/OS-using organization that I support, this rate is a little over 99%.
You can also use a DB2 monitor statistics detail report (your monitor might refer to this as a statistics long report) to check on the CPU consumption of the DB2 address spaces. A good bit of the work done by the DB2 system services address space (also known as MSTR) has to do with thread creation and termination. As your rate of thread reuse goes up, you might see MSTR CPU consumption go down (though this isn't a really big deal, as MSTR usually consumes only a small amount of a system's processing resource). Rising thread reuse also reduces the average CPU cost for CICS-DB2 transactions (this would be shown as average class 2 CPU time -- also known as in-DB2 CPU time -- per transaction in an accounting detail report of CICS-DB2 activity).
Want to really maximize the CPU savings achievable with greater reuse of CICS-DB2 threads? Bind DB2 packages executed frequently by thread-reusing transactions with RELEASE(DEALLOCATE). That will cause DB2 to retain table space-level locks (these are almost always IS or IX locks, which are not exclusive) and EDM pool entries needed for the execution of the packages until threads through which the packages are executed are deallocated, as opposed to releasing these resources at commit (i.e., at end of transaction) and then reacquiring them when the packages are executed again. The combination of thread reuse and RELEASE(DEALLOCATE), while good for CPU efficiency, will increase utilization of the part of the EDM pool in which package table (PT) sections go (a statistics detail report generated by your DB2 monitor might refer to this as the RDS pool, under the heading EDM POOL). Keep an eye on this, and if the number of free pages in the RDS part of the EDM pool drops below 10% of the total number of pages used for that part of the pool, add pages (you can up the value of the EDMPOOL parameter in ZPARM, and activate the change using the DB2 command -SET SYSPARM). DB2 10, by the way, moves PT sections above the 2 GB bar (for packages bound and rebound on the DB2 10 system), giving you much more room, virtual storage-wise, for enlarging the EDM pool.
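If you want a picture of what that bind change looks like, it could be something along these lines (the collection and package names here are hypothetical, and you'd work this into your normal bind/rebind procedures):
REBIND PACKAGE(COLL1.PGMAB01) RELEASE(DEALLOCATE)
As for the EDM pool adjustment, the usual sequence would be to update and re-assemble the DB2 subsystem parameter module with the larger EDMPOOL value (commonly done via the DSNTIJUZ installation job), and then activate it with -SET SYSPARM (for example, -SET SYSPARM RELOAD to pick up the newly assembled module).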
Driving thread reuse is not just a CICS-DB2 thing. DB2 10 made it a DDF thing, too, thanks to high performance DBATs. I blogged about that DB2 10 enhancement a few months ago.
And what about that organization I mentioned at the start of this entry -- the one where CICS-DB2 transaction throughput degraded during times of very high CPU utilization? They set up some protected entry threads, and CICS-DB2 thread reuse went from zero to a much higher percentage, with attendant performance benefits. Look into this at your shop, if you're not already using protected entry threads. Your efforts could yield a nice performance payoff.
Wednesday, November 23, 2011
DB2 Data Sharing: What do you Know about Member Subsetting?
DB2 for z/OS data sharing is great technology. One of the things that makes it great is the ability of a data sharing system to automatically spread work across DB2 members so as to optimize load balancing, throughput, and availability. For this we can thank the z/OS workload manager (WLM), the DB2 group attach capability (for local-to-DB2 applications such as batch programs and utilities), and the Sysplex awareness built into DB2 Connect, the IBM Data Server Driver, and other DB2 clients (for remote data requesters).
Nice as it is to let the system spread work around the Parallel Sysplex, there are times when you'd like a particular workload to run on a subset of the members of the data sharing group. Here's a real-world example from a DB2-using organization that I support: the company has a business intelligence, or BI, workload (fairly complex data analysis queries) that targets tables also accessed by "run the business" OLTP and batch programs. To keep the BI queries from interfering with the highly performance-sensitive operational application programs, the organization restricted the analytic workload to a single member of their 6-way data sharing group. This workload isolation was easily implemented: the BI queries were issued by remote requesters, and the client systems connected to the one DB2 member to which the analytic SQL statements were restricted, versus connecting via the location name of the data sharing group (connecting to the group's location name causes requests to be spread across all of the group's DB2 members).
All well and good -- to a point. Yes, the high-volume online and batch programs with stringent performance requirements were insulated from the more dynamic BI application workload, but the analytics users were not protected from system outages as were the other users of the DB2 data sharing system. Here's what I mean by that statement: suppose that DBS1 (a made-up name) were the subsystem to which users of the BI application were required to connect. If that subsystem were to be unavailable for some reason, the BI users would get connection failures and might be out of luck for a while. In contrast to that situation, the operational application users (whether connecting locally via the DB2 group attach name, or remotely over the network using the group's location name) were assured of successful connections as long as any of the member DB2 subsystems were available (thanks, in the network-attached case, to the use of dynamic virtual IP addressing, which I briefly described in a blog entry from my independent DB2 consulting days, and about which you can read more in the IBM "red book" titled "DB2 9 for z/OS: Distributed Functions" -- most all of which is applicable to DB2 10, as well).
So, how to keep a given application workload from running on all members of a DB2 data sharing group without taking from that application's users the super-high-availability benefits conferred by DB2's shared-data technology? The solution to that challenge is something called member subsetting, and it was introduced -- for network-attached requesters -- with DB2 Version 8 (I'll cover member subsetting for locally-attached applications momentarily). Implementation of member subsetting for DRDA requesters involves creating one or more location aliases that map to subsets of the DB2 subsystems that comprise the data sharing group. For example, suppose that application ABC is to be restricted to members DBP1 and DBP2 of a 5-way data sharing group that has a location name of LOCDB2Y. In that case, a location alias associated only with members DBP1 and DBP2 could be created. Let's say that the location alias so defined is named LOCDB2X. If application ABC connects to location LOCDB2X, SQL statements issued through the application will execute only on subsystems DBP1 and DBP2 (there's your workload isolation); further, requests to connect to location alias LOCDB2X will succeed as long as one of the two associated subsystems is available -- so, if DBP1 is down due to a failure situation or a planned outage (perhaps a software maintenance upgrade), LOCDB2X connection requests will be handled by subsystem DBP2 (there's your high-availability set-up). Pretty sweet.
Setting up a location alias in a DB2 Version 8 or DB2 9 environment involves updating the communication record in the DB2 bootstrap data set (BSDS). That's done by executing the DB2 change log inventory utility (aka DSNJU003) with an input statement that would look something like this (referring to names I've used in my example, this statement would be executed via DSNJU003 on member DBP1 -- the statement for member DBP2 would look the same, but with a different RESPORT value):
DDF LOCATION=LOCDB2Y,PORT=1237,RESPORT=1238,ALIAS=LOCDB2X:8002
The location alias would have the same IP address as the group's location name. All members of the group listen on port 1237, but only members DBP1 and DBP2 would listen for requests directed to port 8002. Sysplex workload balancing for the subsystems to which location alias LOCDB2X maps works as it does for the overall group, except that the list of available subsystems returned to the DB2 client (e.g., the IBM Data Server Driver or DB2 Connect), sorted in descending order of processing capacity as assessed by WLM, includes only DBP1 and DBP2.
Very good stuff, here, with one hitch: DSNJU003 can only be executed when DB2 is down. DB2 10 delivered an online means of defining a location alias via the new command -MODIFY DDF. Again using the names referred to previously, a location alias mapping to subsystem DBP1 would be dynamically created through the issuance of this command on DBP1 (the command would also be issued on DBP2 to map the alias to that subsystem):
-MODIFY DDF ALIAS(LOCDB2X) ADD
The -MODIFY DDF command would be executed again to specify a port number for LOCDB2X -- one option pertaining to a location alias can be specified per issuance of the command. Dynamically added location aliases are initially in a stopped state by default, so the last command issued to activate the fully-defined location alias would be (and again, this command would be issued on each subsystem associated with the alias):
-MODIFY DDF ALIAS(LOCDB2X) START
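The port-assigning issuance of -MODIFY DDF mentioned above -- which would come between the ADD and the START -- would look something like this (using the port number from the earlier DSNJU003 statement):
-MODIFY DDF ALIAS(LOCDB2X) PORT(8002)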
And, the story keeps getting better: DB2 10 provided a way to implement member subsetting for locally-attached applications (those being applications, such as batch jobs, that run on the same Parallel Sysplex as the DB2 data sharing members). This DB2 10 feature is called subgroup attach. A subgroup attach name for a DB2 10 subsystem can be specified via the SUBGRP ATTACH field of installation CLIST panel DSNTIPK. It can also be added by modifying the IEFSSNxx member of SYS1.PARMLIB and then IPLing the associated z/OS LPAR (details can be found in the DB2 10 Installation and Migration Guide, available online at http://www-01.ibm.com/support/docview.wss?uid=swg27019288#manuals).
So, if your organization has a DB2 data sharing group and you want to achieve both application workload isolation AND super-high availability for users of said application, implement member subsetting via a location alias or a subgroup attach. It's like having your cake and eating it, too.
Wednesday, November 2, 2011
IOD Dispatch (3) - Gleanings from the DB2 for z/OS "Coffee Track"
IBM's 2011 Information on Demand conference took place last week in Las Vegas. I attended a number of the event's technical sessions, and summarized what I heard in some of those sessions in a couple of entries posted to this blog. In this entry, my last focused on IOD 2011, I'll provide some of the more interesting items of information that I picked up in the "coffee track" -- that being the term often used to describe discussions that take place during coffee breaks and otherwise outside of the formal presentations. I don't know about you, but the "coffee track" has always been for me one of the most valuable aspects of conferences such as IOD. I really enjoy informal conversations with IBM DB2 developers and with professionals in the DB2 user community. Even when these talks are brief, they often result in my seeing a DB2 feature or function in a new light, and thus my understanding of DB2 technology is broadened. So, without further ado, here are some of the highlights from my "coffee track" time last week:
A lot of mainframe DB2 people still have a lot to learn about native SQL procedures. On Tuesday, I delivered a presentation on DB2 for z/OS stored procedures. During that hour I spent a lot of time talking about native SQL procedures, and some related questions that I got from a session attendee were illuminating. Native SQL procedures, by the way, were introduced with DB2 9 in new-function mode (I blogged about them a few years ago, when DB2 9 was new and I was working as an independent DB2 consultant). The functionality enables the development of stored procedures that are written entirely in SQL, and unlike external SQL procedures (introduced with DB2 Version 7), which are turned into C language programs that run in a stored procedure address space, native SQL procedures run in the DB2 database services address space and have no associated external-to-DB2 executable. Anyway, the aforementioned session attendee asked me about deployment of native SQL procedures from one DB2 environment to another (e.g., from test to production). My answer didn't quite hit the mark, and the question was asked again in a slightly different form. We went back and forth in this way for a while, until the session attendee finally asked, "With native SQL procedures, there's just one 'piece', right?" Right.
With the phrase "one piece," this person was contrasting native SQL procedures with traditional external stored procedures, which have two "pieces": an external-to-DB2 program written in a language such as COBOL or C or Java, and that program's DB2 package. In that application environment (one with which many mainframe DB2 people are very familiar), stored procedure deployment is mostly about deployment of the external-to-DB2 program, and the DB2 "piece" -- the package -- is dragged along in that process. In the case of COBOL stored procedures, for example, the external-to-DB2 executable exists in the form of a load module. Organizations have procedures, often involving the use of software tools, that are followed to get a load module deployed into a given system, and these procedures ensure that the associated DB2 package is also made available in the target system. But what if the only "piece" of a stored procedure is the DB2 package? If the DB2 package has previously been "along for the ride" as an external-to-DB2 program is deployed in, say, the production system, what's to be done when the package is in the driver's seat, so to speak, and the car (to continue the vehicular analogy) is otherwise empty (i.e., there's not a second "piece" to the stored procedure)? That is indeed something new for DB2 people, and dealing with that situation means learning about the new (with DB2 9) DEPLOY option of the BIND PACKAGE command, and about extensions to the ALTER PROCEDURE statement, such as the ACTIVATE VERSION option (useful for "backing out" a version of a native SQL procedure that is causing errors). So, if you're looking to take advantage of native SQL procedure functionality (and there can be multiple benefits of doing so), get familiar not only with development of these procedures, but with deployment, as well. I have some introductory information in an entry I posted a couple of years ago to the blog I maintained while working as an independent consultant, and you can find more detailed information in the DB2 Command Reference (for BIND PACKAGE) and the DB2 SQL Reference (for ALTER PROCEDURE) -- both the DB2 9 and the DB2 10 manuals are available online at IBM's Web site.
DB2 for z/OS for data warehousing -- when the application "cannot fail." I had a talk with a couple of IOD attendees from a large company. Their organization had recently spent months in developing go-forward plans for a mission-critical business intelligence application. The basic decision was this: leave the application's database on the mainframe DB2 platform, where it had been for some time, or move it to a non-mainframe, non-DB2 system. The company chose to stay with DB2 for z/OS, mainly for two reasons: 1) they are confident that DB2 on System z can scale to handle the big increase in database size and access activity expected over the coming years, and 2) they place a high value on the rock-solid reliability of the mainframe DB2 platform (as one of the people with whom I spoke put it: "The [decision support application] database cannot fail"). In expanding on that second point, these folks went beyond System z's and DB2's well-earned reputation for high availability, and talked up the excellence of their DB2 for z/OS support staff -- systems programmers and DBAs.
Sometimes, we take for granted the deep expertise possessed by a lot of companies' mainframe DB2 teams. DB2 for z/OS has been around for more than 25 years (IBM invented relational database technology), and many DB2 for z/OS professionals have two decades or more of experience in working with the product. They know how to configure DB2 systems and design DB2 databases for high availability, and they know what to do if a data recovery situation arises. On top of that, these teams tend to have processes, developed over a number of years, around things such as change management, capacity planning, and performance monitoring and tuning, that are very robust and very effective. Combine a super-reliable hardware/software platform with a veteran and highly knowledgeable support staff, and you get a data-serving system that organizational leaders trust to deliver information as needed, whenever it's needed.
DB2 10 for z/OS: it's about time. There was a lot of buzz at IOD 2011 about the temporal data support provided by DB2 10 for z/OS, and with good reason: this is functionality that would be a bear to provide via your own application code (something I know from first-hand experience), and even if you went that route you couldn't match the performance of the built-in DB2 capability.
There are two flavors of temporal data support available in a DB2 10 new-function mode environment (and they can be employed separately or together for a given table): system time (this has to do with automatic saving of "before" images of rows targeted by UPDATE and DELETE operations in a history table, with information indicating the timestamps between which a row was in the "current" versus the "history" state) and business time (this enables, among other things, a "future insert" capability, whereby a row with, for example, next year's price for a product can be added to a table without affecting programs querying "currently valid" rows).
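Here's a bare-bones sketch of the system time flavor, using made-up table and column names (the syntax particulars are worth double-checking in the DB2 10 SQL Reference):
CREATE TABLE POLICY
  (POLICY_ID  INTEGER NOT NULL,
   COVERAGE   INTEGER NOT NULL,
   SYS_START  TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
   SYS_END    TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
   TRANS_ID   TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
   PERIOD SYSTEM_TIME (SYS_START, SYS_END));
CREATE TABLE POLICY_HIST
  (POLICY_ID  INTEGER NOT NULL,
   COVERAGE   INTEGER NOT NULL,
   SYS_START  TIMESTAMP(12) NOT NULL,
   SYS_END    TIMESTAMP(12) NOT NULL,
   TRANS_ID   TIMESTAMP(12));
ALTER TABLE POLICY ADD VERSIONING USE HISTORY TABLE POLICY_HIST;
-- what was this policy's coverage on June 15?
SELECT COVERAGE
  FROM POLICY FOR SYSTEM_TIME AS OF TIMESTAMP('2011-06-15-10.30.00')
  WHERE POLICY_ID = 1234;
Once the ALTER has been executed, DB2 takes care of moving "before" images of updated and deleted POLICY rows to POLICY_HIST, and the FOR SYSTEM_TIME clause transparently pulls in history rows as needed.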
I appreciated the value of system time temporal data support as soon as I learned of it, having done, in the mid-1990s, some consulting work for an insurance company that had written "row change history" logic into their application code -- this to enable determination of a policy holder's coverage at the time that an automobile accident (for example) occurred. With regard to business time, however, I'll admit that I initially had some trouble envisioning use cases. I heard of an interesting one at the conference last week: a company is utilizing DB2 10 business time to enable analysts to do future forecasting of profit margins. Very cool. I expect that as time goes by, organizations will come up with new and innovative ways to derive value from the system time and business time capabilities of DB2 10.
I had a great time last week, and I'm already looking forward to IOD 2012 -- hope to see you there.
Friday, October 28, 2011
IOD Dispatch (2) - DB2 9 and 10 for z/OS SQL, and DB2 Connect Set-Up
In this, my second dispatch from the 2011 IBM Information on Demand conference, more items of information from sessions attended.
SQL enhancements delivered in DB2 for z/OS Versions 9 and 10 - This presentation was given by Chris Crone, a Distinguished Engineer with the DB2 for z/OS development organization at IBM's Silicon Valley Lab. Among the DB2 9 SQL enhancements covered by Chris in the session were the following:
- INTERSECT and EXCEPT - Given two query result sets that line up in terms of number of columns and data types of columns (i.e., the data type of the nth column of result set 1 is compatible with the data type of the nth column of result set 2), the INTERSECT set operator makes it easy to write a SELECT statement that will return only those rows that appear in both result set 1 and result set 2. EXCEPT is the ticket when you want the SELECT statement to return rows from result set 1 that do not appear in result set 2 (or vice versa).
- INSTEAD OF triggers - I liked Chris's succinct description of this DB2 9 enhancement: "These are triggers on views." Basically, an INSTEAD OF trigger can be used to enable the updating of data through a view, when the data-change operation in question (INSERT, UPDATE, or DELETE) would be technically impossible or practically unfeasible in the absence of the INSTEAD OF trigger. For example, an INSTEAD OF UPDATE trigger could be used to change an employee's last name through an UPDATE targeting a view that joins the EMPLOYEE table to the DEPARTMENT table (this to bring in from the DEPARTMENT table the name of a department referenced only by department number in the EMPLOYEE table). Such a view is read-only by definition, but the INSTEAD OF trigger enables the UPDATE targeting the view to execute successfully, by changing the action to an update of the EMPLOYEE table.
- MERGE - Using this statement, one can change data in a target table based on a comparison of target table rows with "rows" in a virtual "input table" ("input table" content is provided via array variables -- one for each "column" of the "input table"). When an "input table" row matches one or more rows in the target table (per matching criteria specified in the MERGE statement), the target table row (or rows) is updated with values in the matching "input table" row. When there is no target table match for an "input table" row, that row is inserted into the target table.
- SELECT FROM UPDATE (and FROM DELETE and FROM MERGE) - This enhancement rounds out the SELECT FROM INSERT capability introduced with DB2 Version 8. The same concept applies: with one statement (versus separate SELECT and data-change statements), change data in a table and retrieve information about the data change operation. Chris showed how an INCLUDE column (a column defined and assigned values via a SELECT FROM UPDATE/DELETE/MERGE statement) could be used to determine whether rows in the "input table" of a MERGE statement (see my description of MERGE, above) were inserted into the target table or were used to update target table rows.
- TRUNCATE - This is a statement that can be used to clear data out of a table. Why use TRUNCATE versus a mass delete (that being a DELETE statement with no WHERE clause)? Here's one reason: TRUNCATE can be used to empty a table without causing delete triggers to fire (if that's what you want).
- New OLAP functions - These functions -- RANK, DENSE_RANK, and ROW_NUMBER -- make it easy to code a SELECT statement that will a) sort a query result set, in ascending or descending sequence, according to a user-specified single- or multi-column value (and this sort takes place BEFORE the sort for an ORDER BY clause that might appear at the "end" of the query), and b) assign an integer value to each row in this sorted result set, starting with 1 and counting up (as Chris put it, one can think of the OLAP specifications "as counting functions that count in different ways"). The difference referred to in Chris's statement has to do with how "ties" (referring to the OLAP specification sort key value) are handled. ROW_NUMBER assigns an integer value of n to the nth row in the sorted result set, regardless of whether or not the sort key value in that row is the same as that of the previous row in the sorted set (in other words, "tied" rows get different ROW_NUMBER values, so if the result set is sorted in descending SALES order, and rows four and five in the set have the same SALES value, those rows will get ROW_NUMBER values of 4 and 5, respectively). RANK and DENSE_RANK assign equal integer values to "tied" rows, but differ in how the row after a set of "tied" rows is numbered: RANK will skip integer values, so that the next row after a set of "tied" rows will be assigned a RANK value of n if it is the nth row in the result set; DENSE_RANK, on the other hand, will assign a rank of n+1 to the first row after a set of "tied" rows if those "tied" rows were assigned a rank of n. So, if rows three, four, and five in a set sorted by SALES have the same SALES value, they will each get a rank value of 3, whether the OLAP specification is RANK or DENSE_RANK; the next row in the set (row six) will get a rank value of 6 if RANK is specified, and a rank value of 4 if DENSE_RANK is specified (i.e., DENSE_RANK leaves no "gaps" in the rank values it assigns). A brief example appears right after this list.
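Here's the example promised above -- a minimal sketch against an invented SALES_SUMMARY table:
SELECT SELLER_ID, SALES,
       ROW_NUMBER() OVER (ORDER BY SALES DESC) AS ROW_NUM,
       RANK()       OVER (ORDER BY SALES DESC) AS SALES_RANK,
       DENSE_RANK() OVER (ORDER BY SALES DESC) AS SALES_DENSE_RANK
  FROM SALES_SUMMARY
 ORDER BY SALES DESC;
If rows four and five of the sorted set carry the same SALES value, they get ROW_NUMBER values of 4 and 5, a RANK of 4 for both, and a DENSE_RANK of 4 for both; the row that follows gets a RANK of 6 but a DENSE_RANK of 5.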
In the second part of his presentation, Chris described SQL enhancements delivered in DB2 10 for z/OS, including the following:
- Richer OLAP specifications: moving aggregates (e.g., moving sum and moving average) - These specifications enable one to code a SELECT that will partition a result set by a user-specified single- or multi-column value, sort within result set partitions by a user-specified single- or multi-column value, and then aggregate column values within partitions in a "moving window" fashion. For example, SALES values in rows in a partition might be averaged in this way: take a row's SALES value and average it with the SALES values of the two preceding rows in the partition. Row 1 in a partition has no preceding rows, so its "moving average" value would be the SALES value in the row. The "moving average" value for row 2 in a partition would be the average of the row 2 and row 1 SALES values (there is only one preceding row in that case). The "moving average" values for rows 3 through n in a partition will be the average of a row's SALES value and the SALES values of the two preceding rows in the partition. You could also generate other types of "moving" aggregates, such as moving counts and moving sums. A SQL sketch of this moving-average example appears after this list.
- TIMESTAMP enhancements - A DB2 10 timestamp value can have up to 12 digits of precision for the fractional second part of the value (that gets you to the picosecond level -- trillionths of a second). Additionally, the precision of the fractional second part of a timestamp value can be user-specified, with the range of the user-specified precision being 0 to 12 digits (6 digits is the default).
- TIMESTAMP WITH TIME ZONE - This new optional specification for the TIMESTAMP data type allows a time zone value to be stored with a timestamp value (the time zone value is an offset from UTC, formerly known as Greenwich Mean Time). To go along with the new WITH TIME ZONE option for the TIMESTAMP data type, there are new special registers (e.g., SESSION TIME ZONE), and the built-in date/time functions (e.g., EXTRACT) have been extended to support TIME ZONE values.
- Implicit data type casting - In a DB2 10 system, you can concatenate a character string value with a numeric value without having to first explicitly cast the numeric value as a character string -- that casting will be done for you by DB2. Similarly, DB2 10 will let you provide the character string representation of a numeric value as the argument for a built-in function such as FLOOR -- that character string will be implicitly cast as a numeric value by DB2.
- Extended indicator variables - Use of extended indicator variables is limited to host variables used to input data to DB2 (i.e., they can be used with host variables specified in INSERT, UPDATE, and MERGE statements); furthermore, extended indicator variables have to be enabled via the package bind option EXTENDEDINDICATOR(YES) or via the WITH EXTENDED INDICATORS option of PREPARE. An extended indicator value of -5 means that the target column for the associated host variable is to be set to its default value. An extended indicator value of -7 means that the target column for the associated host variable is to be treated as if it had not been specified in the statement.
- Row permissions and column masks - Defined via CREATE PERMISSION and CREATE MASK, respectively, these are essentially table-level rules that control access to data rows and columns. Row permissions and column masks are better than views when it comes to establishing security as it pertains to row and column access, because they become part of the associated table's definition.
- Cursors declared WITH RETURN TO CLIENT - When a cursor is declared in a stored procedure with this new option, the result set of that cursor can be retrieved by the "top-level" program that initiated a set of nested stored procedure calls, even if the stored procedure in which the cursor is declared is several levels down from the top-level program (WITH RETURN TO CALLER, formerly the only form of WITH RETURN for a cursor declaration, allows a result set to be retrieved by a program that is "one level up" from the stored procedure in which the cursor is declared, referring to levels of nested stored procedure calls). In talking about the WITH RETURN TO CLIENT option of DECLARE CURSOR, Chris noted that DB2 10 supports nested stored procedure calls up to 64 levels deep (versus a limit of 16 levels in prior releases of DB2).
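And here is the moving-average sketch promised in the first item of the list above (DAILY_SALES and its columns are invented):
SELECT ACCT_ID, SALE_DATE, SALES,
       AVG(SALES) OVER (PARTITION BY ACCT_ID
                        ORDER BY SALE_DATE
                        ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS MOVING_AVG
  FROM DAILY_SALES;
For the first and second rows of an ACCT_ID partition the window holds fewer than three rows, so the average is taken over one and two rows, respectively -- matching the behavior Chris described.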
Optimizing DB2 Connect deployments - This session was delivered by Brent Gross, a member of the DB2 for Linux, UNIX, and Windows development team at IBM's Toronto Lab. Brent noted, among other things, that with respect to DB2 clients, you want to go with "the lightest, smallest package possible." He recommends the Data Server Driver Package (aka the DS Driver) in most cases.
Brent also mentioned that a direct DB2 client to DB2 for z/OS server connection is preferred, in most cases, to a set-up that has DB2 clients accessing DB2 for z/OS through a DB2 Connect gateway. The main benefits of direct-to-DB2 connections are 1) simplified network topology and 2) improved performance. Regarding that second point, Brent said that elapsed time improvements of 15-30% have been observed when direct connection of clients to DB2 for z/OS has replaced a set-up that had client-to-DB2 connections going through a DB2 Connect gateway (OLTP workloads see the greatest performance gains).
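As an aside -- and this is my sketch, not something shown in the session -- a direct (type 4) connection from a Java application to a DB2 for z/OS data sharing group might be configured with a URL and driver property along these lines, where the host name, port, and location name are invented and enableSysplexWLB is the property that turns on sysplex workload balancing:
jdbc:db2://db2grp.example.com:446/LOCDB2Y
enableSysplexWLB=true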
Brent pointed out that there are some situations in which client connections to a DB2 for z/OS server have to go through a DB2 Connect gateway:
- When a client-side transaction manager is using two-phase commit implemented through a "dual transport" model (Tuxedo and Encina are two transaction managers that implement two-phase commit in this way).
- If homogeneous federation is to be used (this refers to the ability to create nicknames across DB2 and Informix servers).
- When DB2 Connect Enterprise Edition is licensed with the "concurrent users" option.
Also in support of the general recommendation that DB2 clients connect directly to a DB2 for z/OS server, Brent pointed out that this approach carries with it no penalties with regard to functionality versus the DB2 Connect gateway alternative (for example, sysplex workload balancing, available for years for Java applications directly connected to DB2, was made available for .NET and ODBC and CLI clients with version 9.5 Fix Pack 3 of the DB2 client code).
For organizations looking to migrate from a DB2 Connect gateway to a direct client-to-DB2 set-up, Brent recommended starting with application servers (versus "fat client" workstations).
A few other items of information provided by Brent during the session:
- DB2 10 for z/OS enables maximum connection, thread, and timeout values to be specified at the application server level via server profiles.
- Full support of DB2 10 functionality for DRDA requesters requires the use of Version 9.7 Fix Pack 3a (or above) of DB2 client code.
- For maximum availability in a DB2 for z/OS data sharing environment, dynamic virtual IP addresses (DVIPAs) should be assigned to DB2 member subsystems, and clients should connect to the data sharing group using the distributed DVIPA that is assigned to the Sysplex Distributor component of z/OS.
Tuesday, October 25, 2011
IOD Dispatch (1) - Of IDAA and DB2 DRDA
Greetings from Las Vegas, Nevada, site of the 2011 IBM Information on Demand conference (aka IOD). This is the first of several entries I'll post this week to my blog, with highlights from some of the DB2 sessions I've attended. Today I want to share with you good stuff that I picked up from two excellent presentations: one, delivered yesterday to a group of IBMers, on the new IBM DB2 Analytics Accelerator, and one about DRDA best practices in a DB2 for z/OS environment.
The IBM DB2 Analytics Accelerator: wow-level performance for complex, data-intensive business intelligence queries. Boy, did this session ever grab my attention. Big Blue recently announced a new product, the IBM DB2 Analytics Accelerator (IDAA), that is already delivering, at beta sites, eye-popping response times for some really hairy data warehouse queries. The IDAA is, in essence, a deep integration of DB2 for z/OS and IBM's Netezza data analytics server. Here's how it works: a user or application directs a report-generating or decision support query to a DB2 for z/OS system. DB2 then does what it always does -- it optimizes the query; BUT -- and this is the really cool part -- the optimizer has, in the form of the IDAA, a new access path that it can select (if the IDAA is added to an existing DB2 system, the extended optimizer functionality is added by way of a DB2 PTF). Here's the deal: some or all of the tables in the DB2-managed data warehouse are copied over to the IDAA, and if a query targets one (or more) of those tables, the DB2 optimizer can analyze the query and send it to the IDAA for processing if it determines that this would be the best-performing option for result set generation; otherwise, DB2 will execute the query locally (the IDAA is attached to the DB2 for z/OS system via a 10 gigabit Ethernet connection).
Here's why I like this solution a lot:
- The performance results can be pretty astounding (more on this in a moment).
- In an environment in which a DB2 for z/OS database has been serving business intelligence query requests, the IDAA is invisible to end users and applications -- they continue to direct queries to the DB2 system. DB2 makes the query routing decision, so users and application programmers don't have to.
- The database is updated, backed up, and secured on the DB2 end of things (data on the IDAA can ONLY be accessed via the attached DB2 system). Those are things that seasoned DB2 for z/OS DBAs do very well, and full advantage is taken of the industry-leading security and availability benefits delivered by the mainframe DB2 platform.
- It's a best-of-both-worlds solution for modern data warehouse applications that are characterized by high-volume OLTP-like queries (served efficiently by the DB2 system) AND complex, data-intensive SQL requests (processed very effectively by the IDAA).
- The time from "un-boxing" of the IDAA to ready-for-use is very short. No database schema changes are required (the IDAA can deliver outstanding complex query performance for traditional relational database designs and for star schemas), and IDAA set-up is accomplished via an intuitive graphical interface.
And about performance: I've mentioned several times that the IDAA can deliver huge improvements in response time for queries that are highly complex and/or which require very large-scale data scans when executed. Words are fine, you may be thinking. How about some numbers? Well, we heard from an IT manager with an organization (a big insurance company) that was a beta test site for the IDAA. To quote this gentleman's understated assessment, test results "were impressive." How impressive? How about 4566 seconds for execution of a certain query before IDAA implementation, and 3.7 seconds after. How about 9558 seconds for another query before, and 5 seconds after. We're talking orders of magnitude improvement in elapsed time. More from the beta testing organization:
- The data warehouse used for IDAA testing is big, with one table having more than a billion rows and some 200 tables having more than 500 million rows apiece.
- High throughput was seen for IDAA data load operations, with 590 million rows loaded in 9.5 minutes, and 1.5 billion rows loaded in 15 minutes.
- The IDAA was attached to a DB2 for z/OS data sharing group (attachment to a standalone DB2 subsystem is, of course, supported as well).
DRDA best practices for DB2 for z/OS -- This session was delivered by Christian Molaro, a Belgium-based DB2 professional whose expertise has been recognized by Big Blue: Christian is an IBM Gold Consultant and an IBM Champion for Information Management. In his presentation, Christian made a number of very sound recommendations related to the processing of client-server workloads in a DB2 for z/OS environment (DRDA, or Distributed Relational Database Architecture, is the protocol used by DB2 servers and requesters for distributed database access). Among these recommendations were the following:
- Keep your DB2 client code current. In working with his clients, Christian has encountered several situations in which DB2 distributed database access problems were traced to down-level DB2 client code. He recommended that client applications utilize -- at least -- the version 9.5, fix pack 3 level of DB2 client code, and pointed out that the 9.7, fix pack 3a level of client code is required for full support of DB2 for z/OS Version 10 functionality.
- Choose the right client package. IBM provides a number of DB2 drivers and client packages (see http://www-01.ibm.com/support/docview.wss?uid=swg21385217). Christian recommended going with the package that has the smallest footprint while delivering the functionality required by the client application.
- Using an IBM driver to go straight from the client application server to DB2 (versus going through a DB2 Connect gateway server) is usually the right choice -- but not always. The no-gateway approach generally delivers optimal performance and simplifies the application infrastructure. On the other hand, using a DB2 Connect gateway server can be a good approach if the environment is characterized by a large number of "fat" clients, as opposed to a relatively small number of application servers that might front numerous upstream end user workstations.
- Speaking of DB2 Connect, this can be a very good initial workload for hosting on a Linux image on a System z server. A performance advantage can be realized through the use of HiperSockets for communication with a DB2 subsystem in a z/OS LPAR on the same mainframe server. On top of that, when folks see that Linux on System z provides a great environment for a DB2 Connect gateway, they might be inclined to follow up with the implementation of an application server (such as WebSphere) under Linux on z.
- DB2 10 for z/OS does not support private protocol, so make sure that your programs aren't using it before you migrate to DB2 10. APAR PK64045 provides useful information in this area.
- For most efficient use of mainframe memory and cycles, ensure that DB2 support for inactive threads is enabled. This is done by setting the CMTSTAT parameter of DSNZPARM to INACTIVE.
- Location aliases are a great way to restrict execution of certain client-server workloads to a subset of the members of a DB2 data sharing group. DB2 10 delivered a nice enhancement here, enabling dynamic addition, deletion, and modification of location aliases via the -MODIFY DDF command (with prior releases of DB2, location alias changes required execution of the change log inventory utility, aka DSNJU003, and that utility can only be executed when DB2 is down). A command sketch appears after this list.
- High-performance DBATs, a DB2 10 enhancement enabled through binding of DRDA-using packages with RELEASE(DEALLOCATE), can deliver significant improvements in CPU efficiency for client-server workloads. I blogged on this very topic earlier this year.
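With respect to those last two items, here is a hedged sketch of the associated commands (the alias, port, collection, and package names are made up; the full MODIFY DDF ALIAS syntax is in the DB2 10 Command Reference):

  -MODIFY DDF ALIAS(RPTALIAS) ADD          (dynamically define a new location alias)
  -MODIFY DDF ALIAS(RPTALIAS) PORT(5447)   (associate a port with the alias)
  -MODIFY DDF ALIAS(RPTALIAS) START        (make the alias available to requesters)

  REBIND PACKAGE(COLLID1.PROG1) RELEASE(DEALLOCATE)   (allows DRDA work using this package to ride high-performance DBATs)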
Friday, October 21, 2011
DB2 for z/OS and SMS: a DB2 10 Migration Step That You Can Take Ahead of Time
[Note: the IBM developerWorks Web site has undergone some significant changes since this blog entry was originally posted in October of 2011. These changes, aimed largely at making the site more community-oriented (there are several DB2-related communities), are welcome, but as a result the path to some files is not the same as before. I have updated the "how to find" information in this blog post accordingly.]
Those of you looking to migrate your DB2 for z/OS Version 8 or Version 9 systems to DB2 10 may be aware of the fact that the DB2 catalog and directory data sets must be SMS-managed in a DB2 10 environment, and that SMS must be set up to manage these data sets before you actually perform the migration process (SMS, of course, refers to the Storage Management Subsystem component of the z/OS operating system).
At some sites, the DB2 catalog and directory data sets are already SMS-managed (DB2 9 provided a USE SMS option on panel DSNTIPA2 of the DB2 install CLIST, which if set to YES would cause DB2 catalog and directory data sets to be allocated on SMS-managed storage). For folks with DB2 catalog and directory data sets that are not SMS-managed, IBM provides a job, called DSNTIJSS, that can be used to create a stand-alone SMS environment for the DB2 catalog and directory (that's for sites that do not currently have an SMS environment -- DSNTIJSS can also be used as a guide for adapting an existing SMS environment to achieve SMS management of the DB2 catalog and directory data sets). DSNTIJSS ships with DB2 10 -- it's a member of the SDSNSAMP library.
What if you want to use DSNTIJSS to help you get set up for SMS management of the DB2 catalog and directory data sets, but your organization has not yet ordered DB2 10? The good news for you is that this job is available on the IBM developerWorks Web site. developerWorks, by the way, is a good site for you to get to know anyway. There is a lot of great stuff there, and contrary to what you might assume because of the site's name, the content is NOT limited to items that would only be of interest to developers -- DB2 DBAs and systems people will find many useful papers, files, etc.
Information on getting the DSNTIJSS job from developerWorks follows. First I'll provide "find it" instructions starting from the developerWorks home page -- this because it's my way of encouraging you to look around at what the site has to offer. I'll follow that with more of a short-cut path to the file.
The scenic route:
- Go to the developerWorks home page (http://www.ibm.com/developerworks/)
- On that page, click on "Community" in the black banner across the top of the page. A gray box will drop down. Click on the "Communities" link (that's "Communities" plural, not the bold-faced "Community" link).
- After clicking the "Communities" link, you'll see a gray banner near the top of the page, just under the developerWorks black banner. In the search box in the gray "Communities" banner (NOT the search box in the black "developerWorks" banner), you'll see a search scope indicator (it might be pre-filled with "Public Communities"). Click on that part of the search box to get a drop-down menu, and select "developerWorks Community" from that menu.
- After selecting "developerWorks Community" as the search scope, enter DSNTIJSS as the search term next to the magnifying glass icon, and hit Enter. [At this point you'll get a sign-in page, if you're not already signed into developerWorks. To sign in you need an IBM ID. Getting one of these is free and quick. Sign in with your IBM ID.]
- Click on the returned dsntijss.copy link, and you're there.
The more direct route: go to the developerWorks "public files" page (https://www.ibm.com/developerworks/community/files/app#/), and enter DSNTIJSS in the search box in the gray banner across the top of the page (again, not the search box in the black banner above the gray banner). That will return the dsntijss.copy link. Click on that and download the file.
That's it! Note that dsntijss.copy is a text file that can be opened using an editor such as Notepad.
A few more items of information for you:
- A good source for more information about DSNTIJSS is the DB2 10 Installation and Migration Guide. You can get a copy of this manual in PDF form by going to the DB2 10 for z/OS manuals page on the Web (if you'd rather access this manual's content in the DB2 10 Information Center, just click on the HTML link next to the manual's title in the list of manuals).
- There is also useful information about DSNTIJSS in the IBM "red book" titled "DB2 10 for z/OS Technical Overview" (see section 12.9, "SMS-managed DB2 catalog").
- Although getting set up for SMS management of the DB2 catalog and directory data sets is required prior to migrating a subsystem to DB2 10, it is not necessary to convert existing non-SMS-managed catalog and directory data sets to the SMS environment -- any such data sets not changed to SMS management during the DB2 10 migration process will become SMS-managed the next time the associated table space is reorganized.
DB2 10 has a whole lot to offer to organizations now using DB2 Version 8 or Version 9. The sooner you get to DB2 10, the sooner your company can benefit from numerous functionality, performance, and high-availability enhancements. I hope that the information in this blog entry will provide a boost to your DB2 10 migration efforts.
Tuesday, October 11, 2011
Even a "One-Box" DB2 Data Sharing Group Can Boost Availability
Back in the mid-1990s, when organizations were first implementing DB2 for z/OS data sharing groups on Parallel Sysplex mainframe clusters, there were ALWAYS multiple server "boxes" involved. For one thing, the only type of coupling facility (CF) available was the external type -- a box of its own. And, you'd have two of those, to allow for things like lock structure rebuild in CF2 in case of the failure of, or loss of connectivity to, CF1. So, that's two boxes already, and we haven't gotten to the mainframe servers yet. You'd almost certainly have at least two of these, as well, because the mainframes that IBM was selling at the time -- the first generation built with CMOS microprocessors versus bipolar chip sets -- maxed out at 50 MIPS of processing power (5 MIPS per engine -- I'm not kidding -- with up to 10 engines per box). Parallel Sysplexes with four or five or more boxes -- coupling facilities and mainframe servers -- were the norm.
Not too many years later, the number of boxes in Parallel Sysplex / DB2 data sharing configurations started going down. Two key factors were at play here. First, IBM came out with internal coupling facilities (ICFs) -- basically allowing an organization to dedicate one or more mainframe engines and some mainframe memory to one or more LPARs that would run Coupling Facility Control Code instead of z/OS. That reduced Parallel Sysplex server "footprints," and saved companies some money, to boot (internal CFs are less expensive than their external cousins). The other development that had a reducing effect on the number of boxes found in the typical Parallel Sysplex was the introduction of ever-more-powerful mainframe servers -- machines featuring more engines and more MIPS per processor. Organizations found that they no longer needed five or eight or ten System z servers to get whopping huge amounts of processing capacity. The current flagship of the System z line, the z196, can have up to 80 general purpose processors with an aggregate capacity of more than 52,000 MIPS in ONE footprint (up to 16 additional processors can be configured as specialty engines, examples of which include internal coupling facility engines and zIIPs). Lash just a couple of these bad boys together in a 'Plex, and you've got one very highly scalable computing platform.
With all this about internal coupling facilities and super-powerful mainframes said, plenty of organizations want a minimum of three boxes in a Parallel Sysplex / DB2 data sharing configuration. A three-box set-up does indeed provide the ultimate in high availability, as it enables you to avoid a scenario in which the failure of one box results in the simultaneous loss of 1) a DB2 data sharing member and 2) the coupling facility lock structure and/or the coupling facility shared communications area (that particular simultaneous loss would cause a data sharing group failure which would then require a group restart). This scenario, sometimes called the "double failure" scenario, can also be avoided in a two-box Parallel Sysplex if the coupling facility lock structure and the coupling facility shared communications area are duplexed. I wrote a good bit about the "double failure" scenario and the pros and cons of lock structure and SCA duplexing in an entry posted last year to the blog I maintained while working as an independent DB2 consultant.
Ultimate DB2 data sharing availability, then, is delivered by a multi-box Parallel Sysplex. Now, suppose your organization runs DB2 for z/OS on one System z server, and suppose that single-box situation is unlikely to change anytime soon. Could your company still realize value from the implementation of a DB2 data sharing group, even if that group were to run on a one-box Parallel Sysplex? YES. Here's why: even on a one-box Sysplex, DB2 data sharing can deliver a very important availability benefit: the ability to apply DB2 maintenance, and even to migrate to a new release of DB2 (as I pointed out in a previous post to this blog), without the need for a maintenance window (you apply fixes to the load library of a member of the DB2 data sharing group, quiesce application traffic to that member, stop and start the member to activate the maintenance, resume the flow of application requests to the member, then do the same for the other member -- or members -- of the group). To put it another way: even when the Parallel Sysplex infrastructure (at least two z/OS LPARs, and at least two internal coupling facilities) is configured on one mainframe, implementation of a DB2 data sharing group will enable you to apply DB2 maintenance (and to perform DB2 version upgrades) without having to stop the application workload. At a time when unplanned DB2 outages are more and more rare (thanks to ever-more-reliable hardware and software), the opportunity to virtually eliminate planned outages can be a very big deal for an organization.
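In command terms, the member-by-member process might look something like the following for a hypothetical two-member group (members DB2A and DB2B). This is a sketch of the flow, not a complete change-control procedure:
- Apply the PTFs to DB2A's load library (for example, via SMP/E APPLY).
- Quiesce application traffic going to member DB2A.
- -STOP DB2 MODE(QUIESCE), issued on DB2A.
- -START DB2 on DB2A, which now runs the updated code.
- Resume the flow of application requests to DB2A, then repeat these steps for DB2B.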
And even though the frequency of unplanned DB2 outages is diminishing, in an increasingly online world the cost of unexpected downtime is higher than ever (i.e., tolerance for unplanned outages is lower than ever). A DB2 data sharing group running on a one-box Parallel Sysplex can greatly reduce the scope of an abnormal DB2 subsystem termination: if such a failure occurs in a data sharing system, only the data and index pages (or rows, if row-level locking is used) changed by units of work that were in-flight on the failing DB2 subsystem become unavailable until the failed subsystem is restarted (this as opposed to the whole database becoming inaccessible if a standalone DB2 subsystem terminates abnormally). On top of that, DB2 restart processing tends to complete more quickly in a data sharing versus a standalone DB2 environment (externalization of changed pages to group buffer pools at commit means that the roll-forward part of restart requires less time).
Of course, a data sharing group running on a one-box Parallel Sysplex can't provide data access if the one mainframe server fails. Maintaining application availability in the event of such a failure (or of a planned server outage) would require a multi-box Sysplex. The point I want to make is this: you can get a lot -- though not all -- of the availability benefits of DB2 data sharing even if your Parallel Sysplex is contained within one System z server (consider: mainframes are extremely reliable). There are organizations out there right now that have boosted uptime for DB2-accessing applications by implementing data sharing groups on single-box Parallel Sysplexes. Want ultimate data availability? Do data sharing on a multi-box Sysplex. If the multi-box route is not an option for your company, don't assume that DB2 data sharing can't work for you. It can.
Monday, September 26, 2011
How big can a DB2 for z/OS index be?
That looks like a pretty simple question, doesn't it? I mean, if I were asking about maximum table space size, you'd be able to answer, really quickly, with "64 GB for a non-partitioned table space, 128 TB for a non-LOB partitioned table space," right? Asking about maximum size for an index doesn't make the question that much harder, does it? Or does it? Do you know the answer? Read on, and you will.
Just last week, I got a question from a DBA at one of the DB2-using organizations that I support. She had a non-partitioned index (NPI) defined on a table in a big partitioned table space. The index had grown to 63.9 GB, and that gave the DBA a nagging sense of unease. She was concerned that something bad might happen when the index size hit 64 GB. For lots of mainframe DB2 people, 64 GB does ring a bell as a size limit for a non-partitioned object. Does that limit apply to a non-partitioned index?
Looking for the answer to that question, you could check the "Limits in DB2 for z/OS" section of the appendix in the SQL Reference (this manual and others in the DB2 for z/OS library are available online at https://www-304.ibm.com/support/docview.wss?uid=swg27019288). There, in the "DB2 system limits" table, you'd see how large a table space can be, but you wouldn't find information on index size limits. Hmmmm....
Time to end the suspense. The aforementioned DBA can relax (and I told her so), because the non-partitioned index in question had been defined with a piece size of 64 GB, and a non-partitioned index can have, of course, more than one piece (i.e., more than one data set). In fact, any index (partitioned or non-partitioned) can have more than one data set in its page set (the data sets that comprise a DB2 table space or index space are collectively known as a page set). So, the maximum size of an index would depend on two factors: 1) the maximum number of data sets that can be in the index's page set, and 2) the maximum size of a data set in the index's page set.
Both of those index-max-size factors have some dependency on the nature of the table space holding the table on which an index is defined. If that table space is non-partitioned (i.e., a simple table space or a segmented table space that is not a universal table space), the index can have up to 32 data sets in its page set, and each of those data sets can be up to 2 GB in size -- so, in that case the maximum size of the index would be 64 GB (32 data sets times 2 GB per data set), UNLESS the index was defined with a PIECESIZE value smaller than 2G (if PIECESIZE 1G had been specified for the index, its maximum size would be 32 GB). Note that the default value for PIECESIZE is 2G if the index is defined on a table in a non-partitioned table space.
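As an illustration (the index and table names below are hypothetical), an index defined as follows on a table in a non-partitioned table space would top out at 32 data sets of 1 GB apiece, or 32 GB:

  CREATE INDEX IX_CUSTNAME
    ON CUSTOMER (LAST_NAME, FIRST_NAME)
    PIECESIZE 1G;   -- each piece (data set) of this index is capped at 1 GB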
When the table on which an index is defined is in a partitioned table space, the index's maximum size depends on 1) whether or not that table space was defined with a DSSIZE specification (or with the LARGE attribute), and 2) whether the index is partitioned or non-partitioned (syntactically speaking, DSSIZE is preferred over LARGE for specifying partition size). We'll consider non-partitioned indexes first. If LARGE or DSSIZE was not specified for the associated table space, a non-partitioned index can have 32 data sets in its page set, and each of these data sets can be up to 2 GB in size. That makes the index's maximum size 64 GB (as noted above for an index associated with a non-partitioned table space, an explicitly specified PIECESIZE value smaller than 2G would reduce the index's maximum size accordingly).
If the partitioned table space was defined with the LARGE or DSSIZE options, an associated non-partitioned index can have as many data sets as the table space can have partitions. How many is that? Well, it depends on the DSSIZE value (and a LARGE specification equates to a DSSIZE value of 4G) and the page size of the table space. Details on the relationship between DSSIZE, page size, and maximum number of partitions can be found in the CREATE TABLESPACE section of the SQL Reference, but suffice it to say that the maximum size of a non-partitioned index is 128 TB (same as the maximum size of a non-LOB table space). This maximum size could be reached in one of two ways: with a PIECESIZE of 32G and 4096 data sets, or a PIECESIZE of 64G and 2048 data sets (a 64G PIECESIZE specification is possible only if the DSSIZE of the associated table space is 64G or larger, and with a DSSIZE specification of 64G and a page size of 32 KB, a table space can have a maximum of 2048 partitions). If the non-partitioned index was defined without a PIECESIZE specification, it will have a default PIECESIZE as determined by the following formula:
Default PIECESIZE = MIN(x, 2^32 /(MIN(4096, 2^32 / (x / y))) * z)
Where:
x is the DSSIZE of the associated table space
y is the page size of the table space
z is the page size of the index
For example, if you created a non-partitioned index without a PIECESIZE specification, and the associated table space had a DSSIZE of 32G (the "x" value in the above formula) and a page size of 32 KB (the "y" value in the formula), and the page size of your index is 4 KB (the "z" value), the default PIECESIZE value would be 4G.
How about the max size of a partitioned index? Because that index will have one data set for each of the associated table space's partitions, the answer to the index size limit question will (as for non-partitioned indexes) depend on the maximum number of partitions for the table space and the size of each index partition. Here's how the maximum number of partitions for a table space breaks down:
- If the table space was defined without the LARGE or DSSIZE options, and with a NUMPARTS value of 64 or less, the maximum number of partitions is 64.
- If the table space was defined with the LARGE option, it can have up to 4096 partitions.
- If the table space was defined with a DSSIZE specification, the maximum number of partitions can be up to 4096, depending on the DSSIZE value and the table space's page size (information on maximum number of partitions for various combinations of DSSIZE and table space page size can be found in the CREATE TABLESPACE section of the SQL Reference).
So, the above maximum-number-of-partitions values indicate how many partitions a partitioned index can have (again, because you'll have one index partition per table space partition), but how large can each index partition be? You can't specify PIECESIZE for a partitioned index, but DB2 determines the index partition size via the formula referenced previously in this blog entry:
Index partition size = MIN(x, 2^32 /(MIN(4096, 2^32 / (x / y))) * z)
Where:
x is the DSSIZE of the associated table space
y is the page size of the table space
z is the page size of the index
To illustrate: if the table space's DSSIZE is 64G ("x" in the formula) and the table space's page size is 32 KB ("y" value) and the index's page size is 32 KB ("z" value), each index partition can be up to 64 GB in size, and the maximum size of the partitioned index is 128 TB (64 GB times 2048 partitions, which is the maximum number of partitions that a table space with a 64G DSSIZE and a 32 KB page size can have). In fact, 128 TB is as big as a partitioned index can get (an index could also reach this size if the underlying table space had a DSSIZE of 32G and a page size of 32 KB).
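For reference, a table space at that limit-reaching configuration might be defined along these lines (database, table space, and buffer pool names are made up; BP32K1 is one of the 32 KB-page buffer pools):

  CREATE TABLESPACE TSBIG IN DBBIG
    NUMPARTS 2048     -- 2048 partitions is the maximum for a 64G DSSIZE with 32 KB pages
    DSSIZE 64 G
    SEGSIZE 32
    BUFFERPOOL BP32K1;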
Can you use the above index partition-sizing formula when the partitioned table space associated with the index was defined without a DSSIZE specification? I believe that you can. For that purpose, note that a LARGE table space effectively has a DSSIZE of 4G. For a partitioned table space defined without the LARGE option and without a DSSIZE specification, the maximum size of a table space partition is:
- 4G, when NUMPARTS for the table space is between 1 and 16
- 2G, when NUMPARTS for the table space is between 17 and 32
- 1G, when NUMPARTS for the table space is between 33 and 64
- 4G, when NUMPARTS for the table space is between 65 and 254
You should now be able to respond to "How large can this index get to be?" questions. I hope that this information will be useful to you.
Thursday, September 15, 2011
DB2 for z/OS Version 10: Moving Objects from One Buffer Pool to Another
Over the past 15 years or so, I have written magazine articles, and presented, and blogged on the topic of DB2 for z/OS buffer pools and their impact on application performance (two of my more recent blog entries in this category were posted in September and October of last year). Often, the message boils down to this: bigger buffer pool configuration = good. That said, there are times when enlarging a DB2 buffer pool configuration is not feasible. There may already be some pressure on the memory resource of your z/OS LPAR (as indicated, in my book, by a demand paging rate in excess of 10 per second during busy processing times -- check your z/OS monitor). There could be an organizational impediment to making your buffer pool configuration larger, in that the approval process for such a system change takes so long as to make you reluctant to put in the request. Whatever the reason, when you need to (or want to) make do with the DB2 buffers you have, you can often improve application response time and throughput by moving table spaces and/or indexes from one buffer pool (a pool with a high rate of disk read I/Os per second) to another (the target pool presumably being one with a relatively low rate of read I/Os per second). There are other reasons, as well, for moving a database object to a different buffer pool: you might want to use a buffer pool to manage parallelism for dynamic queries (as mentioned in the blog entry I posted last week), or you might isolate an object in its own pool for monitoring purposes, or you might be changing an object's page size (more on this momentarily). Whatever your situation might be, I want to let you know via this blog entry that some of the rules pertaining to changing the buffer pool assignment for a table space or an index changed with DB2 10 for z/OS. I'll use the rest of this entry to describe these changes.
I'll start by informing you of one thing that hasn't changed: if your DB2 subsystem is running in standalone mode (i.e., it's not a member of a DB2 data sharing group), and the object being moved from one buffer pool to another is a table space, and the buffer pool to which the table space will be assigned has the same page size as the pool to which the table space is currently assigned, and there are no already-existing "pending" definition changes for the table space or any of its associated indexes (pending object definition changes were introduced with DB2 10 in new-function mode -- more on this later) then 1) the ALTER TABLESPACE statement with the BUFFERPOOL specification can be issued at any time, and 2) the change will take place when the table space's data sets are next opened after the ALTER TABLESPACE statement has been issued (a common means of closing and reopening a table space's data sets is to issue a -STOP DATABASE command for the table space, followed by a -START DATABASE command). This behavior is the same as that which you'd see in a DB2 9 environment (or in a DB2 V8 system).
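In statement-and-command form, that unchanged sequence looks like this (database, table space, and buffer pool names are hypothetical):

  ALTER TABLESPACE DBPROD1.TSORDERS BUFFERPOOL BP3;
  -STOP DATABASE(DBPROD1) SPACENAM(TSORDERS)    (closes the table space's data sets)
  -START DATABASE(DBPROD1) SPACENAM(TSORDERS)   (data sets are reopened, and the new pool assignment takes effect)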
OK, now for the DB2 10-introduced changes.
FOR TABLE SPACES AND INDEXES
More flexibility for DB2 data sharing sites. I like this change a lot. Prior to DB2 10, if one wanted to reassign a table space or an index to a different buffer pool, the target object had to be in the stopped state at the time of the issuance of the ALTER TABLESPACE or ALTER INDEX statement with the BUFFERPOOL specification. If you're a DBA, that's not so good from a convenience standpoint, as stopping a heavily-accessed table space or index may not be do-able in prime time, even if you issue the -STOP DATABASE command with the AT(COMMIT) option to utilize drain locking. You might have to do the -STOP at night and/or during a weekend. Yuck. Enter DB2 10 (in conversion mode, no less!), and that restriction is gone: you can issue the buffer pool reassigning ALTER TABLESPACE or ALTER INDEX statement at any time, even when DB2 is running in data sharing mode -- the target object does NOT have to be in the stopped state. The change will take effect, as before, when the object's data sets are next opened following the issuance of the ALTER statement.
FOR TABLE SPACES
Reassigning a table space to a buffer pool with a different page size. Before DB2 10, the only way to move a table space from a 4K buffer pool to (for example) an 8K pool (that is, to change the table space's page size) was to unload data from the table space, drop it, re-create it with the larger-page buffer pool specified, and re-load the previously unloaded data. Cumbersome. With DB2 10 (in new-function mode), you can effect this change for a universal table space (one that is not an XML table space) or a LOB table space via an ALTER TABLESPACE statement (specifying the buffer pool with the larger or smaller page size), and the change will take effect the next time the table space (in its entirety) is REORGed with SHRLEVEL CHANGE or REFERENCE (this kind of table space definition modification is known as a "pending" change -- something new introduced with DB2 10, and a continuation of the online schema change capabilities introduced with DB2 V8 and expanded with subsequent DB2 releases). By the way, DB2 10 (in new-function mode) also enables conversion of a simple, segmented, or traditional table-controlled range-partitioned table space to a universal table space via ALTER and REORG.
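A sketch of that DB2 10 page-size change for a hypothetical universal table space, moving from a 4 KB pool to an 8 KB pool (SHRLEVEL REFERENCE is shown for simplicity; SHRLEVEL CHANGE, with its additional requirements such as a mapping table, works as well):

  ALTER TABLESPACE DBPROD1.TSHIST BUFFERPOOL BP8K1;   (recorded as a pending definition change)
  REORG TABLESPACE DBPROD1.TSHIST SHRLEVEL REFERENCE  (materializes the pending change)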
FOR INDEXES
Pending versus immediate definitional changes. For indexes, the DB2 10 change pertaining to buffer pool reassignment has to do with timing -- specifically, with when the change actually takes effect. If the change is of the pending variety (new with DB2 10 in new-function mode), it will take effect when the index (in its entirety) is next reorganized via REORG INDEX with SHRLEVEL CHANGE (or via REORG TABLESPACE with SHRLEVEL CHANGE or REFERENCE -- and the REORG TABLESPACE route is required if there are also pending definition changes in existence for the table space associated with the index). Here is what would make an ALTER INDEX with a BUFFERPOOL specification a pending change (ALL of the following conditions would have to be true):
- The index's data sets have already been created.
- The index is defined on a table in a universal table space, or on an XML table or an auxiliary table associated with a base table that is in a universal table space.
- There are pending definition changes in existence for the index or the associated table space, or the buffer pool to which the index will be reassigned has a page size that is different from that of the index's current buffer pool (index page sizes other than 4K were introduced with DB2 9).
For a pending change, buffer pool reassignment takes place the next time the entire index is reorganized via REORG INDEX with SHRLEVEL CHANGE or REORG TABLESPACE with SHRLEVEL CHANGE or REFERENCE.
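A sketch of that pending-change path (hypothetical index name; assume the index is on a table in a universal table space, and the move to an 8K pool is what makes the change a pending one):

ALTER INDEX MYSCHEMA.IX1 BUFFERPOOL BP8K1;

REORG INDEX MYSCHEMA.IX1 SHRLEVEL CHANGE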
If the change is of the immediate variety, and the page size of the "to" buffer pool is the same as that of the "from" buffer pool, reassignment of the index to the pool specified in the ALTER INDEX statement will take place when the index's data sets are next opened after the issuance of ALTER INDEX.
If it's an immediate change and the "to" buffer pool has a page size that is different from that of the "from" buffer pool, the index is placed in REBUILD-pending status and the buffer pool reassignment takes place when the index is rebuilt.
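And a sketch of the immediate-change path when the page size differs (again, hypothetical names -- say, an index on a table in a plain old segmented table space, which doesn't qualify for a pending change):

ALTER INDEX MYSCHEMA.IX2 BUFFERPOOL BP8K2;

REBUILD INDEX (MYSCHEMA.IX2)

The ALTER places the index in REBUILD-pending status; the REBUILD INDEX job puts the new buffer pool assignment (and page size) into effect.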
There you have it. I hope that this information will be useful for your buffer pool reassignment changes in a DB2 10 environment (or for DB2 10 migration planning, if you're currently using DB2 9 or DB2 V8).
Wednesday, September 7, 2011
Managing DB2 for z/OS CPU Parallelism for Dynamic SQL
DB2 for z/OS query CPU parallelism (hereinafter referred to simply as "query parallelism"), introduced with DB2 Version 4 in the mid-1990s, enables DB2 to split a query into multiple tasks that can execute concurrently on a mainframe server. The primary benefit of query parallelism is improved performance from an elapsed time perspective (query parallelism tends to increase statement execution CPU time somewhat -- this because DB2 needs to evaluate a candidate statement to determine whether or not parallelism would improve the query's run time, and because DB2 has to manage a parallelized query's "child" tasks). That said, for locally-originating queries (i.e., those issued by programs running on the mainframe server on which the target DB2 subsystem is running), a sometimes equally-appealing benefit is redirection of work to the server's zIIP engines, if the server is configured with zIIPs (queries that come from remote DRDA requesters, processed through DB2's distributed data facility, already benefit, cost-wise, from substantial zIIP offload).
One matter that sometimes comes up when an organization is considering the use of query parallelism concerns management of the DB2 feature's scope-of-use. In other words, people may want some, but not all, queries executed in a given DB2 environment to be candidates for parallelism. This some-but-not-all approach makes sense in certain situations, particularly when there is a desire to utilize query parallelism in a system on which high-volume OLTP and/or batch applications are already running. Mainframe servers that handle such operational workloads often run at high levels of utilization during peak processing hours (sometimes 90% busy or more). On that type of system, the CPU cost of evaluation by DB2 of all queries for estimated parallelism performance benefit, and of management by DB2 of child tasks for queries that are parallelized, could push the server's utilization rate to a level that would negatively impact the performance of the existing operational applications. [Note: for a DB2 subsystem that is dedicated to business intelligence usage (an example being a data warehousing system), it might be appropriate to make all dynamic queries -- and dynamic SQL tends to dominate in BI environments -- candidates for parallelism by changing the value of the DB2 ZPARM parameter CDSSRDEF to ANY from its default value of 1.]
So, if you want some, but not all, of your queries to be candidates for parallelization, how do you go about putting that aim into action? For static SQL, it's easy: identify the program that issues the SELECTs that you'd like DB2 to parallelize, and rebind that package with a specification of DEGREE(ANY). For dynamic SQL, selective implementation of query parallelism might be a little less straightforward, depending on the particulars of the situation. If dynamic queries are issued by a program that an organization's developers have coded, making that program's queries candidates for parallelization would simply require that the program issue the SQL statement SET CURRENT DEGREE = 'ANY' prior to issuing the queries. In other cases, the SET CURRENT DEGREE mechanism may not be applicable. You might have a situation in which dynamic queries are generated and issued by an end-user data query tool, and that tool might not provide a means of setting the value of the CURRENT DEGREE special register to 'ANY'. Or, the dynamic queries you want DB2 to parallelize might be issued by a purchased application for which the vendor does not supply source code, thereby making it impossible to add a SET CURRENT DEGREE statement to a program (and even if you have the source code for a purchased application, modifying that source code could put you in violation of the vendor's support agreement).
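For reference, a minimal sketch of the two mechanisms (the collection, package, and schema names are hypothetical):

REBIND PACKAGE(MYCOLL.MYPROG) DEGREE(ANY)

SET CURRENT DEGREE = 'ANY';

The REBIND (a DSN subcommand) makes the static queries in package MYPROG candidates for parallelism; the SET statement, issued by a program ahead of its dynamic queries, does the same for those dynamic queries.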
What can you do if the SET CURRENT DEGREE = 'ANY' option is not available to you? Here, you have some options with respect to implementing limited-scope parallelism for dynamic queries (i.e., implementing parallelism for queries in a "some-but-not-all" fashion), as explained below. Note that for all three of the parallelism management approaches described below, the value of the CDSSRDEF parameter in ZPARM is assumed to be ANY (in other words, it's assumed that you've opened the door for all-dynamic-query parallelism on that subsystem, and you'll then take one or more steps to close that door for all but a portion of the dynamic query workload).
Limit query parallelism by table -- Suppose that you want to make dynamic queries targeting table ABC candidates for parallelism, while preventing parallelism for dynamic queries accessing other tables. This could be done pretty easily by assigning the table space associated with table ABC to its own buffer pool, and leaving the value of that pool's VPPSEQT parameter at the default of 50 (or specify a value greater than 50, if you want -- more on this momentarily). Set the value of VPPSEQT to zero for your other buffer pools, and parallelism for queries targeting tables in table spaces assigned to those pools will be disabled as a result.
Now, a bit more on VPPSEQT (short for Virtual Pool Parallel SEQuential Threshold): this is one of several DB2 buffer pool parameter specifications that can be altered for a particular pool via execution of an ALTER BUFFERPOOL command. The default value for VPPSEQT, as noted, is 50. This is actually a percentage of the VPSEQT (Virtual Pool SEQuential Threshold) value, which is 80 by default. That 80 is a percentage of the associated pool's buffers, so by default 50% X 80% = 40% of a pool's buffers are available to hold table and/or index pages that are read from disk as a result of parallel-mode query execution. If you change the VPPSEQT threshold for a pool to 60 and leave the VPSEQT threshold at 80 then, for that pool, 60% X 80% = 48% of the buffers will be available to hold pages read into memory in support of parallel-mode query processing. If VPPSEQT is set to zero, no buffers will be available to support parallel-mode query processing, so queries targeting tables in table spaces assigned to that pool will not be candidates for parallelization.
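Putting the two preceding paragraphs together, and assuming that table ABC's table space has been isolated in a pool named BP7 while other objects use (for example) BP2, the commands would look something like this:

-ALTER BUFFERPOOL(BP7) VPPSEQT(50)
-ALTER BUFFERPOOL(BP2) VPPSEQT(0)

The first command simply confirms the default parallel sequential threshold for the parallelism-enabled pool (you could specify a higher value if you like); the second disables parallelism for queries targeting objects assigned to BP2, and would be repeated for each of the other pools.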
Limit query parallelism by package, collection, etc. -- A lot of people think of DB2's Resource Limit Facility (RLF) only as a means of limiting the amount of CPU time that a dynamic SQL statement can accumulate in execution before it is terminated. That is indeed the most common usage of RLF, and it is actuated through the use of what's called an RLMT table (the name of this table is actually of the form DSNRLMTxx, with xx being a user-specified two-character alphanumeric value). You could alternatively activate an RLST table (DSNRLSTxx), which would allow you to (among other things) deactivate query parallelism for dynamic queries associated with workload components qualified by certain identifiers (options include authorization ID, collection name, and package name -- these correspond to columns of the RLST table). Remember, your specification of ANY for CDSSRDEF in ZPARM opened the door for parallelism candidacy to all dynamic queries -- through the RLST table (via the value '4' in the RLFFUNC column) you are closing that door for workloads OTHER than the one for which you want dynamic queries to be candidates for parallelism. Here's an interesting and easy-to-specify set-up: with one row in the RLST table, having '4' in column RLFFUNC and no values in any other columns, parallelism will be disabled for dynamic queries that are local to the DB2 subsystem (e.g., issued via SPUFI or QMF or batch jobs), while allowing parallelism for queries issued from remote DRDA requesters (this thanks to a blank value in the LUNAME column of the RLST table, which represents the local location). You can find more information about the RLST table in the DB2 Version 8 Administration Guide, the DB2 9 Performance Monitoring and Tuning Guide, or the DB2 10 Managing Performance manual.
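A sketch of that one-row set-up, assuming the RLST table was created as SYSIBM.DSNRLST01 (the qualifier and the two-character suffix will vary by installation):

INSERT INTO SYSIBM.DSNRLST01 (RLFFUNC) VALUES ('4');

-START RLIMIT ID=01

The unspecified columns of the inserted row default to blanks, which is what limits the parallelism disablement to locally-originating dynamic queries, and the -START RLIMIT command activates the resource limit table identified by the 01 suffix.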
Limit query parallelism by DB2 data sharing group member -- I really like this option, but it does, of course, require that you have DB2 running in data sharing mode on a Parallel Sysplex. The concept here is pretty simple: you take one or more of the members of the data sharing group, and you set the value of the ZPARM parameter CDSSRDEF to ANY on those subsystems (leaving the value of CDSSRDEF at 1 on the other members of the group -- and it's assumed that you'll run your high-volume OLTP and batch processes on the CDSSRDEF = 1 subsystems). If you have "local" programs that issue dynamic queries that you want DB2 to parallelize, run those on the subsystem(s) that has CDSSRDEF = ANY. If you have users and/or applications that issue dynamic queries that access host data via DRDA and DB2's distributed data facility, have these users and apps connect to the CDSSRDEF = ANY subsystem(s). For maximum availability as it pertains to programs and users issuing dynamic queries that are to be candidates for parallelism, it's best to have at least two member DB2 subsystems set up for parallelism-by-default (i.e., with CDSSRDEF = ANY). That way, if one is down due to a planned (e.g., for maintenance) or unplanned outage, the other member(s) can continue to parallelize dynamic queries as desired. This kind of configuration for high availability was made MUCH easier for DRDA requesters when DB2 for z/OS Version 8 introduced a member-subsetting capability. Setting this up would involve creating an alias location name for the DB2 members for which you have CDSSRDEF = ANY, assigning a TCP/IP port to that alias, and updating the DDF record in the respective members' bootstrap data sets with this information (done via DSNJU003, the change log inventory utility). Combine a location alias with the use of dynamic virtual IP addresses (DVIPAs) for DB2 members, and you'll have a high-availability configuration that is simple from the perspective of users and applications connecting to the CDSSRDEF = ANY DB2 subsystems (from a CONNECT TO point of view, there is no need to know the names of individual DB2 members associated with the location alias). All of this is described quite clearly in an excellent IBM "red book" titled DB2 9 for z/OS: Distributed Functions.
And there you have it. You can implement parallelism for dynamic queries in a limited fashion, and you have options when it comes to setting this up. Assess your situation, and go the route that makes sense for your organization.
Tuesday, August 30, 2011
DB2 for z/OS and SNA: Breaking the Ties that (Used to) Bind
I don't know if there are any industries in which things change as quickly (and constantly) as they do in the realm of information technology. Just a few years ago, clouds were what produced rain, streams were watercourses, and Ruby on Rails might have been a reference to your aunt taking the express train to New York City. Today, those are all IT terms. It can be kind of dizzying, keeping up with all this new stuff.
Fortunately (especially for us old-timers), even in IT there are those technology rocks that endure, providing a sense of stability amidst the swirl of change. Magnetic disk drives are still the dominant data storage media. Various server operating systems -- z/OS (descendant of MVS), UNIX variants, Linux, and Windows -- have stood the test of time. Relational database management systems are still very much mainstream, as challengers that have arisen over the years have faded away or become niche technologies. And then there's SNA, right? Good old Systems Network Architecture, that robust, feature-rich communications protocol that reliably gets data to and from mainframes. We'll always have SNA, won't we? Well, maybe not...
I remember when TCP/IP, the now-pervasive data communications protocol, was first getting a lot of attention in z/OS circles. It was the 1990s. Client-server computing was on the rise, and lots of organizations wanted their mainframes to be the servers on client-server configurations. With TCP/IP being the lingua franca of PCs and distributed systems servers, z/OS had to be able to support this protocol, and in time it did. Still, SNA remained heavily in use in mainframe environments, owing largely to the fact that TCP/IP, at the time, lacked some of the high-availability and diagnostic features of SNA -- features that were held in particularly high regard by the folks responsible for operating data centers that housed mainframe servers. Years passed, and TCP/IP became a much more robust communications protocol, to the extent that organizations could utilize TCP/IP instead of SNA without having to compromise on network robustness and availability. Thus, use of SNA, even in mainframe environments, steadily declined. But you still need SNA, and VTAM (the mainframe communications subsystem that enables the use of SNA), for some aspects of DB2 for z/OS processing, right?
WRONG. Starting with DB2 9 for z/OS running in New Function Mode (NFM), you can actually cut the cord between DB2 and VTAM (and SNA), and go 100% TCP/IP. Technically, this is accomplished by modifying the DDF record in the bootstrap data set (BSDS). How do you do this? The way you make any changes to the BSDS: via the change log inventory utility, also known as DSNJU003 (given what you can do with DSNJU003, it would be nice if it were rechristened as the "change BSDS" utility). Specifically, you submit a DSNJU003 job with a DDF statement. In that DDF statement, specify an IPNAME for the DB2 subsystem (or for the data sharing group, if the DB2 subsystem is a member of a group). IMPORTANT: do NOT add an IPNAME to the DDF record in the BSDS if you need your DB2 subsystem to process DRDA requests coming in over both TCP/IP and SNA. If you need to support both TCP/IP and SNA for DRDA requests, have an LUNAME in the DDF record in the BSDS and ensure that there is NOT an IPNAME in that record. Once the DDF record contains an IPNAME value, DB2 will not even try to activate its SNA communications support, even if some remote client sends a DRDA request to the subsystem using SNA. So, don't add an IPNAME to the DDF record in the BSDS until you are sure that you no longer need SNA support for a DB2 subsystem. Note that in a data sharing group, you could have one or more members supporting only TCP/IP communications, while other members support both TCP/IP and SNA. In that case, the IPNAME in the BSDS DDF record of the TCP/IP-only members must match the GENERIC LUNAME used by the members that are supporting both TCP/IP and SNA communications.
Once you have supplied an IPNAME for a subsystem in its BSDS DDF record (i.e., once you go to TCP/IP-only communications for DRDA requests), you can subsequently execute the DSNJU003 utility with the NOLUNAME option in the DDF statement. This will remove the LUNAME value from the DDF record in the BSDS (if you run DSNJU003 with NOLUNAME, no other DDF parameter specification is allowed -- NOLUNAME has to appear by itself in the DDF statement of the utility).
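Here's a sketch of what the two DSNJU003 steps might look like (the library and BSDS data set names, and the IPNAME value, are hypothetical, and the utility is run with the DB2 subsystem stopped):

//CHGBSDS  EXEC PGM=DSNJU003
//STEPLIB  DD  DSN=DSNA10.SDSNLOAD,DISP=SHR
//SYSUT1   DD  DSN=DSNC10.BSDS01,DISP=OLD
//SYSUT2   DD  DSN=DSNC10.BSDS02,DISP=OLD
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  DDF IPNAME=MYDB2IP
/*

A later run of the same job -- once you're certain that SNA support is no longer needed -- would contain just this in SYSIN:

  DDF NOLUNAME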
And there you have it: no LUNAME, no SNA, no association with VTAM. What'll change next? We'll see. At least the IBM logo still has those striped letters. Hold onto that, Big Blue!