Thursday, December 30, 2010
By now, you've probably seen or heard a good deal of information about DB2 10 for z/OS, which was announced and became generally available this past October. There is indeed a lot of big news associated with this latest release of IBM's mainframe relational database management system: reduced CPU costs, support for temporal data (tables with system and/or business time dimensions), a huge increase in the number of threads that can be concurrently active, a migration path to universal tablespaces, SQL Procedure Language enhancements (better performance, plus the ability to write user-defined functions using SQLPL), table-based data access control policies, and lots more (and I do mean "lots" -- check out the DB2 10 "What's New" manual). Among all the "wow" features of DB2 10 are some nice little nuggets of functionality that will positively impact system performance and administration. One of these -- and the subject of this blog entry -- concerns the KEYCARD option of the RUNSTATS utility.
Here's the story in a nutshell: with DB2 10, the KEYCARD option is no longer optional. That is to say, if you run the RUNSTATS utility with an INDEX specification in a DB2 10 environment, and indexes with more than two key columns are defined on the table (or tables) in the target tablespace, you will get the catalog statistics associated with KEYCARD, regardless of what you specify on the utility control statement. You can still put KEYCARD in the control statement, but the keyword will be ignored because it "is now built into the normal execution of the RUNSTATS INDEX utility and cannot be disabled." Trust me: this is a good thing. It's a change that leaders on the optimizer team in IBM's DB2 development organization have long lobbied for. I'll explain why, and I'll tell you what this means for you if you're not yet running DB2 10 (and that's most of you, as, again, DB2 10 has only been generally available for a couple of months).
First, a little background. KEYCARD (which is valid only in the context of an INDEX specification on a RUNSTATS utility control statement) goes way back. I'm not sure when it was introduced as a RUNSTATS option, but I think it might have been delivered with DB2 for z/OS Version 4 (mid-1990s). What does it do? Pretty simple: it causes RUNSTATS to determine the number of distinct values of combinations of an index's first "n" key columns, where "n" is greater than 1 and less than the total number of the index's key columns (I put it that way because the cardinality -- the number of distinct values -- of the index's first key column and of the full key is gathered anyway and placed in the FIRSTKEYCARDF and FULLKEYCARDF columns, respectively, of the SYSIBM.SYSINDEXES catalog table). The values obtained via the KEYCARD specification are placed in the SYSIBM.SYSCOLDIST table.
An example might be helpful here. Suppose you have a 10,000-row table containing data about customers in a particular country (one row per customer), and on that table you have an index with a key comprised of the columns STATE, CITY, and POSTAL_CODE. Suppose further that you have customers in fifty cities, with each city being in a different state (i.e., one city in each of 50 different states -- admittedly, this is a rather contrived example). Finally, assume that there are 200 different zip code values in the table, and that the duplicate values of STATE, CITY, and POSTAL_CODE are evenly spread across the table's rows (for information on non-uniform distribution of non-unique column values, check out an entry I posted last year on that topic in my old Catterall Consulting blog). Without KEYCARD specified, an execution of RUNSTATS will generate (as previously mentioned) FIRSTKEYCARDF (50) and FULLKEYCARDF (200) statistics for the index. With KEYCARD specified, RUNSTATS will also determine the number of distinct values of the index's first two columns: in this case, that's 50 -- the same as the FIRSTKEYCARDF value (if the index had 4 columns, KEYCARD would result in a determination of the number of distinct values of the combination of key columns 1, 2, and 3, as well as the number of distinct values of key columns 1 and 2).
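If you want to see the multi-column cardinality values that KEYCARD produces, you can query SYSCOLDIST directly. Here's a minimal sketch (the schema and table names are made up; in SYSCOLDIST, rows with a TYPE value of 'C' hold cardinality statistics, and NUMCOLUMNS indicates how many leading key columns the CARDF value covers):

SELECT NAME, NUMCOLUMNS, CARDF
FROM SYSIBM.SYSCOLDIST
WHERE TBOWNER = 'MYSCHEMA'
AND TBNAME = 'CUSTOMER'
AND TYPE = 'C'
ORDER BY NUMCOLUMNS;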
Why is getting this statistical data important? Two words: column correlation. See, if you use what is a fairly common "base" RUNSTATS control statement, namely, TABLE(ALL) INDEX(ALL), DB2 will gather cardinality stats for every column of every table in the target tablespace. It will know, then, that there are 50 distinct values in the STATE column and 50 distinct values in the CITY column. What DB2 doesn't know is that the STATE and CITY values are highly correlated (CITY value 'Los Angeles' is paired with 'California', but not with any other value in the STATE column). Not knowing this, DB2 assumes that the values in the CITY column are independent of STATE values. Consider a query with the compound predicate below:
WHERE STATE = 'California'
AND CITY = 'Los Angeles'
Absent column correlation data, DB2 will estimate the number of qualifying rows by multiplying the cardinality of the STATE column by the cardinality of the CITY column; so, the estimate will be (1/50) * (1/50) * 10,000 rows = 4 rows. In fact, because STATE and CITY values are very highly correlated, the number of qualifying rows will be (1/50) * 10,000 = 200 (again, I'm assuming a uniform distribution of duplicate STATE and CITY values in the table's rows). Lack of correlation stats for the STATE and CITY columns results in a filtering estimate that is off by a factor of 50. When a filtering estimate is way off from reality, the access path chosen by the DB2 optimizer for a query could be sub-optimal, and that path may cause the query to run a lot longer than it needs to. KEYCARD gives DB2 more column correlation information to use in estimating the number of rows qualified by a query's predicates, and better information leads to better access path choices (and better query performance, which of course is the bottom line).
So, why not have RUNSTATS gather correlation stats for all combinations of all columns in a table? That could be done, using COLGROUP specifications for groups of non-indexed columns, but the CPU and run-time costs of doing this would be very high unless the table had very few columns and not many rows (plus, coding all the COLGROUP specifications for a table with a lot of columns would be extremely tedious). In most cases, it just isn't a practical option. Making KEYCARD automatic in DB2 10 is a good idea because it generates data that has significant query optimization value (additional column correlation information for indexes with keys comprised of more than two columns) and does so at a low cost in terms of RUNSTATS CPU and elapsed time (this is because the column groups for which correlation data is obtained are "leading and contiguous" subsets of multi-column index keys, making the correlation data cheaply available by way of a table's indexes, in which key values are always strictly ordered).
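For those who haven't used COLGROUP, a RUNSTATS statement that gathers combined cardinality for a pair of non-indexed columns looks something like this (a sketch with made-up database, tablespace, table, and column names):

RUNSTATS TABLESPACE MYDB.CUSTTS
TABLE(MYSCHEMA.CUSTOMER)
COLGROUP(CREDIT_RATING, REGION)

Multiply that by every column combination of interest, for every table, and you can see why this approach doesn't scale.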
I'll conclude with a word to the wise: don't wait for DB2 10 to make KEYCARD automatic in your environment. Go ahead and add this option to your RUNSTATS INDEX specifications. If, for example, your basic RUNSTATS control statement includes
TABLE(ALL) INDEX(ALL)
I'd recommend changing that to
TABLE(ALL) INDEX(ALL) KEYCARD
You'll enrich your catalog stats at a low incremental cost, and you'll have taken a step towards smoothing the path to DB2 10, as you'll have in your catalog the correlation stats that will be automatically generated in the DB2 10 environment.
Thursday, December 2, 2010
REORG and DB2 for z/OS Partition-by-Growth Table Spaces
A DB2 DBA friend of mine recently told me of a problem he'd encountered in reorganizing a single partition of a partition-by-growth (PBG) table space: the online REORG job failed with an out-of-space condition pertaining to the partition's shadow data set. The topic of partition-level REORGs of PBG table spaces is one that I find interesting; thus, this post, in which I'll explain why my friend hit that out-of-space situation and how he resolved it.
I blogged a couple of times about PBG table spaces (introduced with DB2 9 for z/OS) while I was working as an independent DB2 consultant, in one entry comparing them with partition-by-range (PBR) table spaces and in another briefly mentioning them among several new physical database design options provided by DB2 9. One advantage of a PBG table space versus a traditional segmented table space (PBG table spaces are also segmented) is the ability to run utilities at a partition level (the exception to this rule being LOAD, which has to run at the table space level for a PBG table space). On the surface, the option of REORGing a single partition of a PBG table space looks pretty attractive, but there is a catch, and it does have to do with space, as my DBA friend discovered.
Here's the deal: by definition, a new partition for a PBG table space won't be allocated unless the table space's existing partitions are full. So, if you run an online REORG for, say, partition 4 of a ten-partition PBG table space, that partition is likely to be quite full (depending on delete activity, which might have created some "holes" in pages of the partition, and on the table's clustering key, which would influence placement of newly inserted rows -- more on that momentarily). If the table space was created with non-zero values for PCTFREE and/or FREEPAGE, the REORG utility will attempt to reestablish that free space, and if the partition's data rows and the reestablished free space will not fit in the partition's shadow data set, the REORG job will fail. That was the situation that my friend brought to my attention. What did he do? He resolved the problem by setting the value of a relatively new ZPARM parameter, REORG_IGNORE_FREESPACE (introduced last fall via the fix for APAR PK83397), to YES. When the value of REORG_IGNORE_FREESPACE is YES (the default is NO), REORG TABLESPACE will ignore PCTFREE and FREEPAGE values when reloading data rows into a PBG tablespace if either of the following is true:
- The utility is reorganizing a subset of the table space's partitions
- The table in the PBG table space has one or more LOB columns
If free space is not reestablished, the rows unloaded from a partition will almost certainly fit into the shadow data set during a REORG. [I say "almost" because for a table with varying-length rows (and rows in a compressed table space are varying-length even if there are no VARCHAR columns), I can imagine a very unusual scenario in which rows unloaded from a partition might not fit back in after being sorted in clustering key sequence, even with free space parameter values ignored. So, REORG_IGNORE_FREESPACE = YES should take the chance of an out-of-space condition on a REORG of a partition of a PBG table space way down, if not all the way to zero.]
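For reference, the kind of job that hit the out-of-space condition is a partition-level online REORG -- something along these lines (the tablespace and mapping table names are made up; SHRLEVEL CHANGE requires a previously created mapping table):

REORG TABLESPACE MYDB.PBGTS PART 4
SHRLEVEL CHANGE MAPPINGTABLE MYSCHEMA.REORG_MAP

With REORG_IGNORE_FREESPACE set to YES, that same statement reloads the partition's rows without reestablishing PCTFREE and FREEPAGE space.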
Now, you might be thinking, "Why is there even the possibility of this out-of-space condition when one partition (or a subset of partitions) of a PBG table space is being reorganized? Can't REORG take rows that won't fit back into the target partition after reestablishing free space and put them in other partitions that have unused space?" The fact of the matter is that REORG doesn't work that way -- it doesn't move rows from one PBG partition to another unless the whole PBG table space is being REORGed.
As a coda, I'll tell you that the DBA to whom I've referred repeatedly in this entry ended up asking whether it was even worth it to reorganize partitions of the table in question. That is a very good question to ask. It's easy to get into a REORG-if-it's-unclustered mindset, but in fact some tablespaces don't need to be REORGed. The one with which the DBA was dealing gets plenty of inserts but is rarely read, and when it is read, performance just has to be OK -- it doesn't have to be great. CPU resources in such a case can be saved by cutting way back on REORG frequency (if not eliminating REORGs altogether) and optimizing insert performance either by switching to a continuously-ascending clustering key or by altering the table with the APPEND YES option introduced with DB2 9 (this causes DB2 to ignore clustering when inserting or loading data into the table). Bottom line: don't REORG a table space (or partitions thereof) "just because." Make sure that the benefits of reorganizing an object justify the CPU (and disk space) used during REORG processing. While keeping an object well organized is usually a good idea, in some situations disorganization really isn't a big deal. Think it through, and act accordingly.
Tuesday, November 16, 2010
An Update on the LASTUSED DB2 Catalog Column
In a couple of entries that I posted to the blog I maintained while working as an independent DB2 consultant (one written this past summer, and the other in the spring of 2009), I mentioned the LASTUSED column of the DB2 for z/OS catalog table SYSIBM.SYSINDEXSPACESTATS. This column was introduced with DB2 9 for z/OS (indeed, the SYSINDEXSPACESTATS table itself was introduced with DB2 9 -- it's the moved-to-the-catalog version of the formerly user-defined INDEXSPACESTATS table, which holds index-related real-time statistics). LASTUSED provided -- finally -- an easy way to identify indexes that are just taking up space and which, therefore, should be candidates for being dropped (and potentially replaced with indexes that actually help query performance). You don't think that finding useless indexes was a challenging task before LASTUSED? It wasn't, if all of your SQL was static -- you'd just look at the SYSPACKDEP catalog table to see if any packages were dependent on a given index. That technique, however, didn't help in environments in which dynamic as well as static SQL statements were executed. How could you determine whether or not an index helped the performance of dynamic SQL statements? Believe me, this was no easy thing. LASTUSED changed that situation. It shows the last date on which an index was used by an SQL DML statement (a static or dynamic SELECT, FETCH or "searched" UPDATE or DELETE -- the latter term referring to UPDATE and DELETE statements containing one or more predicates) or for referential integrity constraint enforcement purposes (you wouldn't want to be searching for foreign key matches without an index on said foreign key). The column has a default value, and if you see that default value in a row for an index in SYSINDEXSPACESTATS, chances are that index is not boosting the performance of any SQL statements.
It's that default value that I want to talk about. See, a whole lot of mainframe DB2 people (including myself, up until about two days ago) think that the default value of the LASTUSED column is '0001-01-01' (i.e., January 1 of the year 1). Believe it or not, the default value of LASTUSED has NEVER been '0001-01-01'. It is, and always has been, NULL. So, if you're running DB2 9 in New Function Mode, and you do a SELECT on SYSIBM.SYSINDEXSPACESTATS, and you see '----------' in the LASTUSED column for an index (a string of dashes is the typical representation of the null value when you query the catalog using a tool such as SPUFI), that's an index that hasn't been used lately to performance-enhancing effect (but give it a little more time, and then do a little checking around before dropping the seemingly useless index, just to make sure that it really isn't needed).
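If you want to go looking for such indexes, a catalog query along these lines will surface the ones whose LASTUSED value is still the NULL default (a sketch -- I'm assuming here that DBNAME, CREATOR, and NAME are sufficient to identify the indexes in your shop):

SELECT DBNAME, CREATOR, NAME, PARTITION
FROM SYSIBM.SYSINDEXSPACESTATS
WHERE LASTUSED IS NULL;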
I'll close this entry with an interesting piece of related information: DB2 9.7 for Linux, UNIX, and Windows delivered the LASTUSED catalog column for users of that DB2 platform (it's a column of several catalog views, including SYSCAT.INDEXES). The default value of the column? '0001-01-01'. Gotta love it.
Friday, October 29, 2010
DB2 Dispatch from Vegas: IOD, Day Four
I only got to a couple of sessions on this, the last day of IBM's 2010 Information on Demand conference (held in Las Vegas), but they were good ones. Below are some of the highlights of each.
Back to the future: early experiences with temporal data processing in DB2 10 for z/OS. In this session, Shawn Gravelle, IT architect at State Farm (a large insurance company), talked about State Farm's work in testing the new temporal data support delivered by DB2 10. This feature enables DB2 to automatically manage data based on system time (basically, maintaining prior versions of a table's rows as well as the current version), and to facilitate the management of data with a business time perspective (referring to a time period during which data in a row is in effect -- a period that could be in the future). System time and business time can be combined for a table to provide bitemporal data management capabilities.
Among the items of information provided by Shawn during his session:
- State Farm has a LOT of data in their system. As in, more than 5000 terabytes on SAN storage. As in, more data than you'll find in the Library of Congress (for readers outside the US: that's a VERY big library).
- Temporal data support is largely about productivity. It doesn't allow you to do what you couldn't do before. Instead, it allows you to maintain row history and to account for business dates (again, that's "in effect as of such-and-such a date" information) with a lot less SQL -- in application code and/or triggers and/or stored procedures -- than before. That means that functionality can be implemented more quickly than before -- and that's good for business. [Note that temporal data support simplifies SQL used for data retrieval, as well SQL used to change data values.]
- Temporal data support can also be a performance-booster. Shawn mentioned that some processes ran quite a bit more quickly -- in one case, twice as fast -- when DB2 10 temporal data capabilities were used versus having the same functions handled by user code (implemented in the form of triggers and such).
- Implementing system-time support is easy. That's true whether the table in question is new or an existing object (a DDL sketch appears after this list).
- Implementing business-time support can be a more challenging endeavor. It's a more complicated concept. As mentioned above, a row could be inserted into a table with an "in effect" business date that's sometime in the future. The row is in one sense current (it's the most recent version, from a system perspective), but in a business sense, it isn't (in other words, the information in the row is not yet active with respect to a policy or a promotion or whatever it represents). Business time can have implications for logical and physical database design, primary keys, and the organization's data archive strategy. With all this said, business time is a powerful capability -- you just want to put plenty of thought into using it. Another point: business time can be implemented for existing tables, but it can be easier to work it into a new application database design.
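To give a flavor of the system-time set-up that Shawn characterized as easy, here is a minimal sketch of the DB2 10 DDL pattern (table and column names are made up): a base table with the system-time period columns, a history table with matching columns, and an ALTER that ties the two together.

CREATE TABLE POLICY
(POLICY_ID INTEGER NOT NULL,
PREMIUM DECIMAL(9,2),
SYS_START TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
SYS_END TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
TRANS_ID TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
PERIOD SYSTEM_TIME (SYS_START, SYS_END));

CREATE TABLE POLICY_HIST
(POLICY_ID INTEGER NOT NULL,
PREMIUM DECIMAL(9,2),
SYS_START TIMESTAMP(12) NOT NULL,
SYS_END TIMESTAMP(12) NOT NULL,
TRANS_ID TIMESTAMP(12));

ALTER TABLE POLICY ADD VERSIONING USE HISTORY TABLE POLICY_HIST;

Once versioning is in effect, a query can ask for a row as it looked at a point in the past, along the lines of:

SELECT POLICY_ID, PREMIUM
FROM POLICY
FOR SYSTEM_TIME AS OF TIMESTAMP('2010-06-30-00.00.00.000000')
WHERE POLICY_ID = 12345;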
The IBM Smart Analytics Optimizer query engine. This session was delivered by IBM's Vijayshankar Raman. The Smart Analytics Optimizer (SAO) is, as Vijay put it, "a network-attached accelerator for DB2 for z/OS." What does it accelerate? Queries -- particularly OLAP-type queries (referring to online analytical processing) that involve aggregation and maybe roll-ups, and which scan a lot of data. It attaches to a mainframe system running DB2 for z/OS. A DB2 DBA defines a mart (a portion of the larger DB2 database), tables in the mart are copied to the SAO, and the accelerator does the rest.
Some of the points made by Vijay during his session:
- The SAO can speed the execution of many queries by 10 to 100 times. This thing really screams. One of the keys to the SAO's ability to majorly accelerate "big query" execution is the fact that it operates on compressed data. I'm not talking about compressed as in DB2 compressed tablespaces -- rows from tables in such tablespaces are decompressed for query processing. Vijay noted that in the SAO, "most operations are performed on encoded [i.e., compressed] values." And the degree of compression achieved is enormous -- generally in the range of 8 to 40 times with respect to space reduction.
- The SAO delivers consistent response times for complex, large-scale queries. Most queries directed to the system (and that's an automatic routing) will complete in 10-20 seconds.
- The SAO is disruptive technology that is not disruptive to the DB2 query environment. In putting the SAO to use, there will often be no need to change SQL or applications currently running on the mainframe DB2 system.
- Lots of engines, lots of memory. The SAO hardware is a set of blades installed in IBM blade centers. There can be up to 56 blades in one SAO, and each blade has 48 GB of main memory. In-memory processing (the SAO is a "cache-aware" system) is another part of the SAO performance picture.
- The star schema angle: Generally speaking, the tables in the data mart copied to the SAO will be arranged in a so-called star schema. If the dimension tables in a star schema are particularly large relative to the associated fact table, query acceleration may be a little less dramatic than it would be for a star schema with dimension tables that are relatively small compared to the fact table.
- Not all queries are supported. In many cases, well over 50% of the queries targeting the tables in a mart copied to the SAO will be supported for execution on the SAO (others would be executed by DB2 on the mainframe -- recall that the mart is copied to the SAO, as opposed to being relocated to the SAO). IBM has a tool that can be run on a mainframe DB2 system to estimate the percentage of existing queries that could execute on an SAO.
- The SAO does its thing without the need for a "performance layer" over the tables in the mart. By "performance layer," Vijay meant indexes and materialized query tables. SAO just scans data in tables, but it does it very smartly (using techniques such as partition elimination and simultaneous application of all of a query's predicates), and in a highly parallel, in-memory-leveraging way. The SAO will rewrite a join, by the way, as multiple scans.
I'm certainly hoping to get the chance to work with a Smart Analytics Optimizer myself in the near future. And, I'm looking forward to next year's Information on Demand conference. I hope that a lot of you will be able to make it to that event.
Wednesday, October 27, 2010
DB2 Dispatch from Vegas: IOD, Day Three
Today was a little different for me versus days one and two of IBM's Information on Demand conference in Las Vegas. I attended only one session -- the one in which I presented. Aside from that, my time was spent in meetings and in the Expo Hall. Tomorrow (the last day of the conference) it'll be back to regular session attendance for me.
This entry will be a little on the brief side. I'll provide some of the key points I made in my session, and I'll share information from a meeting I had that may be of interest to people. The overall theme of today's activity for me was business intelligence.
Mainframe DB2 data warehousing. I started my session by going over five factors that I believe are behind the increased incidence of organizations running data warehouses on a DB2 for z/OS platform:
- Business intelligence applications have become mission-critical. Companies rely more and more on their data warehouse systems to fuel their decision making. These systems have to be highly available, and System z and z/OS set the standard for up time (the mainframe availability story is more impressive still when several systems are configured in a DB2 data sharing group on a Parallel Sysplex cluster).
- "Operational" BI is on the rise. Here I'm referring to a data warehouse workload that is a mix of complex, long-running queries and simpler data requests that users expect to complete in a few seconds. Effectively managing performance for such a mixed workload is a challenge on some platforms, but not on the mainframe. The workload management capabilities of the z/OS operating system are second to none.
- Data warehouses are being updated more frequently. The old pattern of query by day, ETL (extract/transform/load) by night no longer holds true at a number of sites. To an increasing extent, data warehouse users want source data changes to be more quickly reflected in the data warehouse database (sometimes the requirement is for data warehouse updates to be made in close to real time with respect to source data changes). When more-frequent updating is required, it can make sense to locate the data warehouse closer to the primary source system, and very often that primary source system is a mainframe.
- Highly robust data access controls are a must. Data warehouses generally contain a lot of very sensitive data, and protecting this data from unauthorized access is an imperative. Take the security features of z/OS, the advanced capabilities of the RACF security management subsystem (or an equivalent third-party product), and DB2's own controls (made more comprehensive in DB2 9 with roles and trusted contexts, and with the new authority levels introduced with DB2 10), and put them together, and you have a platform that stands out from the crowd in terms of its data lock-down capabilities.
- Mainframe people and processes complete the picture. Generally speaking, at sites around the world, the people who support mainframe systems are a highly experienced, "been there / done that" crowd. The processes they've developed over the years around change management, capacity planning, performance monitoring and tuning, and business continuity are rock-solid. Companies trust their mainframe support staffs with their most critical applications and databases, and that's driving BI work to the System z platform.
I then proceeded to a discussion of threes, covering my three favorite features of DB2 9 for z/OS (the release that most sites are running) in the areas of data warehouse performance, data warehouse cost efficiency, and data warehouse functionality. The performance list is as follows:
- Global query optimization. This refers to DB2's ability to do some major transformation, under the covers, of queries that contain subquery predicates. To boost a query's performance, DB2 might change a correlated subquery to a non-correlated subquery (or vice versa). DB2 can also treat the result set of a subquery as a virtual table and join it to an "actual" table in the query's FROM list. The result: potentially large performance improvements with no need to alter the query's SQL code.
- Indexes on column expressions. It used to be that a predicate containing a column expression, such as WHERE UPPER(LAST_NAME) = 'JOHNSON' or WHERE SALARY + BONUS < 100000, was non-indexable. DB2 9 enables the creation of indexes on such expressions, resulting in sometimes-huge reductions in query run times (see the DDL sketch following this list).
- Smarter sequential prefetch. Much of the prefetch activity in a DB2 9 environment shifts from "pure" sequential prefetch (now used only for tablespace scans) to dynamic sequential prefetch. Dynamic prefetch can be used for both forward and backward scans (pure sequential prefetch is forward-only), and it tends to make better use of buffer pool resources. With more dynamic prefetch, queries can be expected to run in less time and to consume less in the way of CPU resources.
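To make the index-on-expression item concrete, DDL for indexes supporting the two example predicates might look something like the following (table and index names are made up, and the locale argument on UPPER is shown explicitly -- treat this as a sketch rather than tested syntax):

CREATE INDEX XEMP_UPLN ON EMP (UPPER(LAST_NAME, 'En_US'));
CREATE INDEX XEMP_TOTCOMP ON EMP (SALARY + BONUS);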
My favorite DB2 9 features for data warehouse cost-efficiency are:
- Universal tablespaces. Compared to the space map pages in a traditional partitioned tablespace, the space map pages in a universal tablespace (UTS) provide more information about "holes" in data pages that can accommodate to-be-inserted rows that are of varying length (and rows in compressed tablespaces are varying-length, regardless of whether or not they contain any VARCHAR columns) and which need to be inserted "in the middle" of the target table (versus at the end) to maintain clustering sequence. Because of this additional information about space in data pages, insert processes can execute with reduced CPU cost when populating tables in universal tablespaces.
- Index compression. At some sites, indexes take up more disk space than tables. DB2 9 index compression can reduce disk space requirements for an index by 50-70%, with relatively little cost in the way of CPU overhead (often in the low single digits of percent). A sketch of the DDL appears after this list.
- Unused index detection. Dynamic SQL dominates in most DB2 for z/OS data warehouse systems, and it used to be very difficult to identify indexes that are not being used to improve the performance of any queries. DB2 9 introduced a new column, LASTUSED, in the real-time statistics table SYSIBM.SYSINDEXSPACESTATS. This column shows the last time that an index was used by a SELECT or FETCH statement, or by an UPDATE or a DELETE statement with predicates, or to check on a referential integrity constraint. Indexes that have not been used for any such purpose will have the default value (which, as noted in the November 16 entry above, is NULL) in the LASTUSED column. These are indexes that can probably be dropped to reduce disk space consumption and lower CPU costs (unneeded indexes drive up the cost of data change operations).
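On the index compression item: compression is enabled at the index level, and the index has to be assigned to a buffer pool with a page size larger than 4K (8K, 16K, or 32K). A minimal sketch, with a made-up index name:

ALTER INDEX MYSCHEMA.XORDER01 COMPRESS YES;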
And, my top-three favorite DB2 9 data warehouse functionality features:
- INSTEAD OF triggers. Many views, such as those on multi-table joins, are read-only. INSTEAD OF triggers can be used to enable updates through such views, allowing developers to reference the same object for both data retrieval and data change operations.
- INTERSECT and EXCEPT. These two new set operators (added to the UNION operator, which has been around for a long time) make it much easier to get result sets based on the comparison of two different query result sets. INTERSECT instructs DB2 to retrieve rows from result set A that also appear in result set B. EXCEPT can be used to get rows from result set A that do not appear in result set B.
- RANK, DENSE_RANK, and ROW_NUMBER. These so-called OLAP functions make it easy to assign numeric values to result set rows based on user-specified ordering criteria (which could be quite different from the ORDER BY sequence specified at the "end" of a query). Consider a set of rows to be numbered based on descending unit sales amounts. RANK could have gaps in the number values if there are "ties" (if row one has a unit sales figure of 1000 and both rows two and three have a unit sales figure of 900, rows two and three will both have a RANK of 2 and row four, with a unit sales figure of 800, will have a RANK of 4). DENSE_RANK leaves no gaps in the value of assigned numbers (given the example in the previous parenthetical phrase, rows two and three would have a DENSE_RANK value of 2 and row four would have a DENSE_RANK value of 3). ROW_NUMBER assigns numbers in sequence with no gaps and no repeats (so rows two and three as previously described will get ROW_NUMBER values of 2 and 3, respectively, even though they both have the same unit sales value of 900).
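A quick illustration of the numbering behavior just described, using a made-up table and columns:

SELECT PRODUCT_ID,
UNIT_SALES,
RANK() OVER (ORDER BY UNIT_SALES DESC) AS SALES_RANK,
DENSE_RANK() OVER (ORDER BY UNIT_SALES DESC) AS SALES_DENSE_RANK,
ROW_NUMBER() OVER (ORDER BY UNIT_SALES DESC) AS SALES_ROW_NUM
FROM SALES_SUMMARY;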
I concluded my presentation with some performance monitoring and tuning hints and tips:
- Make DB2 buffer pools big if you have a server with a lot of memory. This may sound like a statement of the obvious, but a lot of folks have pretty small buffer pools in spite of having DB2 running on a mainframe server with tens of gigabytes of memory. I blogged on buffer pool sizing a few weeks ago.
- Page-fix the buffers in heavily used buffer pools. This move can deliver some nice CPU savings. I blogged on this topic earlier this year (a command sketch appears after this list).
- Take advantage of tools for query performance analysis. I'm a big fan of IBM's free and downloadable Optimization Service Center for DB2 for z/OS.
- Try executing a long-running, complex query in "chunks" to isolate a performance problem. Suppose that a query with a number of local (i.e., non-join) subquery predicates runs in 30 minutes. Try executing the query minus one of these subquery predicates. Then try it with that predicate back in the picture but without another of the subquery predicates. Keep doing this to see if the query's run time drops significantly after one of the subquery predicate has been removed. If that happens, focus your tuning efforts on that subquery predicate.
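For the first two items, the relevant command is -ALTER BUFFERPOOL. A sketch (the pool name and size are made up; note that VPSIZE is expressed in buffers, not bytes, and that a PGFIX change takes effect the next time the pool is allocated):

-ALTER BUFFERPOOL(BP2) VPSIZE(200000) PGFIX(YES)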
The strength of the System z platform, the convenience and simplicity of a BI "appliance." The key meeting on my schedule today involved a discussion of IBM's Smart Analytics System 9600. Demand is growing for preconfigured, optimized, full-function systems that accelerate data warehouse implementations by providing an end-to-end solution "out of the box." The Smart Analytics System 9600 brings this popular approach to the super-scalable, super-secure, super-highly-available System z platform. Everything you need to get productive with business intelligence -- the mainframe server, the z/OS operating system, DB2 for z/OS, InfoSphere Warehouse for DB2 for z/OS (providing, among other things, cubing services), Cognos BI software for advanced analytics, and implementation services -- comes with the System 9600, all at a price that is substantially lower than what you'd pay to get the components separately. Check it out -- it could be a good fit for your organization.
DB2 Dispatch from Vegas: IOD, Day Two
Greetings again from the 2010 IBM Information on Demand conference (IOD) in Las Vegas. Lots of good DB2 10 for z/OS stuff today. Some highlights of sessions I attended:
Getting the most out of native SQL procedures. This presentation was delivered by Pallavi Priyadarshini and Maryela Weihrauch of IBM's Silicon Valley Lab (the home of DB2 for z/OS). I have a major interest in the topic, having blogged about native SQL procedures a number of times while working as an independent DB2 consultant (SQL procedures are stored procedures written entirely in SQL, and native SQL procedures, introduced with DB2 9 for z/OS, have no external-to-DB2 executable -- they execute as packages in the DB2 database services address space, aka DBM1):
- http://catterallconsulting.blogspot.com/2007/12/what-are-your-plans-for-db2-for-zos.html
- http://catterallconsulting.blogspot.com/2008/11/db2-9-for-zos-stored-procedure-game.html
- http://catterallconsulting.blogspot.com/2009/02/another-advantage-of-db2-for-zos-v9.html
- http://catterallconsulting.blogspot.com/2009/06/db2-stored-procedures-cobol-and-result.html
- http://catterallconsulting.blogspot.com/2009/07/migrating-db2-9-for-zos-native-sql.html
- http://catterallconsulting.blogspot.com/2009/08/db2-for-zos-stored-procedures-and-large.html
- http://catterallconsulting.blogspot.com/2010/04/using-db2-stored-procedures-part-1.html
- http://catterallconsulting.blogspot.com/2010/04/using-db2-stored-procedures-part-2.html
- http://catterallconsulting.blogspot.com/2010/04/using-db2-stored-procedures-part-3.html
Among the many excellent nuggets of information delivered in Pallavi and Maryela's IOD session are the following:
- How does the package of a native SQL procedure differ from that of an external stored procedure? The difference is basically the part that contains the SQL control language statements (these statements -- examples being IF, LOOP, and ITERATE -- provide logic flow control). These go into a part of the stored procedure package called Section 1. Almost all (about 90%) of the Section 1 part of a native SQL procedure package goes above the 2GB bar in the DB2 DBM1 address space, so you shouldn't be concerned about the virtual storage impact of native SQL procedure utilization.
- Note that the value of ASUTIME (the limit on the mainframe service units that a stored procedure can consume before it is canceled) defaults to NO LIMIT. Specifying a value for ASUTIME, even if it is ten or more times greater than what the stored procedure is expected to consume, provides a means of keeping a stored procedure from getting into an infinite loop. It might be a good idea to specify an ASUTIME value if the procedure contains a loop operation (maybe via IF, or LOOP, or REPEAT) -- just in case a logic error causes DB2 to fail to exit the loop (a sketch appears after this list).
- External SQL procedures do not return unhandled warnings (i.e., warnings for which you don't have a condition handler) to the calling program. Native SQL procedures do.
- Syntax changes might be required when converting an external SQL procedure to a native SQL procedure. To help identify such changes, IBM has provided a HOST(SQLPL) precompiler that can be used to perform a checkout inspection for an SQL procedure definition, validating that the source is properly coded to conform with native SQL PL syntax and behaviors. This function is built into DB2 10, and it will be made available for DB2 9 via the fix for APAR PM13844 (expected to be available sometime in December). Per the APAR text, the HOST(SQLPL) checkout "is a recommended step to perform for all native SQL procedure development. It is a vital step to perform prior to any migration of an external SQL procedure to a native SQL procedure."
- When a stored procedure returns a result set to a caller, keep in mind that virtual storage will be used for that cursor and will not be released until commit. So, ensure that programs -- even read-only programs -- that fetch results from a cursor declared in a stored procedure issue a commit when done fetching rows (best practice is for the caller to also explicitly close the open cursor). Note that COMMIT ON RETURN in a stored procedure's definition is a way to ensure that a commit happens upon completion of the procedure's execution, but if you use that option for a stored procedure that returns a result set to the caller, ensure that the cursor opened in the stored procedure is declared WITH HOLD; otherwise, the automatic commit will close the cursor.
- With DB2 10, the XML data type is valid for stored procedure parameters and variables (prior to DB2 10, you had to use -- depending on the size of the XML document -- a VARCHAR or a LOB data type to work with an XML document in a stored procedure).
- DB2 10 enables the assignment of values to multiple parameters or variables in a stored procedure with a single SET statement. This "chained SET" capability can improve stored procedure performance.
- Testing by DB2 development of native SQL procedures in a DB2 10 for z/OS environment has shown improvements in CPU efficiency of 10-20% versus a DB2 9 system.
- SQL Procedure Language (aka SQLPL -- the SQL statements used to code a SQL stored procedure) can be used to create SQL user-defined functions (UDFs) in a DB2 10 environment. This is a very good thing, because the functionality of SQL UDFs is extremely limited in DB2 9 (and prior release) systems.
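To tie a couple of these items together (the ASUTIME limit and the DB2 10 chained SET), here is a minimal sketch of a native SQL procedure, with made-up names -- note that there is no EXTERNAL or FENCED specification, which is what makes it native, and that creating it through SPUFI would require a statement terminator other than the semicolon:

CREATE PROCEDURE MYSCHEMA.GET_CUST_STATUS
(IN P_CUST_ID INTEGER,
OUT P_STATUS CHAR(1),
OUT P_MSG VARCHAR(100))
LANGUAGE SQL
ASUTIME LIMIT 50000
BEGIN
DECLARE V_BALANCE DECIMAL(11,2) DEFAULT 0;
SELECT BALANCE INTO V_BALANCE
FROM MYSCHEMA.CUSTOMER
WHERE CUST_ID = P_CUST_ID;
IF V_BALANCE > 0 THEN
SET P_STATUS = 'A', P_MSG = 'Customer has an open balance';
ELSE
SET P_STATUS = 'I', P_MSG = 'No open balance';
END IF;
END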
New DB2 10 for z/OS security features to help satisfy your auditor. Long title, yes, but a very good session delivered by DB2 developers Jim Pickel and Gayathiti Chandran. Jim started out by telling attendees that a goal in developing the security-related functionality introduced with DB2 10 was to enable organizations to meet compliance requirements with little or no application code changes. Some highlights of the session:
- New authorities provided by DB2 10 facilitate minimization of the use of super-user authorization levels such as SYSADM. Specifically, the new System DBADM authority is likely to be used extensively in place of SYSADM authority. System DBADM authority can be granted with or without data access privileges, and with or without access control privileges (the latter referring to the ability to execute GRANT and REVOKE statements). Interestingly, System DBADM enables execution of most CREATE, ALTER, and DROP statements, but not those that would affect system or security objects. In addition, someone with System DBADM authority would need additional privileges to create objects such as views, functions, and triggers. That said, a System DBADM can execute what are called system-defined routines (such as stored procedures and UDFs), and routines created by someone with install SYSADM authority are considered to be system-defined routines. Thus, someone with a super-user authority can create administrative routines that could be executed -- but not dropped or modified -- by a person with System DBADM authority.
- Another new authority, EXPLAIN, allows a person to EXPLAIN SQL statements (dynamic statements or statements embedded in programs) without having the authority to execute those statements. The new SQLADM authority adds privileges to those available via the EXPLAIN authority. For example, someone with SQLADM authority could issue an EXPLAIN for the dynamic statement cache, again without having the privileges needed to execute those statements.
- Additionally, DB2 10 enables the separation of responsibility for managing access to restricted data from the owner of that data. A recommended best practice now is to have roles own objects, as opposed to having objects owned by individual authorization IDs. The CATMAINT utility can be used to switch ownership of an existing object to a role, and this can be done in such a way that programs with SQL statements referencing qualified object names do not have to be changed.
- Auditing capability has been significantly enhanced with DB2 10. Now, all dynamic access to data (versus just the first access with the DB2 9 audit functionality) and any use of privileged authority can be included in the audit trail. More good news: this enhanced auditing capability can be utilized without having to specify AUDIT in the to-be-audited object's DDL. Once an audit policy has been defined, it is "turned on" via a -START TRACE command that names the audit policy (DB2 will turn on, under the covers, the trace record types, or IFCIDs, that are needed to get the information requested in the policy -- that's good for ease of use). Audit information is recorded in SMF records. The DB2 utility DSN1SMFP can be used to print formatted SMF records in a report.
- Auditing of course involves some overhead, so it is recommended that audit policies be defined to track access to specific tables from specific programs. The fact that wildcarding can be used for schema and table names makes it easier to tailor audit policies.
- Besides tracking access to data, an audit policy can be set up to record each use of an authority or authorities.
- The temporal data capability built into DB2 10 (a first in the industry, by the way) makes it much easier to track changes to data records in DB2 tables (this by exploiting the new "system time" characteristic that can be built into a new table or easily added to an existing table).
- You know how passwords have been 8 characters in the mainframe world, like, forever? DB2 10 supports the "password phrase" functionality introduced with z/OS V1R10. A password phrase can be up to 100 characters in length.
- DB2 10 also supports the distributed identity functionality introduced with z/OS V1R11. This can make for a more-robust security set-up in a client-server environment.
- My favorite part of the presentation was the material on the new table-access control features of DB2 10 for z/OS. Why did I so enjoy hearing about this? Because I believe that table access controls could take the place of the view-based security set-up that is often found in a data warehouse system. I've run up against "security views" on more than one occasion, and they can be a big headache, with views on top of views on top of other views, creating a security scheme that is highly complex and which can sometimes get in the way of good query performance (and try digging through these layers of views to see how the associated predicates figure into the performance characteristics of a query that targets one of the top-level views). In a DB2 10 system, table controls can be implemented via DDL with no requirement for overlying views. The controls, which -- after being defined via new SQL statements CREATE PERMISSION (for row-access restrictions) and CREATE MASK (for column-access protection) -- are activated with an ALTER TABLE statement, can determine access to data based on any of several ID types, such as primary authorization ID, SQL ID, and role. The controls affect static and dynamic SQL statements, and they can be used to return masked data values to querying users (a mask is defined by way of an SQL case expression). If you have security views implemented in your environment today, read up on CREATE PERMISSION and CREATE MASK, and start thinking about how you might be able to eliminate your security views in favor of DB2 10 table controls.
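To make that concrete, here is a minimal sketch of the new DDL -- a row permission, a column mask, and the ALTER statements that activate the controls. The table, object, and RACF group names are made up:

CREATE PERMISSION HR.EMP_ROW_POLICY ON HR.EMPLOYEE
FOR ROWS WHERE VERIFY_GROUP_FOR_USER(SESSION_USER, 'PAYROLL') = 1
ENFORCED FOR ALL ACCESS
ENABLE;

CREATE MASK HR.EMP_SALARY_MASK ON HR.EMPLOYEE
FOR COLUMN SALARY RETURN
CASE WHEN VERIFY_GROUP_FOR_USER(SESSION_USER, 'PAYROLL') = 1
THEN SALARY
ELSE NULL
END
ENABLE;

ALTER TABLE HR.EMPLOYEE ACTIVATE ROW ACCESS CONTROL;
ALTER TABLE HR.EMPLOYEE ACTIVATE COLUMN ACCESS CONTROL;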
What's new in DB2 10 for z/OS for DBAs. Jim Teng, IBM Distinguished Engineer, longtime leading member of the DB2 development team, and a longtime friend, gave an excellent presentation on DB2 10 features that DBAs are going to like. These features include the following:
- More online schema changes. The concept of "deferred" changes is introduced with DB2 10: you issue an ALTER TABLESPACE or ALTER INDEX statement to specify a change, and that change is put into effect by a subsequent online REORG of the target object (a sketch appears after this list). In this way, you can change the page size, DSSIZE, or SEGSIZE of a universal tablespace (UTS); the page size of an index; and the MAXPARTITIONS value for a partitioned tablespace. You can also convert a single-table simple or segmented tablespace to a UTS. MEMBER CLUSTER can be implemented for a partitioned-by-range (PBR) or partitioned-by-growth (PBG) UTS, or for a "classic" partitioned tablespace.
- Note that a pending object change can be dropped via ALTER before being realized through a REORG -- kind of like an "I changed my mind" capability.
- One DB2 10 ALTER TABLESPACE or ALTER INDEX statement cannot contain both deferred and immediate actions; furthermore, many immediate-effect DDL statements cannot be executed while there is pending DDL waiting to be put into effect via online REORG.
- There are also some new immediate-effect online schema changes made possible by DB2 10. Among these changes is the addition of INCLUDE columns to an existing unique index key (though realizing the benefit of increased incidence of index-only access requires a REBUILD INDEX and, if we're talking about static SQL, a rebind). Also immediate is the change to HASH organization for a table in a UTS (though the table's rows can't be accessed by way of the hash key until the tablespace has been REORGed). Similarly, a change to inline LOBs for a tablespace is immediate, but existing LOB values that can be in-lined won't be until the tablespace is REORGed.
- With DB2 10, when a tablespace or index is altered to reassign it to a different buffer pool (with the same page size), that change takes effect immediately, with no need to stop and start the object (in a pre-DB2 10 data sharing environment, the object had to be stopped BEFORE the ALTER with the BUFFERPOOL specification was issued). Any application processes accessing the target object are quiesced with a DRAIN(ALL), and the new buffer pool is used following commit processing.
- When data is inserted into a table with multiple indexes (two or more if the table is in a MEMBER CLUSTER tablespace or is an APPEND table; otherwise, more than two), DB2 10 will update the table's indexes in parallel. For a table with quite a few indexes, this can have a big and positive effect on the performance of INSERT processes (some tests have shown an elapsed time improvement of 50% or more).
- DB2 10 will cache (i.e., remember the address in the buffer pool of) the root page of an index when the index is opened. This will eliminate one GETPAGE/release page operation for every subsequent access to the index (good for performance).
- DB2 10 can use list prefetch to efficiently retrieve entries from a disorganized index, thereby reducing the need to frequently reorganize indexes.
- A new DB2 10 trace record, IFCID 359, can be used to monitor the performance (including the elapsed time) of index page split operations.
- Avoidance of full buffer pool scans in a data sharing environment when, for example, a data set is closed, can significantly improve performance for large (i.e., 5 GB or more) buffer pools in a DB2 10 environment (and I'm talking here about the size of an individual pool, not the aggregate size of all pools in a configuration).
- A new buffer pool parameter setting, PGSTEAL(NO), can be used to "pin" a table in memory in a DB2 10 system. With PGSTEAL(NO) in effect, all of a table's pages will be read into memory, using prefetch engines, when the underlying data set is opened. Thereafter, prefetch is turned off for the table (so, make sure that the pool has enough buffers to hold the pages of objects assigned to it; otherwise, you could see a lot of random reads).
- The new "currently committed" locking option available with DB2 10 will enable an application to read previously committed data rows instead of waiting for inserting or deleting processes to release locks on pages or rows (this option can only be used with universal tablespaces). An application for which this option is in effect will still have to wait on locks held by data-updating processes.
- The auto-compress feature of DB2 10 enables a compression dictionary to be built for a tablespace based on INSERT activity, versus having to wait for a LOAD or a REORG. The dictionary is built by an asynchronous DB2 task. The number of row inserts needed to build a dictionary will vary -- it's based on real-time statistics information.
- DB2 10 supports the use of FlashCopy at the data set level.
- The catalog and directory objects in a DB2 10 system will be DB2- and SMS-managed (meaning, among other things, that they are STOGROUP-defined -- a ZPARM lets you specify a default STOGROUP to be used for this purpose).
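Going back to the first bullet in this list, here is a minimal sketch of the deferred-change flow. The database, tablespace, and mapping table names are made up, and the SEGSIZE value is just for illustration:

-- Stage the change; DB2 records it as a pending alteration:
ALTER TABLESPACE MYDB.MYTS SEGSIZE 64;

-- The pending change is materialized by a subsequent online REORG:
REORG TABLESPACE MYDB.MYTS SHRLEVEL CHANGE MAPPINGTABLE MYID.MAP_TBL

-- And, per the "changed my mind" bullet, a pending change can be discarded:
ALTER TABLESPACE MYDB.MYTS DROP PENDING CHANGES;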
DB2 10 for z/OS performance improvement. Akiko Hoshikawa joked that her husband tells her that her voice puts him to sleep, then proceeded to get and hold our attention with a lot of good information pertaining to performance improvements delivered by DB2 10. Some notes from the session:
- The virtual storage constraint relief provided by DB2 10 allows, among other things, a larger MAXKEEPD value (relevant for packages bound with KEEPDYNAMIC(YES)). This means more avoidance of "short prepares," and that means better performance. In one SAP test, CPU time was reduced by 26%, just by increasing the value of the MAXKEEPD parameter in ZPARM to 64,000 from 8,000.
- The 1 MB real storage page frame size provided by the z10 and z196 (zEnterprise) servers (backed by 256 contiguous 4K real storage frames) boosts performance by reducing the incidence of "misses" versus "hits" in the translation lookaside buffer (this buffer is used to speed up the translation of virtual storage addresses to real storage addresses). Observed CPU savings tend to be in the 1-4% range. Utilization of 1 MB page frames requires a change to LFAREA in IEASYSxx, followed by an IPL.
- Akiko pointed out that the z196 mainframe supports up to 3 TB of memory.
- In a DB2 10 system, RUNSTATS will collect KEYCARD data by default.
- Hash access to data -- a new DB2 10 feature that causes data rows in a table to be physically placed based on the hashing of a unique key value -- is great for single-row retrieval. The flip side? It can be pretty bad for sequential data retrieval. So, choose carefully when deciding on tables for which you want to use hash access.
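For reference, here's roughly what the DDL for a hash-organized table looks like (the table, column, and HASH SPACE value are hypothetical, and the table has to live in a universal tablespace):

CREATE TABLE ACCT_LOOKUP
  (ACCT_ID   BIGINT NOT NULL,
   ACCT_NAME VARCHAR(60))
  IN MYDB.MYUTS
  ORGANIZE BY HASH UNIQUE (ACCT_ID) HASH SPACE 64 M;

An existing table in a UTS can also be altered to hash organization, but rows won't be reachable via the hash key until the tablespace has been REORGed.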
Time to call it a day. Back with more tomorrow.
Tuesday, October 26, 2010
DB2 Dispatch from Vegas: IOD, Day One
The sun is starting to set in a typically cloudless sky in Nevada's southern desert, and day one of IBM's annual Information on Demand conference (aka IOD) is just about done. Herewith are a few notes on today's goings-on in Las Vegas. I'll blog at the conclusion of days 2, 3, and 4, as well, and I hope that you'll check out those posts. On now to the rundown:
There are a LOT of people here. The conference got over 10,000 registrants this year. The arena-like events center in which this morning's grand opening session was held was pretty well packed. Fajitas for 10,000 (excellent guacamole) was an impressive operation.
A lot of information was packed into a fast-paced, 90-minute opening session. The emcee kicked things off with a nugget from a pre-conference survey: respondents indicated that the number one business challenge facing their organization is the need to cut costs and increase profits. I see the "cut costs" imperative as a lingering effect of the Great Recession out of which we're (thankfully) climbing. The "increase profits" goal is, I hope, indicative of a return to top-line growth as a focus of senior managements' attention -- companies are shifting, I think, from "survive" mode to a "drive to thrive." An overarching theme of this year's IOD event is the leveraging of information and analytics to "gain insight and optimize results."
Robert LeBlanc, IBM's Senior VP of Middleware Software, led most of the remainder of the opening session. Robert noted that a recent IBM survey of CEOs around the world uncovered three primary focus areas: 1) creative leadership, 2) reinvention of customer relationships, and 3) building operational dexterity. Taking advantage of data assets can lead to success in these areas, but first an organization has to get its data house in order, and that can be a tall task in light of the ongoing -- and accelerating -- information explosion: LeBlanc mentioned that in 2009, there were about 800,000 petabytes of information in computer systems around the world (80% of it unstructured). It's expected that by 2020 that number will be around 35 zettabytes (a zettabyte is 1 sextillion bytes, or a billion terabytes). That's a 44X increase in 11 years. Is the growing pile of data in organizations' systems effectively managed today? In many cases, the answer to that question is "no": 72% of senior executives indicated through a survey that they can get information more easily from the Internet than from their own applications.
Analytics, said LeBlanc, is all about using information. Information management, the foundation for successful analytics, has much to do with data governance (about which I wrote in a recent article published in IBM Data Management Magazine) and the establishment of a "single version of the truth" with regard to questions that might be answered based on the organization's data assets. Put the two together, and you can get some impressive business results:
- Avis Europe realized a 50% decrease in marketing costs through better targeting.
- The State of New York has avoided $1.2 billion in questionable tax refunds since 2004 through the use of analytics.
- Newell Rubbermaid got business intelligence queries to run 30 to 50 times faster through reengineering of their decision support system.
Next up was Arvind Krishna, IBM's General Manager of Information Management Software. Arvind spoke of how organizations are using modern information management technology to:
- Reduce customer "churn" through enhanced customer understanding.
- Slash loan origination times through collaborative decisioning.
- Achieve tremendous ROI through improved supply chain visibility.
The opening session concluded with a panel discussion involving several IBM software executives, Arvind among them. Topics covered by the panel included:
- The challenge posed by the information explosion on the one hand, and the need for a single version of the truth on the other (successfully dealing with this challenge often involves analysis of an organization's "information supply chain").
- Workload-optimized systems (integrated hardware/software offerings that are optimized for a specific workload), which are delivering both improved performance and greater ease-of-use for a variety of companies.
- The need to get started on building an analytics capability ("it's not a spectator sport"), with the right choice for a pilot being something that matters to the business.
- The usefulness of an information agenda, the "external artifact" that can bring the IT and business parts of an organization together ("every organization has an IT strategy that is really an application agenda -- what's the information agenda?").
DB2 for z/OS trends and directions. Rick Bowers, IBM Director of DB2 for z/OS Development, led this session. He reminded attendees of the recent announcement of DB2 10 for z/OS (October 19), and the very soon thereafter general availability date for the product (IBM started taking orders for DB2 10 on October 22). Interest in this new release of DB2 is very high in the user community, driven largely by the 5-10% CPU cost reduction delivered by DB2 10 "right out of the box" (and requiring only a rebind of existing DB2 packages). Improved scalability is another key DB2 10 benefit, with the expectation being that organizations will generally see a five- to ten-times increase in the number of users that can be concurrently connected to, and active on, a DB2 10 subsystem (it was mentioned that the number of concurrent DB2-accessing SAP application users would likely be five- to six-times greater on a DB2 10 subsystem versus previous product releases).
Rick talked about the "skip-release" migration path that would enable an organization running DB2 for z/OS V8 to go directly to DB2 10, without having to migrate to DB2 9 in-between. He said that skip-release migration had been successfully tested during the DB2 10 Beta program, and he suggested that the decision by a DB2 V8-using organization as to whether or not to go this route would depend on the organization's risk profile. "Early adopter" companies might opt to go straight from DB2 V8 to DB2 10, while late adopters would more likely migrate first to DB2 9 and then later to DB2 10.
Rick pointed out that 22 of the 25 DB2 10 Beta customers are planning to migrate DB2 10 to production (or at least to get the production migration process underway) during the 2011 calendar year.
Paul Gatan, Director of Systems Support at DST Systems, an early evaluator of DB2 10, joined Rick to talk about his company's DB2 10 experiences. DST has a large and very active (particularly from a data-change perspective) DB2 for z/OS database. One of the tables in that database has 61 billion rows and holds 9.5 terabytes of data. Paul highlighted the following in speaking of DST Systems' work with DB2 10:
- They really liked the "full" 64-bit addressing capability of DB2 10 (64-bit addressing, introduced for DB2 for z/OS with Version 8 of the product, was further exploited with DB2 9 and even more fully utilized with DB2 10).
- Use of the new z/OS 1 MB page size (DB2 page sizes remain as they were in previous releases -- DB2 10 maps multiple of its own pages to a single 1 MB z/OS page frame) resulted in a 4% reduction in CPU utilization, on top of the 5-10% CPU savings otherwise seen by DST, for an insert-heavy application process.
- CPU consumption by one program dropped 60% after a rebind in the DB2 10 environment -- that was a surprise, and an "it depends" result that can't necessarily be predicted.
- The restructured DB2 catalog (no links, universal tablespaces, row-level locking) greatly improved bind concurrency for DST.
- DST liked the new security options, including System DBADM, which provides most of the capabilities of the SYSADM authority and can be granted without data access.
- The new IFCID 401 trace record provides useful statement-level performance data for static SQL statements.
DB2 10 for z/OS technical overview. Jeff Josten, an IBM Distinguished Engineer and a long-time member of the DB2 for z/OS development organization, delivered this presentation. Some highlights:
- The use of z/OS 1 MB pages requires a z10 or a zEnterprise server.
- The DB2 10 Beta program was the largest ever for a new DB2 release.
- DB2 10 introduces High Performance DBATs (database access threads). Conceptually similar to CICS-DB2 protected entry threads, High Performance DBATs should be particularly beneficial for high-volume OLTP applications executed through DDF. They can be used by packages bound with RELEASE(DEALLOCATE) for improved CPU efficiency (formerly, only the RELEASE(COMMIT) bind option was honored for packages executed via DBATs). High Performance DBATs are released (for "clean-up" purposes) after every 200 uses.
- Some INSERT jobs could see a 30-40% reduction in CPU consumption in a DB2 10 environment.
- The movement of more thread-related storage above the 2 GB "bar" in the DB2 10 database services address space enables greater use of the RELEASE(DEALLOCATE) option of BIND (with DB2 10, only the working storage associated with a thread, and some of the thread/stack storage, goes below the 2 GB bar). This, in turn, can reduce the DB2 CPU time for an application process by 10-20%.
- DB2 10 provides a new option that allows data readers to access the previously committed version of a row, rather than waiting for an inserting or deleting process to commit.
- LRSN "spin avoidance," introduced with DB2 9 to reduce bottlenecking related to generation of log record sequence numbers in a data sharing environment, is further exploited in a DB2 10 data sharing group.
- The new hash access capability enables one-GETPAGE direct access to a data row based on the hash of a key value (versus the multiple GETPAGEs needed to navigate through an index tree). Hash access might not be a good choice when program performance depends on data clustering, as it randomizes data row placement in a table.
- DB2 10 uses 64-bit common storage to avoid constraints related to the use of below-the-bar extended common storage.
- Data access for large (greater than 3-4 GB) buffer pools is more efficient in a DB2 10 environment.
- More database schema changes can be effected online through ALTER (these become "pending alters" which are put into effect by an online REORG). Included among such alters are changes in page size, DSSIZE, and SEGSIZE, and a change in tablespace type to universal tablespace.
- DDF location alias names can also be changed online.
- Active log data sets can be dynamically added to a configuration via the DB2 command -SET LOG NEWLOG.
- The catalog and directory objects can be reorganized with SHRLEVEL(CHANGE).
- The new FORCE option of REORG can be used to cancel threads that are preventing the data set switch phase of online REORG from completing.
- The BACKOUT YES option of RECOVER can speed point-in-time object recovery if the recovery point is a little before one image copy and way after another.
- DBADM authority can be granted for all databases in a DB2 subsystem, removing the requirement that the authority be granted on a database-by-database basis.
- New stored procedures are supplied to enable automatic generation of statistics by the DB2 RUNSTATS utility.
- Instead of having to specify one subsystem-wide value for the maximum number of active distributed threads (MAXDBAT), this limit can be specified in a more granular way (e.g., on an application basis).
- Timestamp accuracy can go to the picosecond level in a DB2 10 environment.
- XML schema validation is done "in the engine" in a DB2 10 system.
- New moving sum and moving average functions can enhance OLAP capabilities.
- Roger started out by calling DB2 10 "the hottest release we've ever had."
- Dil noted a major performance improvement (30%) seen when a DB2 stored procedure was converted from a C program to a native SQL procedure (the performance of SQL procedure language, aka SQLPL, is improved with DB2 10 -- realizing this improvement for an existing native SQL procedure will require a rebind of the associated DB2 package).
- Dil also reported that a major improvement in DB2 elapsed time (50%) was seen for a certain application that Morgan Stanley tested with DB2 10. Lock/latch suspend, page latch suspend, and log I/O suspend times for the application all decreased markedly in the DB2 10 environment.
- Morgan Stanley also saw a 15% improvement in elapsed time for concurrently executed LOAD REPLACE jobs in the DB2 10 system.
- Terrie mentioned that some high-volume insert programs could consume up to 40% less CPU in a DB2 10 environment versus DB2 9. Certain complex queries could consume 60% less CPU time with DB2 10.
- Roger recommended that people check out chapter 4 of the "Packages Revisited" IBM redbook for information on converting DBRMs bound directly into plans to packages (DB2 10 does not support DBRMs bound directly into plans).
- The DB2 10 catalog and directory objects must be SMS-controlled, with the extended addressability (EA) attribute (the tablespaces will be universal with a DSSIZE of 64 GB).
- Single-row result sets can be more efficiently retrieved with DB2 10, thanks to OPEN/FETCH/CLOSE chaining.
- With the improved performance of SQLPL, native SQL procedures could be even more CPU-efficient than COBOL stored procedures in a DB2 10 environment, and that's before factoring in the use of less-expensive zIIP engines by native SQL procedures called through DDF.
- DB2 10 provides support for "include" columns in an index key. These are columns that can be added to a unique key's columns (to get index-only access for more queries) without affecting the related unique constraint (i.e., the included key column values are not considered for key uniqueness). Organizations might be able to drop indexes that are redundant in light of this feature. (A brief DDL sketch follows this list.)
- Roger recommended that people get their DB2 objects into reordered row format (RRF).
- Roger urged people to take advantage of IBM's Consolidated Service Test program as part of their DB2 10 migration plan.
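As a quick illustration of the "include" columns item a few bullets up (the table and index names here are hypothetical):

-- Before DB2 10, getting index-only access for queries that also need ORDER_DATE
-- often meant carrying a second, almost-redundant index alongside the unique one:
--   CREATE UNIQUE INDEX IX_ORD_U ON ORDERS (ORDER_ID);
--   CREATE INDEX IX_ORD_2 ON ORDERS (ORDER_ID, ORDER_DATE);

-- With DB2 10, one index can do both jobs -- ORDER_DATE rides along in the index
-- entries but is not part of the unique constraint:
CREATE UNIQUE INDEX IX_ORD_U ON ORDERS (ORDER_ID) INCLUDE (ORDER_DATE);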
Sunday, October 17, 2010
When do you Add a New DB2 Buffer Pool?
I've written a number of entries, here and in my old Catterall Consulting blog, on various topics related to DB2 buffer pools (I posted the most recent of these just a few weeks ago). Often, when I write about DB2 buffer pools (or talk about them in presentations or when working with DB2 users), I have sizing in mind -- this because 1) undersized buffer pools are the DB2 performance inhibitor that I most frequently encounter, and 2) many DB2 people haven't yet thought about leveraging 64-bit addressing and the large amounts of memory commonly available on DB2 servers. What I'm thinking about today is related to buffer pool sizing, but is more of a layout consideration -- namely, when should you think about adding a new pool to your configuration?
There is an interesting history behind this question. Way back in the 1980s, when DB2 for the mainframe was new and DB2 for Linux, UNIX, and Windows wasn't yet on the scene, I and other DB2 specialists regularly counseled users to "put everything in BP0" (speaking, obviously, of objects with 4K pages). You didn't have a lot of choices back then (besides BP0, there were BP1, BP2, and BP3 for data sets with 4K pages, and BP32K for objects with larger pages), and we thought early on that it was best to just let DB2 figure out how to best manage the caching of pages for all tablespaces of all types (and associated indexes). In the 1990s we got a DB2 release (I'm thinking that it might have been Version 4) that gave us a LOT more choices with respect to the buffer pool configuration: you could have (and it's the case today) up to FIFTY different pools for objects with 4K pages, and ten different buffer pools each for data sets with 8K, 16K, and 32K pages. Around that time, we DB2 specialists started to sing a different buffer pool configuration tune: reserve BP0 for the catalog and directory objects, and assign the other tablespaces and indexes in your database to several other buffer pools. Aside from the obvious need to assign objects with different page sizes to pools with corresponding buffer sizes, we felt that assigning different categories of objects to different pools provided advantages in terms of performance and monitoring. I'll elaborate on this as I go along.
By the late 1990s, what I'd call the basic buffer pool configuration became pretty well established. That configuration tends to look pretty much like this (I'm talking about 4K buffer pools here, but the same categories could apply to 8K, 16K, and 32K pools -- except that the catalog and directory have to go in BP0, and work file tablespaces can have 4K or 32K pages, but not 8K or 16K pages):
- Catalog and directory: BP0
- User tablespaces: BP1 (doesn't have to be BP1 -- could be any 4K pool other than BP0)
- Indexes on user tablespaces: BP2 (again, doesn't have to be BP2 -- just something other than BP0 and the pool to which user tablespaces are assigned).
- Work file tablespaces: BP7 (doesn't have to be BP7, but that's a popular choice because the name of the work file database in a non-data sharing environment is DSNDB07).
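Assigning an object to a pool is just a BUFFERPOOL specification on its DDL (or an ALTER, for an existing object). A hypothetical example of the layout above:

CREATE TABLESPACE CUSTTS IN MYDB BUFFERPOOL BP1;
CREATE INDEX CUST_IX1 ON CUST (CUST_ID) BUFFERPOOL BP2;

-- Reassigning an existing object is just as simple:
ALTER TABLESPACE MYDB.ORDERTS BUFFERPOOL BP1;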
Another fairly common BP7 parameter specification difference is a higher value for the vertical and horizontal deferred write thresholds (VDWQT and DWQT). The default values for VDWQT and DWQT are 5 and 30, respectively. For BP7, they might both be set to 50 or even higher. Why? Because for the pools that hold the catalog and directory and user tablespaces and indexes, lower deferred write thresholds trigger more-frequent externalization of changed pages to disk, and that leads to faster restart times in the event of an abnormal DB2 subsystem termination (there will be fewer pending writes to re-create during the roll-forward phase of subsystem restart). For the work file tablespaces, we don't care about recovery in case of a DB2 failure -- they just hold temporary work files used for sorts and other query-processing operations; therefore, the BP7 deferred write thresholds just need to be low enough to prevent a thrashing situation. [Note: use caution in setting these thresholds above 50 -- if they are too high, you could have a situation in which the data manager critical threshold (DMTH) is hit, and that is really bad for performance (in one situation, I saw a BP7 for which VDWQT and DWQT were both above 90, and DMTH was hit frequently for that pool). If you go above 50 for VDWQT and DWQT for your BP7, ensure that DMTH is not being hit -- you can check this out using your DB2 monitor or the output of the DB2 command -DISPLAY BUFFERPOOL (BP7) DETAIL(*).]
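In command terms (BP7 here being the work file pool, per the layout above), the threshold change and the follow-up check might look like this:

-ALTER BUFFERPOOL(BP7) VDWQT(50) DWQT(50)
-DISPLAY BUFFERPOOL(BP7) DETAIL(*)

If the -DISPLAY output (or your monitor) shows the data manager critical threshold being reached for the pool, back the thresholds down.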
Next on the list of specifications that might differ from one pool to another is the PGFIX option. PGFIX is the parameter that indicates whether or not z/OS will fix a pool's buffers in memory (I blogged on buffer pool page-fixing a few months ago). Because page-fixing tends to deliver the greatest benefit for pools that have the highest rates of disk I/O operations, I tend to see PGFIX(YES) specified for pools that hold user tablespaces and indexes (BP0 and BP7 usually have the default specification of PGFIX(NO)).
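The change itself is a one-line command per pool (a hypothetical example follows); keep in mind that a PGFIX change takes effect the next time the buffer pool is allocated, and that page-fixed buffers need to be fully backed by real storage:

-ALTER BUFFERPOOL(BP1) PGFIX(YES)
-ALTER BUFFERPOOL(BP2) PGFIX(YES)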
The basic buffer pool configuration I've just described has been extended at many DB2 sites to include additional pools beyond BP1 and BP2 for user tablespaces and indexes. Why might you allocate an additional pool or pools for user objects? A couple of scenarios come immediately to my mind:
- As part of a plan for growing the overall size of the buffer pool configuration. Suppose you have the basic configuration, with user tablespaces and associated indexes assigned to buffer pools BP1 and BP2 (or whatever). You check the total read I/O rate for these pools and find that it's on the high side -- over 1000 per second, let's say (see my blog entry on buffer pool sizing, posted a few weeks ago, for information on performing this read I/O rate analysis). You make the pools considerably larger -- maybe doubling them in size, to well over 100,000 buffers apiece -- and still the read I/O rate is higher than you'd like it to be. At this point, you could just keep growing these pools (assuming adequate real storage to back the larger pools), but you could also consider the option of enlarging the overall buffer pool configuration by creating a new pool for some of your tablespaces (and another new one for the indexes defined on tables in those tablespaces). Which objects might you reassign from BP1 to BP3 (if that's your choice for the new pool for tablespaces)? One alternative would be to reassign tablespaces of a certain category -- those holding "history-type" tables, for example, so as to separate these from "current-activity" tables from a page-caching perspective. Another possibility would be a reassignment of some tablespaces based on their I/O counts (some DB2 monitoring tools provide this information, and it's also available via the DB2 command -DISPLAY BUFFERPOOL with the LSTATS option specified). In any case, I'm talking about net new buffers in the new pool -- not a reduction in BP1 buffers to offset the buffers allocated to the new pool. How big might the new pool be? For starters, I might think in terms of BP3 being half the size of BP1, with the possibility of getting larger still based on observed I/O rates. What I've mentioned for BP3 applies also to BP4 (or whatever you decide to use for the indexes associated with the tablespaces reassigned from BP1 to BP3). When you've reassigned the tablespaces (and indexes), monitor I/O activity for the new and "legacy" pools, and grow them (or leave them alone) based on what you see.
- To create a pool or pools to be used for "pinning" some objects in memory. This is a specialized case of overall buffer pool configuration size enlargement (again, I'm talking about new pools having net new buffers -- assuming, as always, that you have enough server memory to back the new buffers). I blogged about pinning objects in pools a couple of years ago. The concept is simple: you assign an object or objects (tablespaces and/or indexes) to a pool (or pools) that has enough buffers to hold all of the pages of the objects assigned to the pool. Once the objects are in the pool, there is no more disk read I/O activity associated with the pool. Generally speaking, the best candidates for pinning are objects that are a) relatively small (maybe a few thousand pages or less) and b) accessed very frequently. If you do decide to have a pool (or pools) for object-pinning, consider specifying PGSTEAL(FIFO) for the pool, instead of going with the default of PGSTEAL(LRU). This has the effect of making the buffer-stealing algorithm for the pool first in / first out (FIFO) instead of least-recently-used (LRU). The rationale: FIFO, being simpler, is more CPU-efficient than LRU. It's less sophisticated, but with no need for buffer stealing (remember, the pool has enough buffers to hold all pages of all objects assigned to the pool), why not go with the cheapest page stealing algorithm available?
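Here's a minimal sketch of setting up such a "pinning" pool. The pool number, VPSIZE, and object names are hypothetical -- the point is that the buffer count comfortably exceeds the total number of pages of the objects assigned to the pool:

-ALTER BUFFERPOOL(BP9) VPSIZE(10000) PGSTEAL(FIFO)

ALTER TABLESPACE MYDB.CODESTS BUFFERPOOL BP9;
ALTER INDEX MYID.CODES_IX BUFFERPOOL BP9;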
As you consider adding more pools to your DB2 buffer pool configuration, keep these primary advantages in mind: 1) you can better fine-tune the configuration by setting buffer pool parameters in a way that boosts performance and efficiency for accesses to certain classes of database objects, and 2) you can get more fine-grained buffer pool monitoring data, enabling you to allocate additional buffer resources where they'll do the most good. This has worked well at a lot of DB2 sites. It could be a winning approach for yours.
Friday, October 1, 2010
Better DB2 for z/OS Query Performance through Statistics
It's been longer than I like since my last post -- a reflection of a particularly busy schedule of late. Today I'd like to focus on catalog statistics as a means of improving query performance in a DB2 for z/OS environment.
When a DB2 query doesn't perform as you want it to (response time and/or CPU time is too high), your first move is often, I'm sure, a review of the query's access path information -- something you'd get either from EXPLAIN data in the PLAN_TABLE, or through a graphical display of the query's access plan (among the IBM tools that can generate such a display are Optimization Service Center, Data Studio, and Optim Query Tuner). Analysis of this information could lead you to think of two common remedial actions: add a new index (or add a column or columns to an existing index) on a table targeted by the query; or, modify the query text in order to (for example) make a stage 2, non-indexable predicate stage 1 and indexable. These steps do often yield very positive results, but sometimes neither one is feasible.
Take the add (or alter) index option. Indexes are never free -- they take up disk space (though less with the index compression capability of DB2 9), and they make row insert and row delete operations more CPU-costly (they also increase the cost of updates when those updates change the value of indexed columns). Sometimes, the query performance benefits offered by a new index are seen as outweighing the index's costs. In other cases, creation of a new index on a table (or even adding a column to an existing index) has little chance of approval. This could be the case if there are already a lot of indexes defined on the table in question (I've seen upwards of 20 on a single table). Even if a table has only a few indexes, the idea of adding another might get a lot of push-back if performance-critical, data-changing batch jobs or online transactions that access the table can't be made any more expensive, for fear of missing SLA targets (sometimes an issue for batch workloads) or bottlenecking throughput (a potential concern for a high-volume transactional application).
If adding an index to a table is not do-able, what about modifying the query statement text to get a better-performing access path? That, too, can be a choice that's not available to you. Why? Two words: query tool. If the poorly performing query gets generated by a query tool on a user's PC, you may have to deal with the SQL statement as-is. Even if the query is in an application program that was developed in-house at your organization, changing the statement text may be more easily said than done. The developer who could make the change might be swamped with other work, and/or you might need to get a change request approved and that could take a good bit of time (depending on what else is on development's plate).
When a new index is unlikely and statement modification is not possible, are you at a dead end in terms of improving a query's performance? NO. Instead of giving up, try the catalog statistics route. A little clarification here: I'm not talking so much about making sure that statistics for the tables referenced by the query (and indexes on these tables) are accurate -- keeping those current through regular execution of the RUNSTATS utility is DB2 101. What I'm referring to is the richness of the statistics in your DB2 catalog. See, those statistics are the primary input to the DB2 optimizer when it is generating an access plan for an SQL statement. With more comprehensive information, the optimizer is more likely to choose the access path for a query that actually IS the best-performing path, versus one it THOUGHT was lowest-cost but which turned out to be high-cost.
So, what constitutes "more comprehensive" statistics? What I have in mind are value distribution statistics for columns that have skewed duplicate values. Here's what I mean by that: with just basic catalog statistics in place for a table (more on this momentarily), DB2 has two items of statistical information pertaining to the values stored in a column of the table: the column cardinality (i.e., the number of distinct data values in the column) and the number of rows in the table. For a unique column, cardinality will be equal to the number of rows in the table. For a non-unique column, cardinality will be less than the number of rows in the table, because duplicate values of the column will appear in some rows. Absent additional statistical information about the column, DB2 will assume that duplicate column values are evenly spread across the table's rows. For example, suppose that table ABC has 1 million rows, and column XYZ of that table has a cardinality of 1000. If that's all the statistical information that DB2 has for the column, it will assume that the 1000 distinct values of column XYZ are evenly spread across the 1 million rows of table ABC. In other words DB2 will assume that each of the 1000 values of column XYZ appears in 1000 of table ABC's rows. What if the spread of values in column XYZ is very much skewed? What if one value of the column appears in 900,000 of the 1 million rows in table ABC? And suppose that the next-most-frequently occurring value of column XYZ appears in 90,000 of the remaining 100,000 rows in table ABC? That disconnect between DB2's assumption (even spread of duplicate column values) and the reality of the situation (highly skewed column value distribution) can lead to poor-performing access path choices.
Fortunately, that "reality gap" can be filled via a couple of options that can be specified on a DB2 RUNSTATS utility control statement: FREQVAL and HISTOGRAM (the latter was introduced for mainframe DB2 via DB2 9 for z/OS -- I blogged about it last year when I was working as an independent DB2 consultant). With FREQVAL, you can tell the RUNSTATS utility to gather information about the top "n" most frequently occurring (or least frequently occurring, or both) values in a column (10 is a popular choice for "n" here, but you can go with another integer value if you'd like). This information (for each of the top "n" values, the value itself and the percentage of table rows containing that value in the specified column) is placed in the catalog table SYSIBM.SYSCOLDIST (or SYSCOLDISTSTATS in the case of a partitioned table). When the HISTOGRAM option is used, DB2 divides the column's values into "quantiles" (each a range of column values) and determines the number of unique column values within each quantile (the different quantiles cover about the same number of table rows). Note that while I've referred to individual columns, you can optionally have RUNSTATS generate FREQVAL or HISTOGRAM stats for a group of columns. In any case, with the value-frequency-distribution information gathered via FREQVAL, the DB2 query optimizer can much more accurately estimate the cost of candidate access paths for queries that have equality ("=") predicates or range-type predicates (>, <, >=, <=) that reference non-unique columns. HISTOGRAM statistics are particularly useful for queries that have BETWEEN predicates that reference non-unique columns. More accurate access path cost estimation is the key to choosing the path that will actually deliver the best performance for a query.
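Using the hypothetical table ABC and column XYZ from above (the database, tablespace, and the ORDER_AMT column on the HISTOGRAM line are likewise made-up names), a RUNSTATS control statement that gathers these richer statistics might look something like this:

RUNSTATS TABLESPACE MYDB.ABCTS
  TABLE(ABC)
  COLGROUP(XYZ) FREQVAL COUNT 10 MOST
  COLGROUP(ORDER_AMT) HISTOGRAM NUMQUANTILES 10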
OK, so armed with this information, what do you do? Should you have RUNSTATS generate FREQVAL and/or HISTOGRAM statistics for every non-unique column of every table in your database (and all the combinations thereof, while you're at it)? Sure, if your company gets mainframe cycles for free. A better approach is to get a good baseline of statistics in the catalog, and then to enrich them further on an as-needed basis. In my opinion, a good "base" RUNSTATS control statement for a RUNSTATS TABLESPACE utility would include these options:
TABLE(ALL) INDEX(ALL) KEYCARD
Note that this will get you, by default, "top 10" FREQVAL stats for the first key column of each index on every table in the tablespace.
So, you have your base of statistics (which you keep up-to-date, right?) in the catalog, and you have a query for which DB2 seems to have chosen the wrong access path. Now what? Well, if you're interested in productivity, take that query, plug it into the Optimization Service Center for DB2 for z/OS (free and downloadable) or Data Studio or Optim Query Tuner, and run the Statistics Advisor function. You'll get recommended RUNSTATS control statement specifications for the query, and these could well include the FREQVAL or HISTOGRAM options. Run RUNSTATS as recommended by the Statistics Advisor (hint: the recommendations labeled as "high priority" tend to give you the biggest bang for the buck), re-run the query (rebind the query-issuing program first, if it's a static SQL statement), and see if you get the better-performing access path that you think should have been chosen by DB2 in the first place. Armed with richer statistics, the optimizer can be expected to make better access path choices.
I took the approach described in the preceding paragraph a few months ago when I was helping an organization with some DB2 query tuning work: we had a query that was running long and chewing up a lot of CPU time, and our analysis of the access plan chosen for the query (we used Optimization Service Center for this purpose) showed that DB2 was significantly under-estimating the number of result set rows that would be qualified by a certain predicate (OSC and the other tools I've mentioned are great for this purpose -- they show you DB2's estimates of result set row filtering at each step of a query's access path). This, in turn, appeared to cause the optimizer to choose what turned out to be a high-cost access path for the query. We ran the Statistics Advisor for the query, and saw that the recommended RUNSTATS control statement included the FREQVAL option for some of the columns referenced in the query's predicates. We executed RUNSTATS as recommended, reran the query (it was a dynamic SQL statement), and saw that elapsed time decreased by about 50% (we did in fact get a different access path). That was great: a major query performance enhancement, without a new index and without query statement text modification. We just gave DB2 more information about the data in the target table, and the optimizer took it from there. Give this approach a try sometime, and write your own success story.