Following up on the entry I posted last week, here are some more items of information picked up in sessions I attended during the IDUG 2011 North American DB2 Tech Conference, put on by the International DB2 Users Group earlier this month in Anaheim, California (in Orange County, aka "the OC"):
IBM Distinguished Engineer John Campbell delivered a presentation on DB2 10 for z/OS migration and early user experiences that was (as usual where John's concerned) full of useful, actionable content. One of the first points made during the session had to do with hash access, a new (with DB2 10) way of organizing rows in a table that can provide, under the right circumstances, super-efficient access to data. John told attendees that the "sweet spot" for hash organization of data -- the set of data access scenarios in which hash organization would be the right choice for a table -- is smaller than he'd originally anticipated. I was pleased to hear that note of caution, as I feel that some folks have gotten a little carried away with the notion of accessing data rows via a hash key. It's something best used in a targeted way, not as a general-purpose access method. The hash route could be good for tables for which reads well outnumber inserts, and for which read access is dominated by single-row retrieval using a unique search argument (that would be the table's hash key column, or column group).
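For the curious, here's a minimal sketch of what hash organization looks like in DDL terms. The database, table, and column names are hypothetical, and the HASH SPACE figure is purely illustrative (it would be sized based on the table's data volume); note, too, that a hash-organized table has to live in a universal table space:

    CREATE TABLE ACCOUNT
      (ACCT_ID  CHAR(10)      NOT NULL,
       STATUS   CHAR(1)       NOT NULL,
       BALANCE  DECIMAL(11,2) NOT NULL)
      IN DB1.TS1
      ORGANIZE BY HASH UNIQUE (ACCT_ID)
      HASH SPACE 64 M;

    -- The "sweet spot" access pattern: single-row retrieval
    -- via an equality predicate on the hash key
    SELECT STATUS, BALANCE
      FROM ACCOUNT
      WHERE ACCT_ID = '0000012345';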
John also talked about the increased use of real storage (i.e., server memory) that should be expected in a DB2 10 environment relative to prior-release DB2 systems. He mentioned that DB2 10 could require 10-30% more real storage "to stand still," and more than that if people get aggressive with memory-intensive features such as high-performance DBATs (about which I blogged in an entry posted last month). In a lot of environments, this shouldn't be a concern, as I've seen plenty of production z/OS systems with tens of gigabytes of memory and demand paging rates in the low single digits per second; however, if you're running with DB2 V8 or DB2 9 and you're kind of tight on memory in your production z/OS LPAR (and I'd say that you are if the demand paging rate is approaching or exceeding 10 per second), consider adding memory to that LPAR prior to migrating to DB2 10 (John pointed out that the cost of mainframe memory came down substantially when IBM rolled out the z196 servers).
John mentioned that while CPU efficiency gains in the range of 5-10% should be expected for applications' in-DB2 processing in a DB2 10 environment (assuming that packages are rebound on the DB2 10 system), some "skinny packages" (packages with very short-running SQL statements and/or very frequent commits) bound with RELEASE(COMMIT) were seen to have somewhat worse CPU efficiency under DB2 10 than on prior-release systems. John pointed to a recently available fix (the one for APAR PM31614) that addresses this situation.
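The rebind itself is straightforward, by the way; a hypothetical example (the collection and package names are made up), showing the RELEASE option in question:

    REBIND PACKAGE (COLLID1.PKG1) RELEASE(COMMIT)

and the alternative, which holds on to resources such as table space locks across commits, until thread deallocation:

    REBIND PACKAGE (COLLID1.PKG1) RELEASE(DEALLOCATE)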
DB2 10's support of much larger numbers of concurrently active threads got prominent mention in John's presentation. Threads' virtual storage footprints vary (some use considerably more than others), but John indicated that he's pretty confident that DB2 10 sites should be able to support at least 2500 to 3000 concurrently active threads per subsystem.
DB2 users should expect that exploitation of new features will increasingly require the use of universal table spaces, which were introduced with DB2 9. John pointed out that universal table spaces are a prerequisite for inline LOBs (a potentially major performance boost for applications that read and/or insert mostly small LOBs), the "currently committed" locking behavior (whereby data readers don't wait for the release of X-locks held for inserts or deletes of qualifying rows), and the aforementioned hash organization of tables. Fortunately, DB2 10 provides a means of migrating existing simple, segmented, and "classic" partitioned table spaces to universal table spaces without the need for an unload/drop/re-create/re-load sequence (this is done by way of ALTER TABLESPACE followed by an online REORG).
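To illustrate that conversion path (the database and table space names here are hypothetical): a single-table simple or segmented table space can be changed to a partition-by-growth universal table space, and a "classic" partitioned table space to a range-partitioned universal table space, like so:

    -- simple or segmented (single-table) to partition-by-growth:
    ALTER TABLESPACE DB1.TS1 MAXPARTITIONS 16;

    -- classic partitioned to range-partitioned universal:
    ALTER TABLESPACE DB1.TS2 SEGSIZE 32;

Either ALTER is a pending change that takes effect when the table space is subsequently reorganized, via a utility control statement along the lines of:

    REORG TABLESPACE DB1.TS1 SHRLEVEL CHANGE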
John brought up a change in the behavior of the CHAR function in a DB2 10 environment when the input to the function is decimal data (among other things, values returned by the function are no longer padded on the left with zeros). He then informed attendees that the fix for APAR PM29124 will restore pre-DB2 10 behavior for the CHAR function operating on decimal data.
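Here's a quick illustration of the change (the result strings shown reflect my understanding of the old and new behavior, so verify on your own system):

    SELECT CHAR(DECIMAL(-6.0, 6, 2))
      FROM SYSIBM.SYSDUMMY1;

    -- DB2 9 result:  '-0006.00' (left-padded with zeros)
    -- DB2 10 result: '-6.00'    (leading zeros removed)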
Near the end of his presentation, John talked about preparing for DB2 10 in a DB2 Connect sense. He mentioned that DB2 10 requires that DB2 clients (referring to DB2 Connect or the IBM Data Server Drivers) be at the V9.1 fix pack 1 level or later. Several new DB2 10 functions require that DB2 clients be at the V9.7 fix pack 3A level or higher.
IBM's Beth Hamel delivered a session on data warehousing and business intelligence on the mainframe DB2 platform. She pointed out that two of the trends driving growth in data warehousing on System z are consolidation (mainframe systems are highly scalable) and the rise of "transactional analytics" (these high-volume, quick-running queries can run concurrently with more complex, longer-running BI tasks in a mainframe DB2 system thanks to z/OS's advanced Workload Manager).
Beth also noted that a large share of the source data that flows into data warehouses originates on mainframe systems, and she said that organizations are looking to get closer to this source data by locating their data warehouses on mainframes. Also boosting System z activity in the BI space is the fact that data warehouses are increasingly seen by businesses as being mission critical. That leads to a greater emphasis on availability -- long a bedrock strength of the mainframe platform (and the high-availability story gets even better when several DB2 for z/OS systems function as a data sharing group on a Parallel Sysplex shared-data mainframe cluster).
In addition to delivering new BI-enabling technology for the mainframe platform, IBM has made moves in the areas of product pricing and packaging that can help organizations get up and running with DB2-based data warehouses on System z in less time and at lower cost than a piece-by-piece implementation would entail. Beth pointed to the InfoSphere Warehouse on System z offering, which provides cubing services to accelerate OLAP applications, SQL-based ETL functionality, and more, all managed by way of an easy-to-use interface. Beth told attendees that the cost of InfoSphere Warehouse on System z is way below that of the offering's individual components, were these to be acquired separately.
In wrapping up her presentation, Beth talked about Version 2 of the IBM Smart Analytics Optimizer, a query accelerator that attaches to a mainframe DB2 system and can deliver eye-popping performance for formerly long-running data retrieval tasks. ISAO V2 will take advantage of Netezza technology (Netezza was acquired by IBM last year) to expand the range of queries that can be processed by the ISAO and to significantly boost the system's data capacity. Beth said that she expects the beta program for ISAO V2 to begin in the third quarter of this year.
That's it for this collection of IDUG nuggets. I'll wrap up this series of posts with a Part 3 entry in a few days.