There are a LOT of people here. The conference got over 10,000 registrants this year. The arena-like events center in which this morning's grand opening session was held was pretty well packed. Fajitas for 10,000 (excellent guacamole) was an impressive operation.
A lot of information was packed into a fast-paced, 90-minute opening session. The emcee kicked things off with a nugget from a pre-conference survey: respondents indicated that the number one business challenge facing their organization is the need to cut costs and increase profits. I see the "cut costs" imperative as a lingering effect of the Great Recession out of which we're (thankfully) climbing. The "increase profits" goal is, I hope, indicative of a return to top-line growth as a focus of senior management's attention -- companies are shifting, I think, from "survive" mode to a "drive to thrive." An overarching theme of this year's IOD event is the leveraging of information and analytics to "gain insight and optimize results."
Robert LeBlanc, IBM's Senior VP of Middleware Software, led most of the remainder of the opening session. Robert noted that a recent IBM survey of CEOs around the world uncovered three primary focus areas: 1) creative leadership, 2) reinvention of customer relationships, and 3) building operational dexterity. Taking advantage of data assets can lead to success in these areas, but first an organization has to get its data house in order, and that can be a tall task in light of the ongoing -- and accelerating -- information explosion: LeBlanc mentioned that in 2009, there were about 800,000 petabytes of information in computer systems around the world (80% of it unstructured). It's expected that by 2020 that number will be around 35 zettabytes (a zettabyte is 1 sextillion bytes, or a billion terabytes). That's a 44X increase in 11 years. Is the growing pile of data in organizations' systems effectively managed today? In many cases, the answer to that question is "no": 72% of senior executives indicated through a survey that they can get information more easily from the Internet than from their own applications.
Analytics, said LeBlanc, is all about using information. Information management, the foundation for successful analytics, has much to do with data governance (about which I wrote in a recent article published in IBM Data Management Magazine) and the establishment of a "single version of the truth" with regard to questions that might be answered based on the organization's data assets. Put the two together, and you can get some impressive business results:
- Avis Europe realized a 50% decrease in marketing costs through better targeting.
- The State of New York has avoided $1.2 billion in questionable tax refunds since 2004 through the use of analytics.
- Newell Rubbermaid got business intelligence queries to run 30 to 50 times faster through reengineering of their decision support system.
Next up was Arvind Krishna, IBM's General Manager of Information Management Software. Arvind spoke of how organizations are using modern information management technology to:
- Reduce customer "churn" through enhanced customer understanding.
- Slash loan origination times through collaborative decisioning.
- Achieve tremendous ROI through improved supply chain visibility.
The opening session concluded with a panel discussion involving several IBM software executives, Arvind among them. Topics covered by the panel included:
- The challenge posed by the information explosion on the one hand, and the need for a single version of the truth on the other (successfully dealing with this challenge often involves analysis of an organization's "information supply chain").
- Workload-optimized systems (integrated hardware/software offerings that are optimized for a specific workload), which are delivering both improved performance and greater ease-of-use for a variety of companies.
- The need to get started on building an analytics capability ("it's not a spectator sport"), with the right choice for a pilot being something that matters to the business.
- The usefulness of an information agenda, the "external artifact" that can bring the IT and business parts of an organization together ("every organization has an IT strategy that is really an application agenda -- what's the information agenda?").
DB2 for z/OS trends and directions. Rick Bowers, IBM Director of DB2 for z/OS Development, led this session. He reminded attendees that DB2 10 for z/OS was announced on October 19, with general availability following almost immediately (IBM started taking orders for DB2 10 on October 22). Interest in this new release of DB2 is very high in the user community, driven largely by the 5-10% CPU cost reduction delivered by DB2 10 "right out of the box" (requiring only a rebind of existing DB2 packages). Improved scalability is another key DB2 10 benefit, with the expectation being that organizations will generally see a five- to ten-times increase in the number of users that can be concurrently connected to, and active on, a DB2 10 subsystem (it was mentioned that the number of concurrent DB2-accessing SAP application users would likely be five- to six-times greater on a DB2 10 subsystem versus previous product releases).
Rick talked about the "skip-release" migration path that would enable an organization running DB2 for z/OS V8 to go directly to DB2 10, without having to migrate to DB2 9 in-between. He said that skip-release migration had been successfully tested during the DB2 10 Beta program, and he suggested that the decision by a DB2 V8-using organization as to whether or not to go this route would depend on the organization's risk profile. "Early adopter" companies might opt to go straight from DB2 V8 to DB2 10, while late adopters would more likely migrate first to DB2 9 and then later to DB2 10.
Rick pointed out that 22 of the 25 DB2 10 Beta customers are planning to migrate to DB2 10 in production (or at least to get the production migration process underway) during the 2011 calendar year.
Paul Gatan, Director of Systems Support at DST Systems, an early evaluator of DB2 10, joined Rick to talk about his company's DB2 10 experiences. DST has a large and very active (particularly from a data-change perspective) DB2 for z/OS database. One of the tables in that database has 61 billion rows and holds 9.5 terabytes of data. Paul highlighted the following in speaking of DST Systems' work with DB2 10:
- They really liked the "full" 64-bit addressing capability of DB2 10 (64-bit addressing, introduced for DB2 for z/OS with Version 8 of the product, was further exploited with DB2 9 and even more fully utilized with DB2 10).
- Use of the new z/OS 1 MB page size (DB2 page sizes are unchanged from previous releases -- DB2 10 maps multiple of its own pages to a single one-megabyte z/OS page) gave DST a further 4% reduction in CPU utilization for an insert-heavy application process, on top of the 5-10% "out of the box" CPU savings.
- CPU consumption by one program dropped 60% after a rebind in the DB2 10 environment -- that was a surprise, and an "it depends" result that can't necessarily be predicted.
- The restructured DB2 catalog (no links, universal tablespaces, row-level locking) greatly improved bind concurrency for DST.
- DST liked the new security options, including System DBADM, which provides most of the capabilities of the SYSADM authority and can be granted without access to data.
- The new IFCID 401 trace record provides useful statement-level performance data for static SQL statements.
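The System DBADM capability DST mentioned is granted at the system level. A minimal sketch of such a grant follows (the authorization ID DBATEAM is a placeholder, and the exact clause ordering should be checked against the DB2 10 GRANT syntax):

```sql
-- DB2 10 system-level DBADM: broad database administration
-- capability, granted here WITHOUT the ability to read user data.
-- 'DBATEAM' is an invented authorization ID for illustration.
GRANT DBADM WITHOUT DATAACCESS
  ON SYSTEM
  TO DBATEAM;
```

Separating administrative capability from data access in this way supports the kind of duties-separation requirements that auditors increasingly impose.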
DB2 10 for z/OS technical overview. Jeff Josten, an IBM Distinguished Engineer and a long-time member of the DB2 for z/OS development organization, delivered this presentation. Some highlights:
- The use of z/OS 1 MB pages requires a z10 or a zEnterprise server.
- The DB2 10 Beta program was the largest ever for a new DB2 release.
- DB2 10 introduces High Performance DBATs (database access threads). Conceptually similar to CICS-DB2 protected entry threads, High Performance DBATs should be particularly beneficial for high-volume OLTP applications executed through DDF. They can be used by packages bound with RELEASE(DEALLOCATE) for improved CPU efficiency (formerly, only the RELEASE(COMMIT) bind option was honored for packages executed via DBATs). High Performance DBATs are released (for "clean-up" purposes) after every 200 uses.
- Some INSERT jobs could see a 30-40% reduction in CPU consumption in a DB2 10 environment.
- The movement of more thread-related storage above the 2 GB "bar" in the DB2 10 database services address space enables greater use of the RELEASE(DEALLOCATE) option of BIND (with DB2 10, only the working storage associated with a thread, and some of the thread/stack storage, goes below the 2 GB bar). This, in turn, can reduce the DB2 CPU time for an application process by 10-20%.
- DB2 10 provides a new option that allows data readers to access the previously committed version of a row, rather than waiting for an inserting or deleting process to commit.
- LRSN "spin avoidance," introduced with DB2 9 to reduce bottlenecking related to generation of log record sequence numbers in a data sharing environment, is further exploited in a DB2 10 data sharing group.
- The new hash access capability enables one-GETPAGE direct access to a data row based on the hash of a key value (versus the multiple GETPAGEs needed to navigate through an index tree). Hash access might not be a good choice when program performance depends on data clustering, as it randomizes data row placement in a table.
- DB2 10 uses 64-bit common storage to avoid constraints related to the use of below-the-bar extended common storage.
- Data access for large (greater than 3-4 TB) buffer pools is more efficient in a DB2 10 environment.
- More database schema changes can be effected online through ALTER (these become "pending alters" which are put into effect by an online REORG). Included among such alters are changes in page size, DSSIZE, and SEGSIZE, and a change of tablespace type to universal tablespace.
- DDF location alias names can also be changed online.
- Active log data sets can be dynamically added to a configuration via the DB2 command -SET LOG NEWLOG.
- The catalog and directory objects can be reorganized with SHRLEVEL(CHANGE).
- The new FORCE option of REORG can be used to cancel threads that are preventing the data set switch phase of online REORG from completing.
- The BACKOUT YES option of RECOVER can speed point-in-time object recovery when the recovery point is far after the most recent preceding image copy: backing changes out from the current state can beat restoring that copy and applying a large amount of log.
- DBADM authority can be granted for all databases in a DB2 subsystem, removing the requirement that the authority be granted on a database-by-database basis.
- New stored procedures are supplied to enable automated statistics collection by the DB2 RUNSTATS utility.
- Instead of having to specify one subsystem-wide value for the maximum number of active distributed threads (MAXDBAT), this limit can be specified in a more granular way (e.g., on an application basis).
- Timestamp accuracy can go to the picosecond level in a DB2 10 environment.
- XML schema validation is done "in the engine" in a DB2 10 system.
- New moving sum and moving average functions can enhance OLAP capabilities.
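A few of the features above are directly visible in SQL. The sketch below illustrates hash-organized table access, picosecond-precision timestamps, and a moving average; the table, columns, and HASH SPACE size are all invented for the example:

```sql
-- Hash-organized table (DB2 10 ORGANIZE BY HASH): one-GETPAGE
-- direct access by ACCT_ID, at the cost of randomized row
-- placement (so not a fit where clustering matters).
CREATE TABLE ACCT_BAL
  (ACCT_ID  BIGINT        NOT NULL,
   BAL      DECIMAL(15,2) NOT NULL,
   UPDT_TS  TIMESTAMP(12) NOT NULL)  -- TIMESTAMP(12): picoseconds
  ORGANIZE BY HASH UNIQUE (ACCT_ID)
  HASH SPACE 64 M;

-- Moving average over each account's three most recent rows,
-- using the DB2 10 moving-aggregate OLAP specification:
SELECT ACCT_ID, UPDT_TS,
       AVG(BAL) OVER (PARTITION BY ACCT_ID
                      ORDER BY UPDT_TS
                      ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)
         AS MOVING_AVG_BAL
FROM ACCT_BAL;
```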
The notes below are from a later DB2 10 session, in which speakers including Roger, Dil (of Morgan Stanley), and Terrie shared early experiences and migration guidance:
- Roger started out by calling DB2 10 "the hottest release we've ever had."
- Dil noted a major performance improvement (30%) seen when a DB2 stored procedure was converted from a C program to a native SQL procedure (the performance of SQL procedure language, aka SQLPL, is improved with DB2 10 -- realizing this improvement for an existing native SQL procedure will require a rebind of the associated DB2 package).
- Dil also reported that a major improvement in DB2 elapsed time (50%) was seen for a certain application that Morgan Stanley tested with DB2 10. Lock/latch suspend, page latch suspend, and log I/O suspend times for the application all decreased markedly in the DB2 10 environment.
- Morgan Stanley also saw a 15% improvement in elapsed time for concurrently executed LOAD REPLACE jobs in the DB2 10 system.
- Terrie mentioned that some high-volume insert programs could consume up to 40% less CPU in a DB2 10 environment versus DB2 9. Certain complex queries could consume 60% less CPU time with DB2 10.
- Roger recommended that people check out chapter 4 of the "Packages Revisited" IBM redbook for information on converting DBRMs bound directly into plans to packages (DB2 10 does not support DBRMs bound directly into plans).
- The DB2 10 catalog and directory objects must be SMS-controlled, with the extended addressability (EA) attribute (the tablespaces will be universal with a DSSIZE of 64 GB).
- Single-row result sets can be more efficiently retrieved with DB2 10, thanks to OPEN/FETCH/CLOSE chaining.
- With the improved performance of SQLPL, native SQL procedures could be even more CPU-efficient than COBOL stored procedures in a DB2 10 environment, and that's before factoring in the use of less-expensive zIIP engines by native SQL procedures called through DDF.
- DB2 10 provides support for "include" columns in an index key. These are columns that can be added to a unique key's columns (to get index-only access for more queries) without affecting the related unique constraint (i.e., the included key column values are not considered for key uniqueness). Organizations might be able to drop indexes that are redundant in light of this feature.
- Roger recommended that people get their DB2 objects into reordered row format (RRF).
- Roger urged people to take advantage of IBM's Consolidated Service Test program as part of their DB2 10 migration plan.
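The "include" columns feature mentioned above has a simple DDL form. A sketch, with invented table and column names: uniqueness is enforced on ORDER_ID alone, while the included columns ride along in the index to enable index-only access for more queries:

```sql
-- DB2 10 index with INCLUDE columns: ORDER_DATE and STATUS are
-- carried in the index entries but do NOT participate in the
-- unique constraint. An existing index on
-- (ORDER_ID, ORDER_DATE, STATUS) might then be droppable.
CREATE UNIQUE INDEX IX_ORDER
  ON ORDERS (ORDER_ID)
  INCLUDE (ORDER_DATE, STATUS);
```

Fewer indexes on a table means less overhead for inserts, deletes, and index maintenance, which is the payoff of consolidating redundant indexes this way.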
Nice summaries, Robert... it is nice to be able to see the highlights of sessions I missed.
My pleasure, Craig - thanks for the good words.
As Craig said, it's nice to see the highlights of the sessions for those of us abroad!
A lot of the same great information will be delivered at the IDUG EMEA conference in Vienna next month (I'll be giving the mainframe DB2 data warehousing presentation that I delivered here this morning, plus a presentation on DB2 for z/OS stored procedures).