About 15 years ago - while working as an independent consultant, prior to rejoining IBM in 2010 - I posted to my blog an entry in which I named GETPAGEs the most important factor in the CPU consumption of a Db2 for z/OS application workload. GETPAGEs, which are requests by Db2 (typically on behalf of application or user processes) to access pages of table spaces or indexes, are indeed a prime driver of SQL statement execution cost. That being the case, people who work on tuning Db2 query performance often aim to reduce the GETPAGE activity associated with executing the query. That work, in turn, tends to focus on a query's predicates (the clauses of the query that determine which rows will qualify for the query's result set). Can an index be added or modified to enable index-level (versus table-level) row filtering for a given predicate? Can a non-indexable predicate be rewritten to be indexable? Can a predicate be added to further refine the query's result set? And so on.
The thing is, the predicate focus of query tuning could lead one to believe that the same GETPAGE-reducing actions could yield similar results for any SQL statement containing predicates, regardless of whether the statement is a SELECT or, say, a DELETE. That is not the case, especially for DELETEs versus queries, and the difference basically boils down to one thing: indexes. The same indexes that reduce GETPAGE activity for a query can make GETPAGE counts stubbornly high for a DELETE statement, in spite of tuning actions related to the DELETE statement's predicates. A Db2 SQL programmer recently ran up against this reality. He asked me about it, and I think his situation could be instructive for others.
The SQL programmer (I'll refer to him as R - the first letter of his first name) was analyzing the performance of a row-deleting process that would remove a few hundred thousand rows from a table in a given execution. Thinking that reducing GETPAGE activity would lower the CPU cost of the process, and approaching the tuning of the process's DELETE statement as one would approach tuning a query (i.e., thinking about an analogous query that would have the same predicates as the DELETE statement of interest), R had a DBA create on the target table a new index with a key composed of the five columns referenced in a series of ANDed "equals" predicates in the DELETE statement (in other words, the DELETE had the series of predicates WHERE C1 = ? AND C2 = ? AND C3 = ? AND C4 = ? AND C5 = ?, and the new index had the key C1 | C2 | C3 | C4 | C5). That would make for a MATCHCOLS value of 5 for the DELETE statement, right (MATCHCOLS being a column of the access path-describing Db2 PLAN_TABLE)? And that should reduce GETPAGE activity for the DELETE by enabling a greater degree of row filtering at the index level, right? Well, not exactly, as things turned out: R was surprised to see that the new index had very little impact on the GETPAGE count for the DELETE statement. What was going on here?
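To make the scenario concrete, here is a minimal sketch of the kind of statement and index involved. The table and index names are hypothetical placeholders, not R's actual objects; the predicate columns C1 through C5 are as described above:

```
-- Hypothetical table and index names, for illustration only
DELETE FROM MYSCHEMA.ORDER_HIST
WHERE C1 = ?
  AND C2 = ?
  AND C3 = ?
  AND C4 = ?
  AND C5 = ?;

-- The new index, with all five predicate columns in its key - the
-- expectation being that EXPLAIN would show MATCHCOLS = 5 for the DELETE
CREATE INDEX MYSCHEMA.ORDHIST_IX5
  ON MYSCHEMA.ORDER_HIST (C1, C2, C3, C4, C5);
```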
The index-impact result that surprised R comes down to essentially one thing, that being the fundamentally different nature of DELETE versus SELECT statements. Yes, both can have predicates, but that doesn't mean that a DELETE with predicates can be thought of as a "query" - DELETEs and SELECTs are like apples and oranges when it comes to their execution characteristics. A SELECT returns values (specifically, rows of values), and that's it. A certain index might sharply reduce GETPAGE activity for a query by reducing the need for Db2 to examine table space pages in order to find qualifying rows (in fact, an index could virtually eliminate table space GETPAGEs for a query, if all columns referenced in the query - in the query's select-list and in its predicates - are part of the index's key). A DELETE, on the other hand, changes pages, many of which - and this is of critical importance - will be pages in index spaces (excepting the unusual, but not unheard-of, situation in which a table has no indexes).
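To illustrate the query side of that contrast, here is a hedged sketch (hypothetical names again) of the index-only-access case just described, in which every column the query references is in the index key and table space GETPAGEs can be all but eliminated:

```
-- Hypothetical example: every column referenced by the query
-- (select-list and predicate) is part of this index's key...
CREATE INDEX MYSCHEMA.ORDHIST_IX3
  ON MYSCHEMA.ORDER_HIST (C1, C2, C3);

-- ...so Db2 can satisfy the SELECT from index pages alone (EXPLAIN
-- would show INDEXONLY = 'Y'), with little or no table space GETPAGE activity
SELECT C2, C3
FROM MYSCHEMA.ORDER_HIST
WHERE C1 = ?;
```

That is the kind of GETPAGE elimination an index can deliver for a SELECT; for a DELETE, as noted above, index pages themselves have to be changed, and that is where the GETPAGE story differs.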
Given this aspect of DELETE processing, not only will adding an index to a table potentially have little impact on GETPAGE activity for a DELETE statement - it might even increase GETPAGE activity for the DELETE. Think about it: for every row removed from a table by a DELETE statement, an entry has to be removed from each and every index defined on the table (yes, it's actually a "pseudo-delete" of index entries, with the relevant entries just marked for later physical deletion, but this still involves index GETPAGEs). Not only that, but Db2 very often can't just march through an index's leaf pages, deleting index entries as corresponding rows are deleted from the underlying table - not when a given index's keys are in a very different sequence relative to the sequence in which table rows are being deleted. Maybe, because of matches on predicate columns, a DELETE statement is guided to rows qualifying for deletion by index X, but index Y, on the same table, may have a key whose ordering is very different from that of index X (i.e., the two indexes' key values correlate little, if at all, on a row-by-row basis). In that case, finding the entry in index Y to delete as part of deleting a table row could well require an index probe operation (i.e., a top-down traversal of index Y, from root page to leaf page). If that kind of thing is happening for several indexes on the table, the number of GETPAGEs for a DELETE statement could be several times larger than the number of rows deleted; and that's not because the DELETE has a "bad" access path - it's because the statement is a DELETE and not a SELECT.
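To put some rough (and purely hypothetical) numbers on that: suppose the target table has five indexes, the DELETE removes 300,000 rows, and for four of the indexes (the ones not used to locate qualifying rows) the entry to be removed for a given row has to be found via a probe of a 4-level index (root page, two non-leaf pages, leaf page). Index maintenance alone could then drive something on the order of 300,000 x 4 indexes x 4 pages = 4,800,000 index GETPAGEs - many times the number of rows deleted, and that is before counting the GETPAGEs required to find the qualifying rows in the first place.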
Bottom line: comparing GETPAGE counts between SELECT and DELETE statements is not useful or meaningful, even if the statements have identical predicates. If you have a row-delete process that is consuming more CPU than you'd like, what can you do about it? Here are a couple of possibilities:
- Reduce - don't increase - the number of indexes on the target table. As I've mentioned, it is generally true that while indexes can boost query performance, they tend to increase CPU and elapsed time for DELETE statements. That being the case, a good way to boost DELETE performance is often to find and remove indexes on the target table that are not doing any good. I call this "index trimming," and I provided some information on that topic in an entry I posted to this blog some years ago (I'd ignore the part of that blog entry that deals with hash-organized tables - that Db2 feature was deprecated with function level 504 of Db2 12 for z/OS). A query that can help identify trimming candidates is sketched after this list.
- Consider using the DISCARD option for online REORG. Especially when the row-delete criterion (or criteria) is not overly complex, and can be expressed in the form of a predicate or predicates referencing the target table, executing an online REORG of the table's table space with the DISCARD option can be an attractive way to efficiently remove a large number of rows from the table with minimal disruption of application access to the data (there will always be at least a brief period of no data access - sometimes just a few seconds - when a REORG job enters the SWITCH phase near end-of-execution). A sample control statement is sketched after this list.
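On the index-trimming front, one way to spot candidates is to check Db2's real-time statistics: the LASTUSED column of SYSIBM.SYSINDEXSPACESTATS records the date an index was last used for row access or referential integrity checking. Here is a minimal sketch, with hypothetical schema and table names; keep in mind that LASTUSED does not reflect an index's role in enforcing uniqueness, so candidates should be reviewed before being dropped:

```
-- A sketch, with hypothetical schema and table names: list the indexes
-- on the target table along with the date real-time statistics last
-- recorded them as having been used
SELECT I.NAME AS INDEX_NAME,
       S.LASTUSED
FROM SYSIBM.SYSINDEXES I
JOIN SYSIBM.SYSINDEXSPACESTATS S
  ON S.CREATOR = I.CREATOR
 AND S.NAME    = I.NAME
WHERE I.TBCREATOR = 'MYSCHEMA'    -- hypothetical
  AND I.TBNAME    = 'ORDER_HIST'  -- hypothetical
ORDER BY S.LASTUSED;
```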
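And here is a hedged sketch of what a REORG with row discarding might look like. The database, table space, and table names and the discard condition are hypothetical placeholders, and exact option requirements vary with the Db2 version, so treat this as a starting point rather than a ready-to-run job:

```
-- A sketch of a REORG utility control statement, with hypothetical object
-- names; SHRLEVEL CHANGE makes this an online REORG, and the WHEN condition
-- identifies the rows to be discarded (i.e., removed from the table)
REORG TABLESPACE MYDB.ORDHISTS
  SHRLEVEL CHANGE
  DISCARD FROM TABLE MYSCHEMA.ORDER_HIST
    WHEN (ORDER_DATE < '2020-01-01')
```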