First, the backdrop: suppose you have a need to remove a large number of rows from a table, using SQL (I say, "using SQL," because a utility-based alternative, REORG with DISCARD, is not always feasible or desirable). How might you do that? Well, if the requirement is to remove all rows from table T1 that have a value greater than 100 in column C1, you could simply execute this statement:
DELETE FROM T1 WHERE C1 > 100;
Done, right? Not so fast. How many rows are qualified by the C1 > 100 predicate? Suppose T1 has 500 million rows, and 50 million of those rows have a value greater than 100 in column C1. In that case, what happens when the DELETE statement shown above is executed? For one thing, the application process associated with the DELETE will potentially acquire a LOT of page or row locks (depending on whether LOCKSIZE PAGE or LOCKSIZE ROW is in effect for the table space in which T1 resides). That, in turn, could lead to the deleting program getting an error SQL code if the limit on the number of locks that can be acquired by one process (NUMLKUS in ZPARM) is reached. If that limit is not reached because lock escalation occurs (as it might, depending on the NUMLKTS value in ZPARM and the LOCKMAX setting in effect for T1's table space), other programs will not be able to access T1 until the DELETE statement completes and a COMMIT is issued by the deleting program. If the value of NUMLKUS in ZPARM is high enough so as to not be hit when the DELETE executes (or if NUMLKUS is set to 0, meaning there is no limit on the number of locks that can be held at one time by one process), and if lock escalation does not occur (because NUMLKTS is set to 0 or to a very high number, or because the table space's LOCKMAX value is 0 or a very high number), IRLM could run short on storage because of all the lock control blocks (and, in a DB2 data sharing environment, the lock list portion of the lock structure could fill up) -- not good for system stability. Oh, and one more thing: suppose that 50-million-row-impacting DELETE statement gets through 49 million rows and then encounters a problem that causes it to fail. What are you looking at then? One BIG rollback operation.
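As a point of reference, the locking granularity and the lock escalation threshold mentioned above are attributes of the table space, set through DDL. Here is a minimal sketch, assuming a hypothetical table space DB1.TS1 that holds T1 (the names and the threshold value are illustrative, not recommendations):

-- Row-level locking, with escalation to a table space-level lock once the
-- process holds more than 10000 locks on the table space
ALTER TABLESPACE DB1.TS1 LOCKSIZE ROW LOCKMAX 10000;

A LOCKMAX value of 0 turns lock escalation off for the table space, and LOCKMAX SYSTEM tells DB2 to use the NUMLKTS value in ZPARM as the escalation threshold.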
OK, so removing lots and lots of rows from a table with a single DELETE statement can be problematic. Is there a SQL alternative? Yes, but the alternative available in a pre-DB2 12 environment is not one that application developers tend to be enthusiastic about. I'm talking here about a cursor-based DELETE approach. That would involve writing a program that declares a cursor through which the to-be-deleted rows would be identified (using the preceding example, row identification would be done via a SELECT from T1 with the WHERE C1 > 100 predicate), with the DECLARE CURSOR statement including the WITH HOLD option to maintain cursor position through commits (the FOR UPDATE clause might also be required for the DECLARE CURSOR statement). The program would then OPEN the cursor, FETCH a row, and perform a DELETE WHERE CURRENT OF operation. Then FETCH another row, do another DELETE WHERE CURRENT OF, and so on, with commits performed periodically so as to release acquired locks, until all to-be-removed rows have been deleted from the target table (a variant of this approach involves deleting blocks of rows using a cursor declared WITH ROWSET POSITIONING); a sketch of the statements involved appears below. Doable, yes, but kind of clunky from a programming perspective. Wouldn't it be nice if there were a SQL mechanism through which large-scale DELETEs could be accomplished in a way that would be programmer-friendly while at the same time avoiding the problems associated with massive accumulation of locks (or with lock escalation)? Well, thanks to DB2 12 for z/OS (at function level V12R1M500 or above), that mechanism is here. It's called piece-wise DELETE.
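To make the contrast concrete, here is a minimal sketch of the SQL a program would issue for the cursor-based approach just described (the cursor name DEL_CSR and the host variable :HV_C1 are illustrative, and the looping and commit-frequency logic would live in the host-language program):

-- Declared in the program; WITH HOLD keeps cursor position across commits
DECLARE DEL_CSR CURSOR WITH HOLD FOR
  SELECT C1 FROM T1 WHERE C1 > 100
  FOR UPDATE;
OPEN DEL_CSR;
-- In a loop, until FETCH returns SQLCODE +100 (no more qualifying rows):
FETCH DEL_CSR INTO :HV_C1;
DELETE FROM T1 WHERE CURRENT OF DEL_CSR;
-- Issue a COMMIT periodically (say, every 500 deletes) to release locks
COMMIT;
CLOSE DEL_CSR;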
Using piece-wise DELETE, how easy is it to perform a large-scale row-removal operation that does not gum up the works, concurrency-wise? This easy (and here I'll assume that we want to remove rows from the target table in chunks of 500):
1. Issue DELETE FROM T1 WHERE C1 > 100 FETCH FIRST 500 ROWS ONLY;
2. Issue a COMMIT
3. Repeat steps 1 and 2 until all to-be-removed rows have been deleted (see the sketch below)
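Here is a minimal sketch of that loop, expressed as the SQL statements plus comments; the 500-row chunk size is from the example above, and using SQLCODE +100 (or a rows-deleted count below 500, available in SQLERRD(3) of the SQLCA) as the stopping condition is one reasonable way to code it:

-- Repeat until no qualifying rows remain:
DELETE FROM T1 WHERE C1 > 100 FETCH FIRST 500 ROWS ONLY;
COMMIT;
-- Stop when the DELETE returns SQLCODE +100 (no rows qualified),
-- or when fewer than 500 rows were deleted on the latest iteration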
What if you sometimes want to use your piece-wise DELETE program to remove rows in chunks of 50, or 1000, instead of 500? No prob. You can use a variable to specify the to-be-deleted quantity in the DELETE statement's FETCH FIRST clause. And, if you're using a variable to provide the FETCH FIRST value, you can change the row-removal "chunk" size as a large-scale DELETE operation is executing.
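For example, assuming a host variable named :CHUNK_SIZE (an illustrative name) that the program sets before each execution, the statement might look like this:

DELETE FROM T1 WHERE C1 > 100 FETCH FIRST :CHUNK_SIZE ROWS ONLY;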
There you have it -- an easy-to-program means of breaking up a large DELETE operation into bite-sized pieces (or bite-sized units of work, in more technical parlance). In blog posts to come I'll cover more of my favorite DB2 12 SQL enhancements. I hope you'll check those out.
Nice!!!
piece-wise updates?
Not yet. That nut is a little tougher to crack than piece-wise DELETE. With piece-wise DELETE, if you are deleting a big set of rows in chunks of, say, 500, then after you execute the piece-wise DELETE statement the first time and commit, the first 500 of the to-be-deleted rows are gone. The second iterative execution of the piece-wise DELETE removes the next 500 rows, and so on - it's very straightforward.
If you have an UPDATE with a predicate that qualifies, for example, 1 million rows, and you want to update them in chunks of 500, you could execute a piece-wise UPDATE (if that syntax were supported) the first time, and when a commit is issued and the piece-wise UPDATE is executed a second time, the 500 rows previously updated are still in the table, and presumably still qualified by the predicate that identifies the 1 million to-be-updated rows. How do you skip over those first 500 rows so that the next 500 can be updated? And then how do you skip over the first 1000 of the to-be-updated rows (because they've already been updated) when the piece-wise UPDATE is executed the third time?
I am confident that this will be solved, but it will require more engineering than deleting a big set of rows in chunks does.
Robert
I expected piece-wise UPDATE for system-period temporal tables.
Same challenge, as far as I'm concerned. System-time temporal adds a history table to the picture, but doesn't change behavior in the base table. Rows updated by the first iteration of a piece-wise UPDATE would still be in the base table, regardless of whether or not system-time temporal functionality is in effect. Yes, an updated row's "system begin time" value is changed, but that doesn't necessarily mean that the updated row would be bypassed (as desired) on a subsequent iteration of the piece-wise UPDATE.
Robert
Not only is this easier to use, it has performance benefits of roughly 50% over DELETE WHERE CURRENT OF CURSOR. Thanks, Robert!
Kool Deal - Robert & IBM
Sorry I am late to the party on this one; however, we always force our appl developers to unload ONLY candidate keys (i.e., from the clustering index), then take that outfile as input to a keyed delete routine, doing frequent intermittent COMMITs!!
In today's simple(ton) coding world, that option is no longer tenable; their answer to that one is always:
It is too HARD, RICK!!
No longer the case, in today's 21st & 1/2 zDb2 Century!
Your pal,
Rick
;-]
Somehow I missed seeing your comment until just now, Rick. Certainly, making life easier for application developers is a key focus of the Db2 for z/OS development team - app developers are a very important constituency in the Db2 user community.
Robert