Comments on Robert's Db2 blog: "DB2 12 for z/OS SQL Enhancements: Piece-Wise DELETE"

Robert (2020-04-15):
Somehow I missed seeing your comment until just now, Rick. Certainly, making life easier for application developers is a key focus of the Db2 for z/OS development team - app developers are a very important constituency in the Db2 user community.

Robert

Rick Molera (2020-03-20):
Kool Deal - Robert & IBM

Sorry I am late to the party on this one; however, we always force our application developers to unload ONLY the candidate keys (i.e., from the clustering index), then take that outfile as input to a keyed delete routine, doing frequent intermittent COMMITs!

In today's simple(ton) coding world, that option is no longer tenable - their answer to that one is always: "It is too HARD, RICK!!"

No longer the case, in today's 21st & 1/2 zDb2 Century!

Your pal,
Rick

;-]

Neal N Lozins (2019-09-24):
Not only is this easier to use; it also delivers a performance benefit of roughly 50% over DELETE WHERE CURRENT OF CURSOR. Thanks, Robert!

Robert (2017-07-19):
Same challenge, as far as I'm concerned. System-time temporal adds a history table to the picture, but it doesn't change behavior in the base table. Rows updated by the first iteration of a piece-wise UPDATE would still be in the base table, regardless of whether or not system-time temporal functionality is in effect.
Yes, an updated row's "system begin time" value is changed, but that doesn't necessarily mean that the updated row would be bypassed (as desired) on a subsequent iteration of the piece-wise UPDATE.

Robert

Anonymous (2017-07-18):
I expected piece-wise UPDATE for system-period temporal tables.

Robert (2017-07-16):
Not yet. That nut is a little tougher to crack than piece-wise DELETE. With piece-wise DELETE, if you are deleting a big set of rows in chunks of, say, 500, then when you execute the piece-wise DELETE statement a second time (following a commit), the first 500 of the to-be-deleted rows are already gone. The second iterative execution removes the next 500 rows, and so on - it's very straightforward.

If you have an UPDATE with a predicate that qualifies, for example, 1 million rows, and you want to update them in chunks of 500, you could execute a piece-wise UPDATE (if that syntax were supported) a first time; but when a commit is issued and the piece-wise UPDATE is executed a second time, the 500 rows previously updated are still in the table, and presumably still qualified by the predicate that identifies the 1 million to-be-updated rows. How do you skip over those first 500 rows so that the next 500 can be updated? And then how do you skip over the first 1,000 of the to-be-updated rows (because they've already been updated) when the piece-wise UPDATE is executed the third time?

I am confident that this will be solved, but it will require more engineering than deleting a big set of rows in chunks did.

Robert

Anonymous (2017-07-13):
Piece-wise updates?

Unknown (2017-07-09):
Nice!!!
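For readers who have not seen the enhancement the thread is discussing, here is a minimal sketch of a piece-wise DELETE: Db2 12 for z/OS allows a FETCH FIRST clause on a searched DELETE, so each execution removes at most one chunk of qualifying rows. The table and predicate below are hypothetical; the loop-until-done logic lives in the application.

```sql
-- Sketch of a piece-wise DELETE (hypothetical table and predicate).
-- Each execution deletes at most 500 qualifying rows; the application
-- repeats DELETE-then-COMMIT until SQLCODE +100 (no rows qualified)
-- signals that the whole set has been removed.
DELETE FROM ORDER_HISTORY
  WHERE ORDER_DATE < '2016-01-01'
  FETCH FIRST 500 ROWS ONLY;

COMMIT;
```

Compared with the old DELETE WHERE CURRENT OF CURSOR pattern (open cursor, fetch, delete, commit every n rows), each chunk here is a single statement, which is what makes the frequent-commit discipline Rick describes so much easier to enforce.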
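Robert's "how do you skip the already-updated rows" question also has a common application-level workaround - not a Db2 piece-wise UPDATE feature, just a coding pattern: write the UPDATE so that changed rows no longer satisfy the predicate, and bound each chunk with a subselect. The table and column names below are hypothetical.

```sql
-- Hypothetical self-excluding chunked UPDATE: because SET changes the
-- very column the predicate tests, rows updated by one iteration no
-- longer qualify on the next, so each iteration picks up the next 500.
-- The application loops UPDATE-then-COMMIT until zero rows are changed.
UPDATE CUSTOMER
  SET STATUS = 'INACTIVE'
  WHERE CUST_ID IN
    (SELECT CUST_ID
       FROM CUSTOMER
       WHERE STATUS = 'DORMANT'
       FETCH FIRST 500 ROWS ONLY);

COMMIT;
```

This pattern only works when the predicate can be made self-excluding; the general case Robert describes (the predicate still qualifies already-updated rows) is exactly why a built-in piece-wise UPDATE is the harder nut to crack.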