Comments on Robert's Db2 blog: "DB2 10 (and beyond) for z/OS: Being Smart About More RELEASE(DEALLOCATE)"

Robert (2019-03-12):

It is not an either/or matter. The CPU benefit of RELEASE(DEALLOCATE) is greatest (assuming a persistent Db2 thread, such as a batch job's thread) for a package that is 1) executed frequently during the life of the thread through which it is executed and 2) has a relatively low in-Db2 CPU time. If the COMMIT-issuing routine meets those criteria, binding that routine's Db2 package with RELEASE(DEALLOCATE) would be a good move, performance-wise. The same would be true for a SQL-issuing program, other than the commit routine, associated with the batch job: if it is executed frequently and is in and out of Db2 quickly, RELEASE(DEALLOCATE) can be an appropriate BIND specification.

Note that the amount of SQL issued is not the point. If a program issues many SQL statements but is executed only once (i.e., within just one commit scope) by the batch job, RELEASE(DEALLOCATE) will not make a performance difference. Even if the program is executed frequently, if it has a relatively high in-Db2 CPU time per execution (e.g., more than a second), RELEASE(DEALLOCATE) may have little impact, because the proportional cost of releasing (from the thread) and reacquiring (for the thread) the package sections and associated table space locks will be small.

Robert
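The change being discussed is a one-option rebind. As a sketch (the collection and package names here are hypothetical), the commit routine's package might be rebound like so via a DSN subcommand:

```
REBIND PACKAGE(BATCHCOL.COMMRTN) RELEASE(DEALLOCATE)
```

The RELEASE option only takes effect in the way described above when the package is executed through a persistent thread; for a thread that is deallocated at every commit, the specification makes no practical difference.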
Yaron Tal (2019-03-09):

Hello,
I'm trying to improve performance by using RELEASE(DEALLOCATE) in batch programs. We have a central routine for COMMIT / RESTART, which issues all COMMITs itself. If I want to make the RELEASE parameter change at the package level, which package should I target: the one issuing the most SQL (the main program, for instance) or the one issuing the COMMITs?
Thanks.

Robert (2016-04-01):

Sorry about the delay in responding. I have some considerations pertaining to protected entry threads in this blog entry: http://robertsdb2blog.blogspot.com/2011/12/db2-for-zos-driving-thread-reuse.html

Check that blog entry out, and in particular pay attention to the TCBLIMIT consideration.

Of course, you don't have to change all 70+ DB2ENTRY resources (if each of the transactions you mentioned has its own DB2ENTRY) to start using protected threads to drive thread reuse higher. I have to believe that some of these 70+ transactions are executed more frequently than others. Start with those that are executed most frequently. Also, keep in mind that you have to be on DB2 11, in NFM, I believe, before you get automatic RELEASE(DEALLOCATE) "break-in," described in this blog entry: http://robertsdb2blog.blogspot.com/2014/02/db2-11-for-zos-bind-concurrency-even.html

Prior to DB2 11 NFM, you'll have to determine whether use of protected threads and RELEASE(DEALLOCATE) packages will get in the way of BIND/REBIND, ALTER, and/or online REORGs (the latter executed to materialize pending DDL changes). This won't be a problem if your transaction volume goes down enough at night or on weekends to allow the protected threads to be deallocated.
If that doesn't happen (that is, if transaction volume is such that the protected threads could be around all the time), RELEASE(DEALLOCATE) packages executed via those threads could impede some database administration tasks in a DB2 10 for z/OS system.

Robert

Vibin Varghese (2016-03-29):

Hello Robert,
Very informative as always.
I'm thinking of leveraging this at my shop. We're on DB2 10 NFM and CICS 4.2.
I've got a DB2 package that could be regarded as quite "simple": basically, it authenticates an application user with a single SQL statement against a User table. There are 70+ transactions executing this package, and it gets executed 2000+ times a minute. An ideal candidate for changing the RELEASE parm to DEALLOCATE, I would think.
Obviously, to get the full benefit of RELEASE(DEALLOCATE), all 70+ transactions will need the THREADLIMIT and PROTECTNUM parms altered for their DB2ENTRYs in all AORs.
Are there any known issues that I should consider from a CICS perspective before we make the CICS change?

Thanks in advance.

Cheers,
Vibin
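For reference, the DB2ENTRY change described in the question amounts to something like the following CICS resource-definition update (the entry and group names here are hypothetical, and suitable THREADLIMIT and PROTECTNUM values depend on transaction volume and on the TCBLIMIT consideration mentioned earlier in the thread):

```
CEDA ALTER DB2ENTRY(AUTHENT) GROUP(DB2RGRP)
     THREADLIMIT(10) PROTECTNUM(5)
```

With PROTECTNUM greater than zero, up to that many threads for the entry are "protected": they are kept around for a while after the task that used them ends, so subsequent transactions can reuse them. That thread reuse is what lets a RELEASE(DEALLOCATE) package avoid the repeated release/reacquire cost. A comparable change can also be made dynamically with CEMT SET DB2ENTRY.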
Robert (2016-02-07):

Sorry about the delay in responding.

Packages bound with RELEASE(DEALLOCATE) and executed by way of persistent threads (meaning threads that persist through commits, such as high-performance DBATs, CICS-DB2 protected entry threads, batch job threads, etc.) can block some DDL, bind/rebind, and utility operations. Here is why: a package cannot be invalidated or replaced while it is in use, and a package bound with RELEASE(DEALLOCATE) and allocated to a persistent thread is considered to be continuously in use until the thread is deallocated (there was a change here with DB2 11; I'll cover that in a moment). A bind or rebind of the package will fail because the in-use package cannot be rebound or replaced. An ALTER that would invalidate the package will fail because the in-use package cannot be invalidated. An online REORG that would materialize a pending DDL change will not complete successfully, because such a REORG has to invalidate packages dependent on the object being REORGed, and, as noted, an in-use package cannot be invalidated.

An online REORG that is NOT executed to materialize a pending DDL change should not require invalidation of packages, and thus should not be affected by RELEASE(DEALLOCATE) packages allocated to persistent threads such as high-performance DBATs. Similarly, image copy and RUNSTATS jobs should not be affected by RELEASE(DEALLOCATE) packages allocated to persistent threads, because image copy and RUNSTATS jobs should not require package invalidation.
Even an operation that will result in rebinding, replacing, or invalidating a package will not be affected by package ABC being bound with RELEASE(DEALLOCATE) and allocated to a persistent thread, if the package affected by the bind/rebind, DDL, or pending-DDL-materializing online REORG is package XYZ rather than package ABC (and if package XYZ is not itself bound with RELEASE(DEALLOCATE) and allocated to a persistent thread).

With DB2 11, if a bind/rebind, DDL, or pending-DDL-materializing online REORG operation would be blocked by a RELEASE(DEALLOCATE) package allocated to a LOCAL persistent thread (i.e., not a DBAT), DB2 will automatically and temporarily change the package's behavior to RELEASE(COMMIT) and will drain use of the package so that the use count goes to zero, and that will un-block the operation that had been held up. For high-performance DBATs, one still has to issue the -MODIFY DDF PKGREL(COMMIT) command in a DB2 11 environment if a bind/rebind, DDL, or pending-DDL-materializing online REORG would be blocked by a RELEASE(DEALLOCATE) package allocated to a high-performance DBAT, but the command should un-block things faster than would be the case in a DB2 10 system.

Robert

RJAY (Ranga) (2016-02-05):

Robert, this is good stuff. We are in the process of implementing high-performance DBATs. IBM warns that PKGREL(COMMIT) should be issued prior to running DDL and utilities, specifically REORGs. We run SHRLEVEL CHANGE REORGs, but we will still issue -MODIFY DDF PKGREL(COMMIT) before our DDL and REORG window starts. Should we be concerned about running RUNSTATS and image copies SHRLEVEL CHANGE with PKGREL(BINDOPT) and some packages bound with RELEASE(DEALLOCATE)?
Please advise. Thanks.

Ranga

Robert (2014-08-15):

That is correct. Access path selection does not depend on the RELEASE specification of a BIND or REBIND operation. Access path changes can occur when packages are rebound, and one reason for rebinding packages is to go from RELEASE(COMMIT) to RELEASE(DEALLOCATE).

Robert

Anonymous (2014-08-14):

The access path will not change as a result of using RELEASE(COMMIT) versus RELEASE(DEALLOCATE). DB2 will calculate a "bad" access path when the actual table data distribution no longer matches the statistics, or when not enough statistics are collected to reflect that data. There are many possible sources: is it a work table that should have the VOLATILE attribute? Three or more columns in an index without intermediate distribution statistics? Skewed data? A table now 5x the size reflected in the saved statistics?
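The statistics gaps listed above each map to a concrete action. As a sketch (all object names here are hypothetical), the first is addressed with an SQL ALTER and the others with a RUNSTATS utility control statement that collects distribution statistics on a multi-column group:

```
ALTER TABLE MYSCHEMA.WORK_TAB VOLATILE CARDINALITY;

RUNSTATS TABLESPACE MYDB.MYTS
  TABLE(MYSCHEMA.MYTAB)
    COLGROUP(COL1, COL2, COL3) FREQVAL COUNT 10
  INDEX(ALL)
```

The VOLATILE attribute tells the optimizer to favor index access for a table whose size fluctuates, and COLGROUP with FREQVAL captures the skewed multi-column value distributions that single-column statistics miss.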
Robert (2014-07-25):

If catalog statistics for an object or objects accessed by a transaction (e.g., a table, table space, or index) are out of date or incomplete, it's possible that a rebind of the transaction's package will result in DB2 selecting an access path that turns out to be suboptimal from a performance perspective; so, keeping catalog statistics up to date can be a means of reducing these negative surprises.

That said, you really can't guarantee that a package rebind won't result in access path changes that negatively impact a transaction's performance. Such an occurrence is, as you've pointed out, unlikely but not impossible. For that reason, and because package rebinds are important for realizing various benefits delivered by a new version of DB2, the DB2 for z/OS development organization came up with some enhancements aimed at "taking the fear out of rebinding," as a colleague of mine put it. You should take advantage of these enhancements.

Chief among these rebind-anxiety-reducing features is what's called plan management. You state that "often after time and effort we are able to revert the package back to an older access path which performs better." Getting back to a better-performing access path should require very little time and effort on your part. By default, on a DB2 10 or 11 subsystem, plan management EXTENDED is in effect (check the value of the PLANMGMT parameter in ZPARM). In that case, when you rebind a package, DB2 will retain the previous instance of the package.
If the rebind action causes an access path change that results in degraded performance, just issue a REBIND command for the package with the SWITCH(PREVIOUS) option, and you're right back to the prior, well-performing version of the package.

Beyond plan management, there are new rebind options such as APCOMPARE (DB2 will let you know if a rebind causes an access path change) and APREUSE (DB2 will try to reuse existing access paths when rebinding a package) that you can use to mitigate the risk of a performance problem caused by an access path change.

So, rebind packages when you go to a new version of DB2, but have plan management EXTENDED (or at least BASIC) in effect so that you can do a quick REBIND with SWITCH for a package if a negative performance surprise pops up, and consider the use of options such as APCOMPARE or APREUSE to better manage access path generation.

Robert
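The safety net described above can be sketched as a pair of DSN subcommands (the collection and package names are hypothetical):

```
REBIND PACKAGE(MYCOLL.MYPKG) APCOMPARE(WARN) APREUSE(ERROR)

REBIND PACKAGE(MYCOLL.MYPKG) SWITCH(PREVIOUS)
```

The first REBIND attempts to reuse the existing access paths, failing if they cannot be reused and warning on any comparison mismatch. The second, issued only if performance degrades after a rebind, falls back to the previous package instance retained under PLANMGMT(EXTENDED) or PLANMGMT(BASIC).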
Anonymous (2014-07-24):

Very helpful, but I have one question. What factors influence why, among the hundreds of transactions that see a small CPU performance improvement, we always have a dozen or so transactions whose CPU cost increases 10x or 100x? Our management sees these and becomes increasingly concerned about upgrades, even when most of the transactions get better. Often, after time and effort, we are able to revert the package back to an older access path which performs better. It would be helpful to understand what factors we might look for before the upgrade to avoid some of these. Thanks.
Robert (2014-02-10):

Thank you for the kind words, Duc.

Robert
Anonymous (2014-02-10):

Very useful, short and clear details, as usual.
Thank you Robert, you are my favorite DB2 performance author.
Duc.