Twice in the past month, I've encountered a misunderstanding pertaining to dynamic SQL statements issued by applications that access DB2 for z/OS via network connections (these could also be called DDF-using applications, or DRDA requesters). Now seems as good a time as any to clear things up. I'll use this blog entry for that purpose.
The misunderstanding of which I speak: some people are under the impression that dynamic SQL statements issued by DDF-connected applications are zIIP-eligible when executed, while static SQL statements issued by DDF-connected applications are not zIIP-eligible.
This is not true. zIIP eligibility is not a dynamic or static SQL thing. It is a task thing (as described in a blog entry I wrote about 15 months ago). Here, "task" refers to the type of task in a z/OS system under which an SQL statement executes. zIIP-eligible work executes under a type of task called an enclave SRB (also referred to as a preemptible SRB); thus, when a SQL statement -- dynamic or static -- runs under an enclave SRB, it is zIIP-eligible. If it runs under a TCB, it is not zIIP-eligible. When does a SQL statement run under an enclave SRB in a z/OS system? Three scenarios come to mind:
When the SQL statement is issued by a DDF-connected application (i.e., by a DRDA requester). In that case, the SQL statement will run under an enclave SRB (again, that's a preemptible SRB) in the DB2 DDF address space. The statement's execution will be off-loadable -- as much as 60% so -- to a zIIP engine (if significantly less than 60% zIIP offload is seen for SQL statements issued by DDF-connected applications, the cause could well be zIIP engine contention in the LPAR). Note that static versus dynamic is a non-issue here -- either way, the statement runs under an enclave SRB and so is zIIP-eligible.
When the SQL statement is issued by a native SQL procedure that is called by a DDF-connected application. I emphasize "native SQL procedure" because a SQL statement (again, static or dynamic) issued by an external stored procedure (such as an external SQL procedure, or a stored procedure written in COBOL or C or Java) will not be zIIP-eligible, regardless of whether it is called by a DDF-connected application or by a local-to-DB2 program (such as a CICS transaction or a batch job). A SQL statement issued by an external DB2 stored procedure is not zIIP-eligible because such a stored procedure always runs under a TCB in a stored procedure address space, no matter what type of application -- local to DB2, or remote -- issues the CALL to invoke it. A native SQL procedure, by contrast, always runs under the task of the calling application process. If that process is a DDF-connected application, its z/OS task will be, as pointed out above, an enclave SRB in the DDF address space. That being the case, a native SQL procedure called by a DDF-connected application runs under the DDF enclave SRB representing that application in the z/OS LPAR, and the SQL statements issued by the native SQL procedure (and all statements of that type of stored procedure are SQL statements -- it's a stored procedure written in SQL) execute under that enclave SRB and so are, as previously noted, up to 60% zIIP-eligible.
You might think, "Hey, isn't a Java stored procedure zIIP-eligible?" Yes, the Java part of the stored procedure program is zIIP-eligible, but SQL is not Java, and the SQL statements issued by a Java stored procedure run under a TCB in a stored procedure address space and so are not zIIP-eligible (actually, they might be slightly zIIP-eligible, because the Java stored procedure might "hold on" to a zIIP processor for just a bit after a SQL statement issued by the procedure starts executing).
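Returning to native SQL procedures, here is a minimal sketch (the schema, procedure, and table names are hypothetical). Note the absence of an EXTERNAL clause: the body is written entirely in SQL, so the procedure runs under the task of its caller -- an enclave SRB when the caller is a DDF-connected application:

```sql
-- Hypothetical native SQL procedure: no EXTERNAL clause, and the body
-- is pure SQL, so it executes under the calling application's task
CREATE PROCEDURE MYSCHEMA.GET_BALANCE
  (IN  P_ACCT_ID INTEGER,
   OUT P_BALANCE DECIMAL(11,2))
  LANGUAGE SQL
BEGIN
  SELECT BALANCE
    INTO P_BALANCE
    FROM MYSCHEMA.ACCOUNT
   WHERE ACCT_ID = P_ACCT_ID;
END
```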
When the SQL statement is parallelized by DB2. A query can be a candidate for parallelization by DB2 if: it is static and the associated package was bound with DEGREE(ANY); it is dynamic and the value of the CURRENT DEGREE special register is 'ANY'; or there is a row for the query in the SYSIBM.SYSQUERY and SYSIBM.SYSQUERYOPTS catalog tables (introduced with DB2 10 to enable statement-level control over execution behaviors such as parallelization and degree of parallelization, for static and dynamic SQL). If a query is parallelized by DB2, the "pieces" of the split query will run under enclave SRBs and so will be zIIP-eligible (up to 80%).
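To make the first two enablement paths concrete, here is a sketch (the collection, package, and table names are hypothetical):

```sql
-- Static SQL: bind the package with DEGREE(ANY)
-- (a DSN subcommand, issued from TSO or in batch)
BIND PACKAGE(MYCOLL) MEMBER(MYPROG) DEGREE(ANY)

-- Dynamic SQL: set the special register before issuing the query
SET CURRENT DEGREE = 'ANY';
SELECT ACCT_ID, SUM(AMOUNT)
  FROM MYSCHEMA.ACCOUNT_ACTIVITY
 GROUP BY ACCT_ID;
```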
And one more thing... Since DB2 10 for z/OS, prefetch read operations have been 100% zIIP-eligible; thus, even if a query is running under a TCB and is therefore not zIIP-eligible, prefetch reads executed by DB2 on behalf of the query are zIIP-eligible. Prefetch read CPU time, as always, shows up in the DB2 database services address space (DBM1), not in the address space associated with the DB2-accessing application process (e.g., a CICS region, or the DB2 DDF address space, or a batch initiator address space).
So there you have it. To repeat a point made up front: zIIP eligibility of SQL statement execution is a task thing, not a dynamic versus static SQL thing. A static SQL statement issued by a DDF-connected application (i.e., by a DRDA requester) will be zIIP-eligible because it will run under an enclave SRB (i.e., a preemptible SRB). A dynamic SQL statement issued by a CICS transaction program will not be zIIP-eligible, because it will execute under a TCB. Clear? I hope so.
This is the blog of Robert Catterall, an IBM Db2 for z/OS specialist. The opinions expressed herein are the author's, and should not be construed as reflecting official positions of the IBM Corporation.
Wednesday, April 29, 2015
Wednesday, April 22, 2015
A DB2 11 for z/OS Temporal Data Enhancement You Might Have Missed
When DB2 10 for z/OS introduced temporal data functionality about five years ago, one of the first use cases that jumped to mind for many people was data-change auditing: a table could be created with (or an existing table altered to add) the "system time" characteristic, and thereafter one would have, in the history table associated with the base table, a record of row changes resulting from UPDATE and DELETE statements targeting the base table.
That's nice, but suppose you want to see more than WHAT a row looked like before it was changed. Suppose you also want to see WHO changed a row, and by what means (i.e., INSERT, UPDATE, or DELETE). I was recently contacted by a programmer, working on a new application in a DB2 10 for z/OS environment, who wanted to do just that. He and his team had created a table with the system time property, and in this table they had two columns to capture the identity of a data-changer and the nature of the change: one column to record the ID of the user who added a row to the table, and another column to record the ID of any user who subsequently updated the row. The rub, as this programmer saw it, concerned delete activity. How could he capture the ID of the user who deleted a row in the table? The delete operation would cause DB2 (by way of its system time temporal capability) to move a copy of the deleted row to the base table's history table, but that "pre-delete" image of the row would contain no information about the ID of the user associated with the delete operation. The programmer thought about updating a row before deleting it, just to capture (via the UPDATE) the ID of the user that would subsequently drive the row-delete action. That didn't seem like a desirable solution to the developer, but what else could he do? On top of this problem, there was the matter of not being able to easily determine whether a DELETE or an UPDATE caused a "before" image of a row to be placed in the history table. Not a good situation.
I'll tell you, I like to give people good news, and I had good news for this guy. The good news, I told him, was that his organization was about to migrate their DB2 for z/OS subsystems to DB2 11, and new functionality in that release would address his "who did what?" requirements while also allowing his team to simplify their application code.
I'm actually talking here about capabilities added to DB2 11 after its general availability, by way of several APARs and their respective PTFs. Key among these APARs is PM99683 (the text of this APAR references the related APARs that, together with PM99683, provide the new functionality I'm about to describe). The first goody here is a new type of generated column specification, GENERATED ALWAYS AS (CURRENT SQLID). That enables code simplification: there's no need to programmatically place the ID of a data-changer in a column of a row -- DB2 11 will do it for you (and note that CURRENT SQLID is one of several special registers that can now be used with GENERATED ALWAYS -- you can read more about this in the section of the DB2 11 SQL Reference that covers CREATE TABLE).
There's more: you can also have in a table a column that is GENERATED ALWAYS AS (DATA CHANGE OPERATION). What's that? It's just a 1-character indication of the nature of a data change operation: I for INSERT, U for UPDATE, D for DELETE. Isn't that cool?
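Putting the two generated column types together, a table definition might look like this sketch (the table and column names are hypothetical; the row-begin, row-end, and transaction-start-ID columns shown are the usual prerequisites for system time versioning):

```sql
CREATE TABLE MYSCHEMA.ACCOUNT
  (ACCT_ID     INTEGER        NOT NULL,
   BALANCE     DECIMAL(11,2),
   -- ID of the user who inserted, updated, or deleted the row
   CHANGED_BY  VARCHAR(128)   GENERATED ALWAYS AS (CURRENT SQLID),
   -- 'I', 'U', or 'D', per the nature of the data change operation
   CHANGE_TYPE CHAR(1)        GENERATED ALWAYS AS (DATA CHANGE OPERATION),
   SYS_START   TIMESTAMP(12)  NOT NULL GENERATED ALWAYS AS ROW BEGIN,
   SYS_END     TIMESTAMP(12)  NOT NULL GENERATED ALWAYS AS ROW END,
   TRANS_ID    TIMESTAMP(12)  GENERATED ALWAYS AS TRANSACTION START ID,
   PERIOD SYSTEM_TIME (SYS_START, SYS_END));
```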
I'm still not done. In addition to the new GENERATED ALWAYS AS (CURRENT SQLID) and GENERATED ALWAYS AS (DATA CHANGE OPERATION) options of CREATE TABLE (and ALTER TABLE), there is a very handy clause that can now be added to the ALTER TABLE statement used to "turn on" versioning (i.e., system time) for a table: ON DELETE ADD EXTRA ROW. When system time activation for a table includes this clause, DB2 will add an extra row to the base table's history table when a row is deleted. That is to say, you'll get (as usual) the "pre-delete" image of the row (with the "row end" timestamp showing when the row was made non-current by the DELETE), and you'll ALSO get ANOTHER version of the row added to the history table -- this one with a 'D' in your GENERATED ALWAYS AS (DATA CHANGE OPERATION) column, and the ID of the deleting user in your GENERATED ALWAYS AS (CURRENT SQLID) column.
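In SQL terms, activating system time with the new clause might look like this sketch (names are hypothetical; the history table must have the same column layout as the base table):

```sql
-- Create a history table matching the base table, then turn on
-- versioning with the new extra-row behavior for DELETEs
CREATE TABLE MYSCHEMA.ACCOUNT_HIST LIKE MYSCHEMA.ACCOUNT;

ALTER TABLE MYSCHEMA.ACCOUNT
  ADD VERSIONING USE HISTORY TABLE MYSCHEMA.ACCOUNT_HIST
  ON DELETE ADD EXTRA ROW;
```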
A little more information about this "extra row" that's added to the history table for a base table DELETE when ON DELETE ADD EXTRA ROW is in effect: first, the "row begin" and "row end" timestamps in the extra row are the same, and are equal to the "row end" value in the "as usual" row placed in the history table as a result of the DELETE (by "as usual" I mean the "before-change" row image that's always been placed in a history table when a base table row is deleted). Second, "extra rows" in the history table resulting from base table DELETEs with ON DELETE ADD EXTRA ROW in effect are NOT part of a base table query result set when that query has a FOR SYSTEM_TIME period specification, no matter what that specification is. If you want to see the extra rows added to a history table by way of ON DELETE ADD EXTRA ROW functionality, you'll need to query the history table explicitly.
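Because FOR SYSTEM_TIME queries against the base table will not return the extra rows, a direct query of the history table is the way to see delete activity. A sketch, assuming hypothetical names and the generated columns described above:

```sql
-- Who deleted which rows, and when: the extra rows carry a 'D'
-- in the data change operation column
SELECT ACCT_ID, CHANGED_BY, SYS_END AS DELETED_AT
  FROM MYSCHEMA.ACCOUNT_HIST
 WHERE CHANGE_TYPE = 'D';
```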
The text of APAR PM99683 (mentioned a few paragraphs up from here) provides a set of SQL DDL and DML statements that very effectively illustrate the use and effects of the enhancements about which I've written in this blog entry. I encourage you to try these statements (or variations of them) on a DB2 11 test or development system at your site, to see for yourself what the new capabilities can do for you.
Temporal data support was a gem when it was introduced with DB2 10. That gem just got shinier.