IIPHONORPRIORITY is a parameter in the IEAOPTxx member of the z/OS data set called SYS1.PARMLIB. The value of the parameter can be set to NO or YES. Essentially, those two settings have the following meanings:
- YES - Honor the priority of zIIP-eligible tasks (per the z/OS LPAR's workload manager policy): if such a task is ready for dispatch and the LPAR's zIIP engines are busy, the task can be dispatched to a general-purpose engine so that its wait-for-dispatch time will not become excessive.
- NO - Regardless of the priority of zIIP-eligible tasks, limit the processing of such tasks to zIIP engines. If a zIIP-eligible task is ready for dispatch and the LPAR's zIIP engines are busy, require that the task wait until a zIIP engine becomes available.
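For reference, here is a minimal sketch of how the parameter might appear in an IEAOPTxx parmlib member (the rest of the member's content is omitted, and whatever other parameters you code there depend on your installation):

```
IIPHONORPRIORITY=YES
```

A changed IEAOPTxx specification can typically be put into effect with the SET OPT=xx operator command, where xx is the member's suffix.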
Now, at first consideration a setting of IIPHONORPRIORITY=NO might appear to be an attractive choice. After all, the raison d'être for zIIP engines is reduced cost of computing on a z/OS platform. Why not maximize that cost benefit by forcing all zIIP-eligible work to be processed by zIIP engines? In fact, however, when Db2 for z/OS is in the picture, specifying IIPHONORPRIORITY=NO can lead to significant performance degradation for an application workload (especially a DDF workload) and can even reduce offloading of work to zIIP engines. I have seen these effects with my own eyes, and I will describe them in more detail below.
Negative impact on application performance. Just about a month ago, as part of a review of an organization's production Db2 for z/OS environment, I looked through a Db2 monitor-generated accounting long report for the subsystem of interest. Data in the report, showing activity for a busy hour of the day, was ordered by connection type (meaning that the data was aggregated by connection type used by Db2-accessing programs: DDF, CICS, call attachment facility, etc.). In the DDF section of the report (connection type: DRDA), I saw that not-accounted-for time was 65% of in-Db2 elapsed time for DDF-using applications. In-Db2 not-accounted-for time is time associated with SQL statement execution that is not CPU time and not "known" wait time (the latter term referring to Db2 accounting trace class 3 wait times such as wait for synchronous read, wait for lock, wait for latch, etc.). In my experience, in-Db2 not-accounted-for time is primarily reflective of wait-for-dispatch time, and for a transactional workload (like the typical DDF workload) it should be a relatively small percentage of average in-Db2 elapsed time - under 10% is good, between 10 and 20% is marginally acceptable. 65% is an extremely elevated level of not-accounted-for time as a percentage of in-Db2 elapsed time.
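To make that arithmetic concrete, here is a minimal sketch (in Python, with made-up numbers; the variable names are illustrative labels, not actual monitor report field names) of how in-Db2 not-accounted-for time can be derived from accounting long report figures:

```python
# Hypothetical average in-Db2 (accounting class 2) times, in seconds - not real data
in_db2_elapsed = 0.200   # average in-Db2 elapsed time
in_db2_cpu     = 0.040   # average in-Db2 CPU time (general-purpose plus zIIP)
class3_wait    = 0.030   # sum of "known" class 3 wait times (sync read, lock, latch, etc.)

# Not-accounted-for time is what remains after CPU time and known wait time are subtracted
not_accounted_for = in_db2_elapsed - in_db2_cpu - class3_wait
pct = 100 * not_accounted_for / in_db2_elapsed
print(f"Not-accounted-for time: {not_accounted_for:.3f}s ({pct:.0f}% of in-Db2 elapsed time)")
# Under 10% is good, 10-20% is marginally acceptable; the system described above showed 65%
```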
I was initially perplexed by the very high level of in-Db2 not-accounted-for time seen for the DDF workload. The level of utilization of the system's general-purpose and zIIP engines, as seen in a z/OS monitor-generated CPU activity report, was not high enough to make that a prime culprit. Later, unusual numbers in a Db2 monitor-generated statistics long report (more on that to come) prompted me to ask, "Do you folks by chance have IIPHONORPRIORITY set to NO?" It turned out that that was indeed the case, and it was very likely the root cause of the very high percentage of in-Db2 not-accounted-for time observed for DDF-using applications: zIIP-eligible tasks servicing requests from Db2 client programs were spending a considerable amount of time queued up waiting for a zIIP engine to become available, with redirection to a general-purpose engine having been removed as an option via the IIPHONORPRIORITY=NO specification. A much better approach is to have IIPHONORPRIORITY set to YES and to have enough zIIP engine capacity to keep "spill-over" of zIIP-eligible work to general-purpose engines at an acceptably low level (information on monitoring that spill-over can be found in a blog entry I posted on the topic). Keep in mind, too, that running zIIPs in SMT2 mode can help to minimize the zIIP spill-over rate.
IIPHONORPRIORITY=YES thus provides a safety valve: when an LPAR's zIIP engines are busy (which should not often be the case when the LPAR is configured with an adequate amount of zIIP capacity), some zIIP-eligible work can be directed to general-purpose engines, enabling consistently good performance for a zIIP-heavy workload such as that associated with DDF-using applications.
Potential non-maximization of zIIP offload. You might wonder, "How could setting IIPHONORPRIORITY to NO possibly have a reductive effect on offloading of work to zIIP engines?" I'll tell you how: when IIPHONORPRIORITY=NO is in effect, "Db2 does not allow system tasks to become eligible for zIIP processing." I have that in quotes because the words are taken directly from a page in the Db2 for z/OS Knowledge Center on IBM's Web site (on that page, scroll down to the part under the heading, "IIPHONORPRIORITY parameter"). To understand what the quoted phrase means, consider that zIIP-eligible Db2 tasks can be broadly divided into two categories: user tasks and system tasks. zIIP-eligible user tasks (which can also be thought of as application tasks) include those under which SQL statements issued by DRDA requesters - and SQL statements issued by native SQL procedures called through DDF - execute. Those zIIP-eligible tasks must be processed by zIIP engines when IIPHONORPRIORITY is set to NO. zIIP-eligible system tasks include those under which operations such as prefetch reads and database writes execute, and those tasks cannot run on zIIP engines when IIPHONORPRIORITY is set to NO (because in that case Db2 makes those tasks non-zIIP-eligible).
So, think about it: if you have sufficient zIIP capacity to keep spill-over of zIIP-eligible work to general-purpose engines at a low level, which should be the case in a production Db2 for z/OS environment (and I consider "low level" to be less than 1%), and you have IIPHONORPRIORITY set to NO, you might actually be causing total zIIP offload of Db2-related work to be lower than it otherwise would be, because prefetch reads and database writes, which are 100% eligible when IIPHONORPRIORITY is set to YES, are 0% zIIP-eligible when IIPHONORPRIORITY=NO is in effect. That's no small thing - in some Db2 systems, there is a great deal of prefetch read activity. I mentioned previously that a telltale sign that IIPHONORPRIORITY is set to NO in a z/OS system in which Db2 runs can be found in a Db2 monitor-generated statistics long report. In such a report, scroll down to the section in which CPU consumption attributed to the various Db2 address spaces is shown. In that report section, check the CPU times for the Db2 database services address space. If IIPHONORPRIORITY is set to YES, it is highly likely that a majority - often, a very large majority - of that address space's CPU time is in the zIIP column (the column in the report labeled PREEMPT IIP SRB, or something similar to that) versus the general-purpose engine column (labeled CP CPU TIME, or something similar). This is a reflection of the fact that prefetch reads and database writes very often constitute most of the work done by the Db2 database services address space, in terms of CPU resources consumed. If, on the other hand, IIPHONORPRIORITY is set to NO, it is quite likely that a majority - and maybe a large majority - of the Db2 database services address space's CPU time will be in the general-purpose engine column versus the zIIP column.
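As an illustration of the statistics-report check just described, here is a minimal sketch, again with made-up numbers, of computing the zIIP share of the Db2 database services address space's CPU time; the variable names echo the report columns mentioned above, but the exact labels vary by Db2 monitor:

```python
# Hypothetical CPU times (seconds in the statistics interval) for the DBM1 address space
cp_cpu_time     = 120.0   # general-purpose engine CPU time ("CP CPU TIME" or similar)
preempt_iip_srb = 680.0   # zIIP engine CPU time ("PREEMPT IIP SRB" or similar)

ziip_share = 100 * preempt_iip_srb / (cp_cpu_time + preempt_iip_srb)
print(f"zIIP share of database services address space CPU time: {ziip_share:.0f}%")
# With IIPHONORPRIORITY=YES this is typically a large majority; with NO, prefetch reads
# and database writes lose their zIIP eligibility and the zIIP share drops sharply
```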
Bottom line: if you want to actually maximize offloading of Db2-related work to zIIP engines, do two things: 1) ensure that IIPHONORPRIORITY is set to YES, and 2) have enough zIIP capacity to keep the zIIP spill-over rate (the percentage of zIIP-eligible work that ends up being executed on general-purpose engines) below 1%. Refer to the blog entry to which I provided a link, above, to see how to calculate the zIIP spill-over rate using numbers from a Db2 monitor-generated accounting long report.
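In essence, the zIIP spill-over rate is the ratio of zIIP-eligible work that ended up running on general-purpose engines to total zIIP-eligible work. A minimal sketch with made-up numbers follows; the variable names are illustrative stand-ins for the accounting report fields identified in the referenced blog entry:

```python
# Hypothetical average in-Db2 CPU times (seconds) from an accounting long report
ziip_cpu_time       = 0.0300   # zIIP-eligible work that ran on zIIP engines
ziip_eligible_on_cp = 0.0002   # zIIP-eligible work that "spilled over" to general-purpose engines

spill_over_rate = 100 * ziip_eligible_on_cp / (ziip_cpu_time + ziip_eligible_on_cp)
print(f"zIIP spill-over rate: {spill_over_rate:.2f}%")   # aim for under 1%
```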
Is there any situation in which going with IIPHONORPRIORITY=NO might be reasonable? I'd say, "Maybe," if you're talking about a test or development environment in which optimal DDF application performance is not very important. If the z/OS system in question is one in which a production Db2 subsystem runs, I'd be hard pressed to come up with a justification for IIPHONORPRIORITY=NO. For a production Db2 system, you want IIPHONORPRIORITY=YES. You might want to check on this at your site.
Hello Robert,
Thank you for the interesting content, as always. By chance, it helped me solve a DBM1 high CPU issue on a development subsystem.

I'm glad to know that the blog entry was useful for you.
Robert