Monday, April 29, 2013

Application Performance: When Reducing Concurrency can Improve Throughput

There's an interesting application performance situation that I've encountered several times over the past twenty years or so, most recently a couple of weeks ago. I was talking with a DBA over lunch at a regional DB2 users group meeting. He mentioned that he and some of his colleagues had been working to address a performance problem involving a purchased application that accessed a DB2 for z/OS-managed database. A particular data-changing application process was not performing well, and the cause was clear: lock contention -- especially deadlocks. Row-level locking was in use, so that contention-reducing card had already been played. One of the moves being contemplated was a further reduction in the already-low deadlock detection cycle time. I had a different suggestion: reduce the level of execution concurrency for the process. In other words, reduce the degree of multi-threading for the process. I said that serious consideration should be given to single-threading the application process. That recommendation ended up putting the DBA's team on the path toward success.

The advice I gave to my DBA friend may seem counterintuitive at first blush. I mean, increasing the level of multi-threading for a process is supposed to boost application performance, right? Well, yeah, if the process is designed for concurrency. Some applications are so designed, and some aren't. Design for concurrency is, of course, chiefly applicable to processes that change data, because such processes acquire X-locks on DB2 data pages or rows. Most especially, design for concurrency applies to processes that update or delete data rows, since inserts are generally not blocked by locks (I say "generally" because an insert will wait for an X-lock held by another process on the target page to be released if the table is hash-organized -- I described hash-organized tables in an entry I posted to this blog a few months ago). If a process is such that the same set of rows can be changed by overlapping units of work (given a multi-threading scenario), it is possible that deadlocks will negatively impact application throughput (of course, if concurrently executing data-changing processes act on different sets of rows -- as is very often the case -- then the level of execution concurrency can go very high with no lock contention problems whatsoever).

If an application process driving a lot of deadlocking is one that was developed in-house by your organization's programmers, it may be that a design modification could improve concurrency by changing the order in which data-changing operations occur (for example, row-changing actions might be moved to the "end" of a unit of work, or a sequence of change A / change B / change C might be altered to B-A-C or otherwise re-sequenced). If, on the other hand, the application in question is a product purchased from a vendor (as was the case for the DBA to whom I spoke at the user group meeting), code modification likely won't be an option. Even for an in-house-developed application, changing a program's design to alleviate a deadlock problem may not be feasible or fruitful. What then?
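To make the re-sequencing idea concrete, here's a minimal sketch (the table, columns, and values are made up for illustration) of two programs that can deadlock because they change the same two rows in opposite order:

UPDATE ACCT SET BAL = BAL - 100 WHERE ACCTNO = 1;  -- program 1: X-lock on row 1
UPDATE ACCT SET BAL = BAL - 50  WHERE ACCTNO = 2;  -- program 2: X-lock on row 2
UPDATE ACCT SET BAL = BAL + 100 WHERE ACCTNO = 2;  -- program 1: waits on program 2
UPDATE ACCT SET BAL = BAL + 50  WHERE ACCTNO = 1;  -- program 2: waits on program 1 -- deadlock

If program 2 were re-sequenced to change row 1 before row 2 (the same order used by program 1), the deadlock would become, at worst, a brief lock wait.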

That's when your thoughts should turn to reducing the level of multi-threading for the process -- perhaps as far as going the single-thread route. I know for a fact that reducing concurrency of execution for a process can improve performance, even if you go all the way to one-at-a-time. Here's how: if your deadlock detection time (set via the DEADLOK parameter in your IRLM PROC) is at the default of 1 second, two deadlocked transactions could wait for up to a second (that is, 1000 milliseconds) before one is selected by DB2 to be the "loser." That transaction's work then gets rolled back, and it has to be re-tried (if the associated program has retry logic). Now, suppose that the process is single-threaded. If the average transaction time is, say, 50 milliseconds, and a transaction gets there "second" (i.e., arrives when another instance of the process is in the midst of execution, requiring a wait because of the single-thread situation), it will have to wait only about 50 milliseconds -- the time needed for the in-flight transaction to complete -- before it can get going, AND there won't be any rollbacks or required retries. Much better, right?

"Wait," you might say, "'Better' depends on the transaction arrival rate," and you'd be right in focusing on that as a key factor -- that, and average transaction elapsed time. If the average elapsed time for a transaction that you want to single-thread is 50 milliseconds, then an arrival rate of up to 20 transactions per second should be OK in terms of avoiding big back-ups in a single-threaded scenario. That may not sound like a very high transaction rate to you, but I've seen that some processes that have had deadlocking problems when the level of multi-threading is high in fact have transaction arrival rates that make single-threading a very viable throughput-boosting technique. And, keep in mind that reducing the level of multi-threading for a process to two or three -- as opposed to one -- might be sufficient to alleviate a deadlock problem while providing support for a higher transaction arrival rate than could be handled with single-threading in effect.

How might one go about reducing multi-threading for a process in order to boost throughput through elimination (or near-elimination) of deadlocks? There are several alternatives, depending on the nature of the process and other database-accessing applications running in the system. If the process with deadlock issues is the only one accessing a table or set of tables when it executes, single-threading could be achieved through LOCK TABLE IN EXCLUSIVE MODE statements. If other processes require access to the target table or tables while the deadlocking transactions execute, one-at-a-time locking can be effected in a more granular fashion. Suppose, for example, that a process changes some rows in a set that have a common identifier, such as a customer number. If, at the beginning of a transaction, a column in a row with a given customer number in a "master" or "control" table is updated to equal itself (e.g., UPDATE CUST_MASTER SET COL_A = COL_A WHERE CUSTNO = :hvar), a transaction can change data in rows with that customer number without deadlocking with another instance of the process -- this is a very low-cost way to get a serializing X-lock on a row or page.
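In SQL terms, the two serialization techniques just described might look like the following sketch (the object names and host variable are hypothetical, and this is an illustration rather than a recipe):

-- Coarse-grained: single-threading via an exclusive table lock
LOCK TABLE PROD.CUSTOMER IN EXCLUSIVE MODE;

-- More granular: at the start of the unit of work, take a serializing
-- X-lock on the customer's "master" row, then change the dependent rows
UPDATE CUST_MASTER SET COL_A = COL_A WHERE CUSTNO = :hvar;
-- (row-changing operations for that customer go here)
COMMIT;  -- the commit releases the serializing lock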

If the process is executed as a CICS transaction, specifying THREADLIMIT=1 and THREADWAIT=YES in the associated DB2ENTRY resource definition will result in one-at-a-time execution (THREADWAIT=YES causes transactions to queue for the entry's single thread, rather than overflowing to pool threads).
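A sketch of such a definition, in CICS RDO terms (the DB2ENTRY name, group, and transaction ID below are made-up values, and other attributes, such as the plan name, would be specified as usual):

CEDA DEFINE DB2ENTRY(SNGLTHRD) GROUP(MYGROUP)
     TRANSID(TRN1)
     THREADLIMIT(1)
     THREADWAIT(YES)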

For a DRDA-using client-server process, the number of connections to a DB2 for z/OS system for an application can be controlled on the client side. DB2 10 for z/OS provided a server-side means of controlling the number of connections from an application server: profile tables (something about which I blogged a few weeks ago). DB2 server-side control of client connections is even more dynamic and comprehensive if you have IBM InfoSphere Optim Configuration Manager.
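As a hedged illustration of the profile table approach (assuming the profile tables have been created on your DB2 10 system; the IP address and connection limit below are made-up values, and the details are in the DB2 10 documentation):

-- Identify the application server by its IP address
INSERT INTO SYSIBM.DSN_PROFILE_TABLE
 (PROFILEID, LOCATION, PROFILE_ENABLED)
 VALUES (1, '192.0.2.10', 'Y');

-- Limit that server to (for example) two concurrent connections
INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
 (PROFILEID, KEYWORDS, ATTRIBUTE1, ATTRIBUTE2)
 VALUES (1, 'MONITOR CONNECTIONS', 'EXCEPTION', 2);

The profiles take effect when the DB2 command -START PROFILE is issued.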

Whatever your preferred means of achieving single-threading (or just a reduced level of multi-threading), make sure that this is in your application tuning tool box. Under certain circumstances, you really can get more throughput for an application process by having less of it executing in the system at one time.

Tuesday, April 23, 2013

DB2 for z/OS: Goodbye "Security" Views, Hello Row Permissions and Column Masks

It's an age-old requirement -- one that's been around as long as DB2 itself: ensure that a particular group of users can see only certain rows in a given table, and/or ensure that values in a certain column of the table are transformed before being returned to users in the group. For example, employees who work in a branch office of an organization might be permitted to see information in a CUSTOMER table for customers assigned to that branch, but prohibited from seeing information for customers assigned to other branches; furthermore, these employees are permitted to see an indication of the relative magnitude of the income of a customer assigned to their branch, but not actual income values.

For a long time, such needs were commonly addressed through the creation of so-called "security" views. The mechanism could be implemented along the following lines:
  1. A view that filters out rows that do not have the identifier of an employee's assigned branch in the BRANCH column is defined on the CUSTOMER table (a sketch of such a view appears after this list). The view might be distinguished by a high-level qualifier (e.g., BRANCH.CUSTOMER, versus PROD.CUSTOMER for the underlying table), or it might have a different unqualified name (e.g., BRANCH_CUSTOMER). Such a view could also transform and abstract information in the table's INCOME column as required.
  2. The SELECT privilege on the view is granted to an appropriate secondary authorization ID (or set of secondary IDs), and/or (in the static SQL case) the EXECUTE privilege on packages of programs that access the view is granted to that secondary authorization ID (or set of secondary IDs).
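For illustration, here's a minimal sketch of such a view -- assuming a lookup table (here called EMP_INFO, the same name used in the permission example later in this entry) that maps an employee's authorization ID to his or her branch; the INCOME transformation mentioned in step 1 could be added as a CASE expression in the select list:

CREATE VIEW BRANCH.CUSTOMER AS
 SELECT CUSTNO, CUSTNAME, BRANCH, INCOME
 FROM PROD.CUSTOMER
 WHERE BRANCH = (SELECT EMP_BRANCH FROM EMP_INFO
                 WHERE EMP_AUTHID = SESSION_USER);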

That seems simple enough, but as many DBAs and security administrators will attest, it's not an ideal solution. Downsides of the view approach to row- and column-level data security include:
  • The name of the view must be different from the name of the table on which it's defined, and different from any other existing view in the catalog of the DB2 system. This can complicate things from an application perspective. If the unqualified name of the view is what makes the name unique (e.g., a view BRANCH_CUSTOMER is defined on the table named CUSTOMER), SQL statements targeting the view will have to explicitly reference the unique unqualified view name, or they will have to reference an alias that has the same unqualified name as the underlying table but which resolves to the different unqualified name of the view (and that alias will of course have to have a high-level qualifier that's different from the underlying table's high-level qualifier, since the unqualified names of the alias and the table are identical). If the view's high-level qualifier is what makes its name unique (e.g., a view BRANCH.CUSTOMER is defined on the table PROD.CUSTOMER), packages of programs with static SQL statements referencing unqualified object names will have to be bound into different collections using different bind-time qualifiers, and the right collection will have to be selected (likely via SET CURRENT PACKAGESET) when users need access to the view versus access to the underlying table. Also for a view distinguished from an underlying table by way of a different high-level qualifier, users issuing dynamic SQL statements (perhaps generated by an application or a tool) will have to utilize the correct fully qualified object name to access the view versus the table, or a SET CURRENT SQLID = 'BRANCH' will have to be issued to ensure that the right high-level qualifier is added to unqualified object names (and that may or may not be feasible, depending on the application in question). Any way you slice it, the requirement for a unique name for the view creates a hassle factor when it comes to object access and management.
  • As the number of different row and column restrictions increases, the number of security views can become rather large. In some cases, security views can proliferate like crazy. Not only can this be a database administration headache, it can also get in the way of making database design changes that could improve application performance or functionality. I recall a situation, a few years back, in which a potentially performance-enhancing database design change was vetoed at a certain DB2 for z/OS site because it would require making a change to a view. There were views defined on other views at this site (primarily security views), going up several layers, and the view that would be changed was at the ground level, so to speak. There was so much concern about the effect that a modification of this foundational security view might have on other views that we had to go with a less desirable database change that would leave the view definition intact.
  • The "super user" loophole. Security views can do a pretty good job of restricting data access at the row and/or column level for most users, but what about "super users," such as those who have SYSADM authority on the DB2 subsystem? You can have all the security views you want, but the underlying table will continue to exist as before, and someone with SYSADM authority can view all the data in that table. Knowing who your SYSADMs are, you may be comfortable with that, but are your auditors?

Kind of reminds you of a lot of product-pitching TV commercials you've seen, right? You know, the ones in which a flustered individual exclaims...

"There must be a better way!"

Well, Mr. or Ms. DB2 person, there is a better way. It's called row permissions and column masks, and it was introduced with DB2 10 (new-function mode). It's beautiful in its simplicity. In the remainder of this blog entry I'll briefly describe the feature and how you can put it to work, and I'll provide some simple usage examples. You can read more about row permissions and column masks in the DB2 10 for z/OS Information Center on the Web.

A row permission is implemented by way of the SQL statement CREATE PERMISSION. Through this statement, you specify row-filtering predicates for a table and associate these with a primary or secondary authorization ID, or with a role (the association is accomplished via new scalar functions that include VERIFY_GROUP_FOR_USER and VERIFY_ROLE_FOR_USER). Similarly, you can use the statement CREATE MASK to specify column-value transformation actions in the form of CASE expressions, and associate these with authorization IDs or roles through the aforementioned VERIFY functions.

To show you how permissions and masks can be utilized to implement data security safeguards at the row and column level in a DB2 database, I'll return to the example CUSTOMER table that includes, among other things, a column containing the identity of a branch to which a customer is assigned, and a column containing that customer's income. This is a very small and very simple table, as indicated by the result set of a SELECT * statement targeting the table:

CUSTNO  CUSTNAME    BRANCH       INCOME
---------+---------+---------+---------
111222  JONES       A             45000
222333  SMITH       A             57000
333444  TERRY       B            110000
444555  WILSON      B            230000


If we want to make sure that employees in the respective branch offices can only see data for customers assigned to their branch, we can create a permission like the one below. It's assumed here that employees' individual authorization IDs have been connected to the RACF (or equivalent) groups STAFF and MGR (the former for professional staff, the latter for managers), and that columns EMP_BRANCH and EMP_AUTHID of table EMP_INFO contain employees' branch office assignments and system authorization IDs, respectively.

CREATE PERMISSION BRANCH_EMP ON CUSTOMER              
 FOR ROWS WHERE                                       
  VERIFY_GROUP_FOR_USER(SESSION_USER,'STAFF','MGR') = 1
  AND                                                 
  BRANCH = (SELECT EMP_BRANCH FROM EMP_INFO           
  WHERE EMP_AUTHID = SESSION_USER)                    
 ENFORCED FOR ALL ACCESS                              
 ENABLE;


This row permission is activated through an ALTER TABLE statement, as follows:

ALTER TABLE CUSTOMER       
ACTIVATE ROW ACCESS CONTROL;


With the row permission defined and activated, a user who is assigned to branch A and who has 'STAFF' (or 'MGR') as a secondary authorization ID will get the following result set if the statement SELECT * FROM CUSTOMER is executed:

CUSTNO  CUSTNAME    BRANCH       INCOME
---------+---------+---------+---------
111222  JONES       A             45000
222333  SMITH       A             57000
 


Now, suppose there is also a requirement that non-management employees in branches not see actual customer income figures. Suppose that instead they are to see a number 1, 2, 3, or 4 in the INCOME column of a SELECT FROM CUSTOMER result set, with the number indicating a relative income level for a customer. If branch managers are allowed to see actual customer income figures for customers assigned to their branch, the data transformation requirement could be addressed through a column mask defined as follows:

CREATE MASK INCOME_MASK ON CUSTOMER
 FOR COLUMN INCOME RETURN
  CASE
   WHEN (VERIFY_GROUP_FOR_USER(SESSION_USER,'MGR') = 1)
    THEN INCOME
   WHEN (VERIFY_GROUP_FOR_USER(SESSION_USER,'STAFF') = 1)
    THEN CASE
          WHEN INCOME >= 200000 THEN 4
          WHEN INCOME BETWEEN 100000 AND 199999 THEN 3
          WHEN INCOME BETWEEN 50000 AND 99999 THEN 2
          WHEN INCOME < 50000 THEN 1
         END
   ELSE NULL
  END
 ENABLE;


An ALTER TABLE statement activates column masking for the table:

ALTER TABLE CUSTOMER     
ACTIVATE COLUMN ACCESS CONTROL;


And now, when a non-management employee assigned to branch B executes the statement SELECT * FROM CUSTOMER, the following result set will be returned:

CUSTNO  CUSTNAME    BRANCH       INCOME
---------+---------+---------+---------
333444  TERRY       B                 3
444555  WILSON      B                 4


But when a manager in branch B issues the same statement, the result set will look like this:

CUSTNO  CUSTNAME    BRANCH       INCOME
---------+---------+---------+---------
333444  TERRY       B            110000
444555  WILSON      B            230000


Here's why I so prefer permissions and masks over views when it comes to implementing row- and column-level data security in a DB2 for z/OS database:
  • No more "name game" headaches. Creating and activating row permissions and column masks does NOT require users and programmers to reference anything other than the actual target table. Instead of some access being to a security view and other access being to the underlying table as necessitated by data protection requirements, access is always to the CUSTOMER table (in my example) -- the permissions and masks defined on the table take care of filtering rows and transforming data values as needed.
  • Expect a less-tangled data security set-up. The flexibility and power of the CREATE PERMISSION and CREATE MASK statements should enable you to implement the row- and column-level data access controls that your organization needs without the proliferation of related database constructs that you might have on your hands were you to go the security view route. Your permission- and mask-controlled data protection scheme is likely to be simpler in construction, easier to understand, and easier to manage than one that relies on security views.
  • Super users get no special treatment, if that's what you want. When you create and activate a row permission or a column mask for a DB2 table, it applies to ALL access to that table by ALL users. Again referring to the example I've used throughout this blog entry, with the permission I called BRANCH_EMP defined and activated for the CUSTOMER table, will the table's creator be able to retrieve rows from CUSTOMER? How about someone with SYSADM authority? No and no -- unless 'STAFF' or 'MGR' happens to be an authorization ID (either primary or secondary) of the table creator or the person with SYSADM authority. Ah, but can't a user with SYSADM authority gain access to CUSTOMER data simply by executing the statement SET CURRENT SQLID = 'MGR' (or 'STAFF')? Yes, but only if the ZPARM called SEPARATE_SECURITY (new with DB2 10) is set to NO (which happens to be the default). If the value of SEPARATE_SECURITY is YES, someone with SYSADM authority can only change his or her current SQL ID to an ID that is among his or her primary and secondary IDs. Another way to tighten things up is to associate row permissions and column masks with roles, as opposed to authorization IDs -- a role cannot be taken on via SET CURRENT SQLID; it can be acquired only by way of a trusted context (a sketch of a role-based permission follows this list).
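To illustrate that last point, here's a sketch of the earlier permission re-worked to be associated with a role instead of with RACF group IDs (the role name is made up, and the role would be acquired by way of a trusted context):

CREATE PERMISSION BRANCH_EMP_ROLE ON CUSTOMER
 FOR ROWS WHERE
  VERIFY_ROLE_FOR_USER(SESSION_USER,'BRANCH_ROLE') = 1
  AND
  BRANCH = (SELECT EMP_BRANCH FROM EMP_INFO
  WHERE EMP_AUTHID = SESSION_USER)
 ENFORCED FOR ALL ACCESS
 ENABLE;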

A lot of DB2 people are drawn to new DB2 features that enhance system performance. I get that. It's always fun to tune a DB2 system and applications. Data security may not be as exciting to you as performance tuning, but it's more important now to your organization than it's ever been before. I'm telling you, DB2 10 row permissions and column masks are a big deal -- a major advance in DB2's already-strong data protection capabilities. Get to know this technology, and leverage it at your site. Your auditors (and, maybe, even your CEO) will appreciate your efforts.