Wednesday, January 26, 2011

A Note About IBM's DB2 .NET Data Provider

A lot of people like to use DB2 (on whatever platform -- mainframe, Linux/UNIX/Windows, IBM i) as a data server for .NET applications running on Windows servers. IBM facilitates this architecture with the DB2 .NET Data Provider, which extends DB2 support for the ADO.NET interface. The DB2 .NET Data Provider is included with several of IBM's data server client and driver offerings (more on this momentarily). A DB2 for z/OS DBA friend of mine recently asked, "How can I tell if the DB2 .NET Data Provider is 'there' on an application server?" As it turns out, there's an app for that, and I'll tell you about it in this post (many thanks to Brent Gross, a senior member of IBM's DB2 development organization, who told ME about this).

The app to which I refer is called testconn. It's a .NET application that ships with all of IBM's client packages, and it will actually drive a DB2 connection through the .NET layer. There are versions of testconn for each supported .NET Framework level: 1.1, 2.0, and 4.0. To run testconn for a database (e.g., the sample database) that is local to the app server, you'd enter the following (if you were looking to verify that the IBM DB2 Data Provider for .NET Framework 2.0 is installed on the server):

testconn20 database=sample


What you specify after testconn20 (or testconn40 or whatever) is a .NET connection string. The testconn tool will use that string to connect to the target database. If all you want to do is check to see if the driver is there, you can use any name for the database. If it's not a valid database name, testconn will report an error indicating that it cannot connect to the database, but it will first print out all of the driver information. Here is an example of a testconn execution for a remote database:

testconn20 database=robdb;server=myserver.com:50000;userid=robert;password=freddy

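For context, the keyword=value string that testconn accepts is the same .NET connection string an application would hand to the provider itself. Here's a minimal sketch (not from the original post -- the server name, credentials, and query are placeholders, and this assumes the IBM provider assembly is installed and referenced) of what that looks like in C# code:

```csharp
using System;
using IBM.Data.DB2;  // the provider assembly shipped with IBM's data server clients and drivers

class ConnectDemo
{
    static void Main()
    {
        // Same connection string syntax shown with testconn20 above (placeholder values)
        string connStr = "database=robdb;server=myserver.com:50000;userid=robert;password=freddy";

        using (DB2Connection conn = new DB2Connection(connStr))
        {
            conn.Open();
            using (DB2Command cmd = conn.CreateCommand())
            {
                // A simple catalog query, just to show the round trip
                cmd.CommandText = "SELECT COUNT(*) FROM SYSIBM.SYSTABLES";
                Console.WriteLine(cmd.ExecuteScalar());
            }
        }
    }
}
```

If the provider isn't installed, the reference to IBM.Data.DB2 will fail to resolve at build time -- another quick hint that the driver is missing from a server.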

Here is some sample output from an execution of testconn:

E:\>testconn20 database=sample

Step 1: Printing version info
        .NET Framework version: 2.0.50727.3615
        DB2 .NET provider version: 9.0.0.2
        DB2 .NET file version: 9.7.3.2
        Capability bits: ALLDEFINED
        Build: 20100823
        Factory for invariant name IBM.Data.DB2 verified
        Factory for invariant name IBM.Data.Informix verified
        IDS.NET from DbFactory is Common IDS.NET
        VSAI assembly version: 9.1.0.0
        VSAI file version: 9.7.0.489
        Elapsed: 0.5


Step 2: Validating db2dsdriver.cfg against db2dsdriver.xsd schema file
        Elapsed: 0.015625

Step 3: Connecting using "database=sample"
        Server type and version: DB2/NT 09.07.0003
        Elapsed: 1.890625

Step 4: Selecting rows from SYSIBM.SYSTABLES to validate existance of packages
   SELECT * FROM SYSIBM.SYSTABLES FETCH FIRST 5 rows only
        Elapsed: 0.21875

Step 5: Calling GetSchema for tables to validate existance of schema functions
        Elapsed: 0.40625


Test passed.


And there you have it. So, if in doubt: testconn.
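If you'd rather check for the provider from within application code instead of from a command line, the standard ADO.NET factory mechanism offers one way to do it. This is a sketch of my own (not something from testconn) that simply probes for the invariant name that testconn's Step 1 output shows as "verified":

```csharp
using System;
using System.Data.Common;

class ProviderCheck
{
    static void Main()
    {
        try
        {
            // "IBM.Data.DB2" is the invariant name registered in machine.config
            // by the IBM client/driver installation
            DbProviderFactory factory = DbProviderFactories.GetFactory("IBM.Data.DB2");
            Console.WriteLine("DB2 .NET provider is registered: " + factory.GetType().FullName);
        }
        catch (Exception e)  // GetFactory throws if the invariant name isn't registered
        {
            Console.WriteLine("DB2 .NET provider not found: " + e.Message);
        }
    }
}
```

That said, testconn remains the more thorough check, since it exercises the whole stack -- framework, provider, and (optionally) an actual database connection.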

Now, I mentioned that the DB2 .NET Data Provider is included with a number of IBM's data server clients and drivers. You can get more information about these offerings at this URL:

http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.swg.im.dbclient.install.doc/doc/c0023452.html

I'll tell you that the client package likely to be your best bet is the IBM Data Server Driver Package (also known as the ds driver) -- it's lightweight and easy to distribute.

Got DB2? Got .NET apps? They go great together. Check it out, if you haven't already.

Tuesday, January 11, 2011

DB2 for z/OS: CATMAINT and Concurrency

In the years since mainframe DB2 data sharing was introduced with DB2 for z/OS Version 4 (mid-1990s), I've done a lot of presenting and writing about the technology (e.g., a blog entry from a couple of years ago that provided an overview of the topic). From the get-go, one of the primary benefits of DB2 data sharing was the opportunity to achieve ultra-high availability. A lot of people have long understood that when you have DB2 operating in data sharing mode, planned outages for purposes such as software maintenance upgrades can be virtually eliminated:
  1. Apply fixes to your DB2 load library.
  2. Quiesce work running on member X of the data sharing group (allowing work to continue flowing to the other members).
  3. Stop and restart member X to activate the maintenance.
  4. Resume the flow of application work to member X.
  5. Quiesce work on member Y, and repeat the preceding steps until all members are running the updated DB2 code.
As this "round-robin" process progresses, the group runs fine with some members at maintenance level "n" and some at level "n+1".

Still, there was this widely accepted notion that DB2 data sharing wouldn't deliver 24X365 availability every year, because during some years (generally speaking, once every two or three years) you'd migrate the data sharing group to a new release of DB2, and you'd need a group-wide outage to do that -- right? One of the steps involved in migrating a DB2 system to a new release of the code is the running of an IBM-supplied job, after initially starting DB2 at the new release level, that executes the CATMAINT utility. CATMAINT effects some structural changes to the DB2 catalog and directory (some new tables, some new columns in existing tables, some new and/or altered indexes on tables), and you can't have DB2-accessing application work running while THAT'S going on, can you? These concerns linger in the minds of plenty of mainframe DB2 people to this day, but I'm here to tell you that they shouldn't. You CAN keep your application workload running in a DB2 data sharing group, even through an upgrade to a new release of DB2.

Here's the deal: a long time ago (and I'm talking like early 1990s), it WAS recommended that you not run application work while running CATMAINT to update the DB2 catalog and directory to a new-release structure. That was OK with most folks. After all, you had to stop the application workload anyway during a DB2 migration, because you'd of course stop your DB2 Version N subsystem and then start DB2 at the Version N+1 release level. With the flow of DB2-accessing work temporarily stopped anyway, why not leave it stopped just a little longer while CATMAINT did its thing (typically well under an hour -- and CATMAINT elapsed time went down significantly starting with the DB2 V8 to V9 migration process)?

Along came data sharing, and implementers of this technology by and large stayed with the old practice of not having application work running during CATMAINT execution (and throughout this entry, I'm referring to CATMAINT being used to effect catalog and directory changes as part of a DB2 release migration, as opposed to the other CATMAINT options introduced with DB2 9 to facilitate large-scale changes of VCAT or OWNER or SCHEMA name for objects in a DB2 database). Many people just didn't think about doing otherwise, but as the need for super-high availability became increasingly prevalent, more and more DB2 administrators started to explore the possibility of running DB2-accessing application programs during CATMAINT execution, and found that it was indeed technically possible. As time goes by, continuing the flow of application work during CATMAINT execution is becoming more common at DB2 sites, especially at sites running DB2 in data sharing mode. 24X365Xn (with "n" being greater than 1 and including years during which DB2 is migrated to a new release) really is possible with a DB2 data sharing group.

Now, is this deal completely catch-free? Not entirely. Because CATMAINT does change some objects in the DB2 catalog and directory, it can make these objects temporarily unavailable (the particular changes made by CATMAINT will vary, depending on the DB2 release to which you're migrating). The duration of that unavailability is likely to be pretty brief, particularly so since the big CATMAINT speed-up delivered with DB2 9, but if an application program happens to require access to one of these objects while it's being changed by CATMAINT, the program could fail with a timeout or "resource unavailable" error code. Similarly, the CATMAINT utility itself could fail due to contention with application work. If that happens, it's NOT a disaster: you'd terminate the job with the TERM UTILITY command and then re-execute CATMAINT from the beginning (you'd actually resubmit migration job DSNTIJTC, which executes CATMAINT).

So, some disruption is possible. To minimize conflict, consider the following:
  1. Run CATMAINT during a period of relatively low application workload volume. At some sites, batch activity is halted during the time of CATMAINT execution, so that only online programs are accessing DB2 (this can be quite do-able, as the batch suspension may have to be in effect for only a few minutes).
  2. Avoid executing DDL statements (e.g., CREATE TABLE) while CATMAINT is running.
  3. Avoid package bind and rebind actions while CATMAINT is running.
These same guidelines apply with respect to running application work during the execution of the CATENFM utility, which makes catalog and directory changes necessary for the migration of a DB2 for z/OS subsystem from Conversion Mode to New Function Mode within a release level.

The bottom line: if you have a DB2 data sharing group and you're thinking that you'll have to stop application access to DB2 as part of the migration to a new release of the DBMS, think again. You CAN keep an application workload going while CATMAINT is running on one of the members of the data sharing group (and that includes running application work on the member on which CATMAINT is executing), and you can do the same with regard to CATENFM execution. If you want to take a brief application outage (referring to DB2-accessing programs) while CATMAINT is running, you can of course do that. Just know that you don't HAVE to. Figure out what's appropriate for your organization, and proceed accordingly.