
PC And Server Y2K Compliance: How Much Is Enough?

The activities for a mainframe Year 2000 project are (or should be) pretty well defined by now, based on the enormous attention given to the "two digit" problem in legacy mainframe applications. "Assess, remediate, and test" would be one high-level view of the project with focus on the most critical applications first. Firms are now starting to turn their attention to non-mainframe technology and are discovering a less structured situation. The following issues soon become apparent:

1. The systems are distributed, and the total population may be unknown.

2. Not only must the applications be studied; the components beneath them are also suspect.

3. These components, namely operating systems, BIOSs, and real-time clocks, are sourced from multiple vendors.

4. The desktop and server environments are diverse in terms of upgrades, replacements, and user applications.

These issues lead to an important question: How much is enough? How much of the system population must be dealt with and to what level of detail? The answer will be quite specific to each company, but the following view is suggested as a way to structure a decision. It is based on the two subjective considerations of "Compliance" and "Due Diligence." It is a risk analysis.

Compliance has been given numerous definitions, and all are versions of time and date correctness at various time periods for various elements of a system. Due Diligence is a legal defense perspective that recognizes the litigious environment predicted to surround firms' Year 2000 performance. This is certainly a risk-versus-cost dimension.

Compliance: Critical Success Factors

The following are critical considerations for developing any company's view of PC and server Y2K compliance:

1. Select PC and server Y2K compliance criteria that help manage risk to an acceptable level. Some examples of Y2K definitions are presented below to demonstrate the range of expressions available and why compliance is very subjective.

SEC Staff Bulletin No. 5 defines the Y2K issue as follows: "Many existing computer programs use only two digits to identify a year in the date field. These programs were designed and developed without considering the impact of the upcoming change in the century. If not corrected, many computer applications could fail or create erroneous results by or at the Year 2000. The Year 2000 issue affects virtually all companies and organizations."

The popular compliance checker RighTime provides the following definition for PC hardware Year 2000 compliance: "PC hardware year 2000 compliance means that the CMOS RTC hardware year--as provided via the BIOS--increments from 1999 to 2000 and that a valid 2000+ BIOS date will remain correct after rebooting." (www.RighTime.com/pub/test2000.txt)

IBM's statement relative to use of the term "Year 2000 ready" is as follows: "By 'ready,' IBM means those products will function correctly into the 21st century, provided all other computer products used with them exchange accurate date information." (www.ibm.com/IBM/year2000/facts/position.html).

SunSoft's Year 2000 Compliance Statement reads as follows: "A product certified as being y2000 compliant will not produce errors in date data related to the year change from December 31, 1999 to January 1, 2000 and date representation by the product will be accurate into the future until the year 2037. The handling of leap years will be done correctly. The compliant product will define specific, non-ambiguous representation, handling, and interpretation of centuries represented by two digits, if the product allows such representation." (www.sun.com/y2000/statement.html).

2. Develop Your Y2K Certification Criteria. There are four major points of failure for time and date issues: user applications; commands; application programming interfaces (APIs); and the interrupt structure provided by the BIOS, interval timer, and Real-Time Clock (RTC). These points of failure fall into two major categories, the Application Layer and the Platform Layer. Note that within the platform layer, APIs are implemented in the language libraries and in the operating systems. However, the vast majority of the code is in the operating systems. Software interrupts are implemented in the BIOS and hardware interrupts are implemented in the hardware. The APIs serve as windows through which "well-behaved" applications and commands access functions that reside at the platform layer. Applications that are not well behaved directly address hardware.
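To make the application-layer versus platform-layer distinction concrete, the following minimal C sketch contrasts a "well-behaved" date query made through the ANSI C API with a direct read of the CMOS Real-Time Clock, the kind of access a badly behaved program might make. The choice of C and the Linux port-I/O calls used for the hardware path are assumptions for illustration, not anything prescribed by this article.

/* Minimal sketch: well-behaved API query vs. direct RTC access.
 * The direct-hardware path assumes an x86 Linux host and root
 * privileges; on other systems, compile the API path only.       */
#include <stdio.h>
#include <time.h>
#ifdef __linux__
#include <sys/io.h>                 /* ioperm(), outb(), inb()     */
#endif

int main(void)
{
    /* Application-layer path: the ANSI C API is responsible for
     * delivering a correct four-digit year.                       */
    time_t now = time(NULL);
    struct tm *tm = localtime(&now);
    printf("API year: %d\n", tm->tm_year + 1900);

#ifdef __linux__
    /* Platform-layer path a badly behaved program might take:
     * read CMOS register 0x09 directly. It normally holds only a
     * two-digit BCD year, so the century must be inferred elsewhere. */
    if (ioperm(0x70, 2, 1) == 0) {
        outb(0x09, 0x70);               /* select the RTC year register */
        unsigned char bcd = inb(0x71);  /* e.g. 0x99 for 1999           */
        printf("RTC year register (BCD): %02x\n", bcd);
    }
#endif
    return 0;
}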

Now, keeping in mind the four possible points of failure, look again at the above examples and focus on what your Y2K certification criteria need to be to minimize Y2K risks for your organization.

What About Time?

The above definitions focus only on date (month, day, and year) and ignore time (hour, minute, and second) related issues. Does your organization need to focus on correctness of time?

What About Duration?

The above definitions concentrate, for the most part, only on the years 1999 and 2000. Applications are likely to span a twenty- to thirty-year horizon driven by long-term contracts, regulatory agencies, and the general needs of the business. Organizations with such commitments need to focus on the correctness of time and date information across their entire duration.

What About Formats And Variations?

The above definitions focus only on local time. This may be acceptable if and only if your organization does business exclusively in one time zone. Interstate and global organizations need to focus on the correctness of local time, Coordinated Universal Time (UTC), leap years, time zones, and daylight saving time.

What About Programming Interfaces?

As shown above, applications and system commands access functions within the platform layer through APIs. On operating systems such as Windows 95 and NT, there are approximately 40 time and date-related APIs defined by Microsoft, IBM, and the ANSI, POSIX, SPEC 1170, and XPG4 standards. Unless these 40 APIs function correctly, exactly as the standards define them, across the span of time relevant to your organization, the probability is very high that you will experience platform-related Y2K failures.
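As an illustration of what "functioning correctly as defined by the standards" means in practice, here is a minimal sketch that round-trips a post-1999 leap day through three of the ANSI C APIs (mktime, localtime, strftime). The specific test date and messages are illustrative assumptions; a full certification suite would cover all of the roughly 40 interfaces mentioned above.

/* Sketch: cross-check a handful of ANSI C date/time APIs against
 * one another for a single test date (the century leap day).      */
#include <stdio.h>
#include <string.h>
#include <time.h>

int main(void)
{
    struct tm in = {0};
    in.tm_year  = 100;          /* years since 1900 -> 2000        */
    in.tm_mon   = 1;            /* February                        */
    in.tm_mday  = 29;           /* 2000 is a leap year             */
    in.tm_hour  = 12;
    in.tm_isdst = -1;

    time_t t = mktime(&in);     /* API under test #1               */
    if (t == (time_t)-1) { puts("FAIL: mktime rejected 2000-02-29"); return 1; }

    struct tm *out = localtime(&t);               /* API under test #2 */
    char buf[64];
    strftime(buf, sizeof buf, "%Y-%m-%d", out);   /* API under test #3 */
    printf("round trip: %s\n", buf);

    /* A compliant platform reproduces the four-digit year and the
     * leap day; a failure here is a platform-layer failure, not an
     * application bug.                                             */
    if (strcmp(buf, "2000-02-29") != 0) puts("FAIL: date did not survive round trip");
    return 0;
}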

There are many techniques that can be used to test for Y2K compliance. Real-Time Clocks first appeared in IBM-compatible PCs in 1984, and since then it has been common knowledge that many Real-Time Clocks are flawed relative to Year 2000 rollover. As a result, several years ago some BIOS vendors built logic into their products to correct selected errors in certain Real-Time Clocks. Unfortunately, many BIOSs have known time and date-related flaws of their own, so correcting code has in turn been added to some operating systems to compensate for known errors in selected BIOSs and Real-Time Clocks.
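The correcting code described above is typically a century-inference (windowing) rule layered on top of the RTC's two-digit year, along the lines of the sketch below. The pivot value of 80 is purely an illustrative assumption; each BIOS and operating system vendor chose its own pivot.

#include <stdio.h>

/* Illustrative windowing rule: infer a century from a two-digit
 * RTC year. The pivot (80) is an assumption for illustration only. */
static int infer_full_year(int two_digit_year)
{
    return (two_digit_year >= 80) ? 1900 + two_digit_year
                                  : 2000 + two_digit_year;
}

int main(void)
{
    printf("99 -> %d, 00 -> %d\n", infer_full_year(99), infer_full_year(0));
    return 0;
}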

Testing the RTC is at best a partial test for Y2K compliance. It has some relevance on older systems running PC DOS or Windows 3.x, but it may have no relevance on newer systems. Relied on alone, it is a very high-risk testing methodology because it exercises none of the published APIs. Over 95 percent of time and date failures are in the APIs and have nothing to do with the Real-Time Clock.

Some Y2K testing procedures rely only on the use of system commands. However, system commands are typically designed to mask known operating system API flaws, and at best they exercise only a small percentage of the known API interfaces. It is therefore very high risk to rely on system commands to validate Y2K compliance.

Real-Time Testing Using APIs

Any reliable testing methodology must begin by testing all APIs, because it is impossible to predict which APIs are used by your applications and system commands. 32-bit platforms are designed to perform time and date-related calculations and manipulations correctly from 1/1/1970 00:00:00 through 1/19/2038 03:14:07 (2,147,483,647 seconds) Coordinated Universal Time. If you exercise each of the APIs across this span of time, the distribution of time and date failures will vary considerably by platform. 1999 and the Year 2000 are only two sampling periods that commonly display failures; many systems pass Year 1999 and Year 2000 tests but display fatal errors during other time periods. It is very high risk not to adopt a testing methodology that tests all of the APIs across the entire spectrum of time that is important to your business units.
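A minimal sketch of such a sweep follows, assuming C and the standard library gmtime/strftime APIs. The sample instants are illustrative only; a real suite would exercise every date-related API at many more points across the span.

/* Sketch: table-driven sweep of sample instants across the 32-bit
 * time_t range, including points well beyond the 1999/2000 rollover. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* seconds since 1/1/1970 00:00:00 UTC */
    const time_t samples[] = {
        0,            /* 1970-01-01 00:00:00  epoch               */
        946684799,    /* 1999-12-31 23:59:59  last pre-rollover   */
        946684800,    /* 2000-01-01 00:00:00  rollover            */
        951782400,    /* 2000-02-29 00:00:00  century leap day    */
        2147483647    /* 2038-01-19 03:14:07  32-bit time_t limit */
    };
    char buf[64];
    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++) {
        struct tm *tm = gmtime(&samples[i]);          /* API under test */
        if (tm == NULL || !strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", tm)) {
            printf("FAIL at sample %lu\n", (unsigned long)samples[i]);
            continue;
        }
        printf("%10lu -> %s\n", (unsigned long)samples[i], buf);
    }
    return 0;
}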

Rollover Testing

There are three possible rollover failure scenarios.

1. Dynamic rollover failure: Failure to recognize, in real-time mode, that rollover has occurred and to make the proper adjustments. Generally speaking, these are operating system failures, and they are the most serious type of rollover failure. Dynamic rollover tests are properly performed by setting the system time shortly before 12/31/1999 23:59:59, putting the system to sleep until after rollover has occurred, and then exercising the APIs that could be affected by rollover (a sketch of such a test follows this list). Most, but not all, of the 40 APIs mentioned in the preceding discussion have some possibility of failure due to dynamic rollover.

2. Reboot failure: Failure to retain proper time and date information on reboot without power-off and power-on. If the system passes all of the dynamic rollover tests and passes the Real-Time Clock test, it should not fail this test.

3. Power-on failure: Failure to retain proper time and date after power-off and power-on. This failure should only be seen on systems that fail the Real-Time Clock test or on systems in which the battery powering the Real-Time Clock is dead. Before concluding that the system will actually fail after power-on, a simple manual check using an appropriate system command should be performed to verify the result: even though the Real-Time Clock displayed an error before power-off, the BIOS and/or operating system may have corrected that error on reboot. Unfortunately, this check cannot be automated.
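The following is a minimal sketch of the dynamic rollover test from item 1, assuming a Win32 host and sufficient privilege to set the system clock. A production suite would first save and later restore the real time, and would exercise all of the date-related APIs after rollover rather than a single query.

/* Sketch: dynamic rollover test (Win32). Set the clock just before
 * midnight 12/31/1999 UTC, sleep through the rollover, then query
 * the date again through the API. Requires administrator privilege. */
#include <stdio.h>
#include <windows.h>

int main(void)
{
    SYSTEMTIME st = {0};
    st.wYear = 1999; st.wMonth = 12; st.wDay = 31;
    st.wHour = 23;   st.wMinute = 59; st.wSecond = 55;

    if (!SetSystemTime(&st)) {           /* needs sufficient privilege */
        printf("could not set system time (error %lu)\n", GetLastError());
        return 1;
    }
    Sleep(10000);                        /* sleep through the rollover */

    GetSystemTime(&st);                  /* API under test             */
    printf("after rollover: %04u-%02u-%02u\n", st.wYear, st.wMonth, st.wDay);
    return 0;
}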

Y2K testing is all about managing risk. The testing methodology you choose will directly impact the level of risk to which your organization is exposed. Clearly, no testing methodology will identify 100 percent of the risks. However, choosing the right methodology will dramatically improve your probability of success.

Ralph B. Johnson is vice president of sales and marketing at Client/Server Technologies (Alpharetta, GA).
