
Anatomy of a recovery.

Flickering lights just before noon were the first indication of a power problem one Monday last August. Less than two hours later, a fire at a Consolidated Edison substation in lower Manhattan caused a widespread power failure covering 70 square blocks and left us without the use of our headquarters and data center for six days.

As a fire insurance underwriter with 60,000 policyholders, New York Property Insurance Underwriting Association (NYPIUA) has significant experience in managing risk. That experience is also the reason we have had a disaster recovery plan in place for the past several years.

The biggest uncertainty we faced at the onset of the crisis was determining how long the outage would last. Our major concern was the need to process policy renewals and deposit cash into the bank.

Utility officials at first said we could expect power back by Tuesday, then later told us Wednesday.

Because we can tolerate a certain amount of data entry delay, we had some leeway in dealing with the crisis and did not need to make an immediate disaster declaration. An IBM Business Recovery Services customer representative had, in fact, called us to ask if we needed assistance soon after the first reports of the fire were carried by the media.

We set up a local command center in a building with power near our headquarters to coordinate activities during the crisis. We also arranged to pick up our mail at the post office and contracted with an answering service to provide phone coverage.

Power still down

By Wednesday, however, it was apparent the utility couldn't predict when power would be restored. We notified our board of directors and the state insurance department we were activating our disaster recovery plan.

A team of six of our information systems staff met at our darkened offices to gather backup application and data tapes. Since the team was literally working in the dark, one of the first additions made to our formal recovery plan was a note to secure additional emergency lanterns and flashlights.

Our recovery team then loaded the tapes into a rented van and set out for IBM's hot site facility in Franklin Lakes, N.J., a short drive from Manhattan.

At about 7 p.m. that evening, the NYPIUA staff began restoring the MVS operating system for our IBM 4381 mainframe, completing that phase of the recovery by midnight. A contingent of systems programmers, network specialists, administrators and managers was waiting at the hot site to assist us.

Data restoration

Restoration of the remaining data began the next morning, and by 5 p.m. Thursday the system was fully restored.

Online production proceeded so smoothly that on Friday morning we decided to call in more NYPIUA staff to Franklin Lakes to process new business applications and endorsements. This was in addition to our ongoing objective of processing renewals and cash.

Batch processing resumed at the hot site Friday night, with additional cycles run over the weekend.

Power was restored to our headquarters late Sunday, but we chose to remain at the hot site all day Monday while our data processing staff ran checks on the system in New York. Normal operations resumed the following day.

In all, we used six hot site days.

Our success during the crisis was due to extensive planning and preparation and the dedication of both the NYPIUA and IBM hot site staffs.

For a number of years, we had operated without a recovery plan. At the time, the thinking was that our small size negated the need for a formal plan. Cost was also a concern.

But over the last 10 years, we have evolved from a paper-oriented environment to one that is very sophisticated and technology-dependent.

Almost all of our 64 employees have online terminals at their desks and constantly interact with the computer system. We came to realize the importance of protecting the integrity of that system and its impact on the bottom line.

Our original disaster coverage was provided through a service bureau, but we had become increasingly dissatisfied with its performance.

In mid-1989, we investigated disaster recovery offerings by several vendors and decided on IBM. Its Business Recovery Services was new, but the company had more than 30 years' experience managing disaster recovery internally.

We came away with a more comprehensive plan, with a higher level of service, at a lower price.

What was especially attractive was the flexibility of its terms and conditions, and the fact that no disaster declaration fees are assessed.

The theory behind this policy is that crucial time may be lost while an executive debates the severity of a disaster and whether it justifies incurring the declaration fee. Such a delay introduces unnecessary risk into the decision.

Some other recovery vendors charge declaration fees of $5,000 to $25,000, regardless of whether the recovery plan is ultimately implemented.

Plan assumptions

The recovery plan we developed was based upon the assumption of limited disruptions to our data processing capabilities, such as fires and floods. We never anticipated losing the entire system, especially due to an extended power outage.

Our plans have always stressed, however, the requirement to continue operations in the face of any catastrophe in order to give our clients the impression that it is "business as usual." And it worked out that way. We had a plan and we were organized.

I'd say 85 percent of the recovery went according to expectations and we improvised the rest. We really didn't miss a beat.

The feedback I received from my staff upon their return to New York was most upbeat. My staff's comfort level was also extremely high because they were essentially operating our own system; only the location was different.

We didn't have to make any adjustments in our procedures because the systems programmers at Franklin Lakes configured the system to be a mirror image of our own.

In a test this past October at the Franklin Lakes hot site, we built upon our experiences in August and finalized some technical aspects of the plan. We will also be updating and testing our recovery plan at least twice a year and rotating all members of our IS staff through the process.

Perhaps the greatest testament to just how well the recovery went came when one of our public board members remarked that if we hadn't notified him about the crisis, he would never have known.
COPYRIGHT 1991 Nelson Publishing

Article Details
Title Annotation: Disaster Recovery
Author: John Rusnak
Publication: Communications News
Date: April 1, 1991
