
Backup and disaster recovery techniques: a look back and forward.

If you used backup and disaster recovery techniques from just five years ago in today's datacenter, you simply wouldn't get the job done. If you tried techniques from ten years ago, you couldn't even get started. To begin with, the average size of a large datacenter has grown from hundreds of gigabytes to hundreds of terabytes, and some have grown to over a petabyte. In addition, the number of applications has grown. Finally, recovery expectations have increased as well. The storage industry and storage managers have done their best to keep up with the demand, but not every advancement has helped.

The basic defense ten or so years ago was locally attached tape drives and native backup and recovery tools. This proved quite difficult to manage, which led to the advent of network-based backup programs; they have dominated the backup and recovery market for several years now. The basic design of such a system is one or more tape drives behind a backup server that backs up all other clients across the network.

While the network backup server solved a lot of problems, the growth in the average size of a datacenter created more. The first line of defense was faster and larger tape drives. Tape drives ten years ago had native speeds of 2-5 MB/s and native capacities of around 10 GB. The latest tape drives have native speeds of 80+ MB/s and native capacities of 200+ GB. This is both good news and bad news. While these tape drives can handle much more data, a single tape drive is now faster than what a single backup server can feed it. At an average compression ratio of 1.5:1, an 80 MB/s tape drive becomes a 120 MB/s tape drive, and that is well beyond the capability of a Gigabit Ethernet connection.
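The mismatch is simple arithmetic. A minimal Python sketch, using the figures quoted above and assuming Gigabit Ethernet's theoretical ceiling of roughly 125 MB/s (real-world numbers vary by drive and network):

```python
# Back-of-the-envelope math: modern tape drive vs. Gigabit Ethernet.
# Figures are the ones quoted in the text above.

native_speed_mb_s = 80       # native (uncompressed) tape drive speed, MB/s
compression_ratio = 1.5      # average compression ratio quoted above

# With 1.5:1 compression, the drive consumes uncompressed data 1.5x faster.
effective_speed_mb_s = native_speed_mb_s * compression_ratio
print(f"Effective drive speed: {effective_speed_mb_s:.0f} MB/s")  # 120 MB/s

# Gigabit Ethernet tops out near 125 MB/s in theory; 50-75 MB/s is realistic.
gige_theoretical_mb_s = 1000 / 8   # 125 MB/s
gige_realistic_mb_s = 60           # assumed midpoint of the 50-75 MB/s range
print(f"Drive outruns a realistic GigE feed by "
      f"{effective_speed_mb_s / gige_realistic_mb_s:.1f}x")
```

Even against the theoretical GigE ceiling, the compressed drive is starved; against realistic throughput, it sits idle most of the time.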

The first challenge with tape drives that are faster than a network connection is that simply buying bigger and faster drives will no longer let us back up larger systems across the network. This brings us to our next advancement: LAN-free backups. It's essentially a return to the locally attached tape drives of ten years ago, with the addition of centralized scheduling, reporting, and indexing, making it much easier to manage. And, unlike the locally attached tape drives of ten years ago, these tape drives are shared via the SAN.

The second challenge with such fast tape drives is that you cannot stream a modern tape drive that's placed behind a network backup server. It's been difficult to stream tape drives across the network for a while, but now it's impossible. You simply cannot stream a 120 MB/s tape drive with a 50-75 MB/s Ethernet connection. The answer to this has been disk-to-disk-to-tape (D2D2T) backups, where backups are first sent to disk, and then sent to tape for offsite storage. Several systems can be backed up simultaneously, resulting in several serialized images residing on a disk locally attached to backup server. These backups can then be easily copied, cloned, migrated or duplicated to tape for offsite storage. (The virtual tape cartridge [VTC] will soon allow even the offsite "tape" to actually be disk.)
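The D2D2T flow described above can be sketched as follows; every name here is illustrative, not any real backup product's API:

```python
# Minimal sketch of D2D2T: many clients back up concurrently to a disk
# staging pool, then the staged images are copied serially to tape for
# offsite storage. Lists stand in for the disk pool and the tape device.

disk_pool = []   # fast, random-access staging area on the backup server
tape = []        # sequential device: written one image at a time

def backup_to_disk(client, image):
    """Step 1 (disk-to-disk): clients can write to disk simultaneously."""
    disk_pool.append((client, image))

def migrate_to_tape():
    """Step 2 (disk-to-tape): serialized copy of staged images to tape."""
    while disk_pool:
        tape.append(disk_pool.pop(0))

for client in ("web01", "db01", "mail01"):
    backup_to_disk(client, f"{client}-full-image")

migrate_to_tape()
print([c for c, _ in tape])   # images now on tape, ready to ship offsite
```

The key point the sketch illustrates: the slow network writes land on disk, which doesn't mind being trickled to, and the fast tape drive is fed at full speed from local disk.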

To deal with the ever increasing complexity of today's datacenters, backup software products and application vendors have also created a bevy of special purpose agents to handle different scenarios. There are database agents, image-based agents to back up millions of files as one large image, block-level incremental backup agents to increase the speed of the image-based agents, open-file agents, and a host of other similar agents. While each agent solves a particular problem, it also adds to the complexity of the backup system.

Backup systems have gotten faster and more reliable over the years as the commercial backup hardware and software market has matured. However, there are some problems that traditional backup software simply cannot solve, starting with backups of remote sites. Remote backup systems are hard to manage, and proper off-site practices require a contract with a vaulting vendor for every remote site--a costly proposition. The second challenge is that some recovery time objectives (RTOs) and recovery point objectives (RPOs) are impossible to meet with traditional backup. For example, how would you use a traditional backup system to recover a 1 TB system in fifteen minutes, without losing more than five minutes' worth of data? Good luck. The final challenge with traditional backup systems is their complete inability to create consistency groups; that is, they cannot restore multiple systems to the same point in time--a basic requirement in all disaster recovery systems.
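To see why the 1 TB, fifteen-minute example is so hard, it helps to work the numbers (straight arithmetic from the example above, using decimal units):

```python
# What a 15-minute RTO on a 1 TB system implies for restore throughput.

data_tb = 1
rto_minutes = 15

data_mb = data_tb * 1_000_000            # 1 TB = 1,000,000 MB (decimal)
required_mb_s = data_mb / (rto_minutes * 60)
print(f"Sustained restore speed needed: {required_mb_s:.0f} MB/s")

# For scale: that is roughly nine of the 120 MB/s tape drives described
# above, all restoring in perfect parallel, with no seek or mount time.
drives_needed = required_mb_s / 120
print(f"Equivalent 120 MB/s drives: {drives_needed:.1f}")
```

And that figure ignores tape mounts, positioning, and the fact that a restore must also be written somewhere at the same rate.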

These challenges are why most DR planners have switched from tape or virtual tape backups to replication for DR purposes. Historically, only replication could meet the challenges of today's DR systems, but in recent years that has changed. There are now three advanced backup and recovery methods that can perform both operational recovery and disaster recovery with a single system.

Replication coupled with snapshots.

The first advanced method is replication coupled with snapshots. Snapshots provide the historical aspect needed for operational recovery, and replication provides the ability to get data offsite and make it available for immediate use without a restore. This is the most common of the three advanced methods, with hundreds of customers using it to provide on-site and off-site backups without moving tape anywhere. (Sometimes tapes are created off-site for longer-term storage, but these tapes can stay where they are, since they're already off-site.)
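A toy sketch of how the two pieces fit together, assuming synchronous replication and full-copy snapshots for simplicity (real systems replicate and snapshot at the block layer, usually with copy-on-write):

```python
# Snapshots give history for operational recovery; replication gives an
# offsite copy that is usable immediately, with no restore. Dicts stand
# in for volumes; every name here is illustrative.

primary = {}
offsite_replica = {}   # kept in sync by replication
snapshots = {}         # snapshot name -> frozen point-in-time view

def write(path, contents):
    primary[path] = contents
    offsite_replica[path] = contents   # synchronous replication, for simplicity

def take_snapshot(name):
    snapshots[name] = dict(primary)    # point-in-time copy (COW in real systems)

write("orders.db", "v1")
take_snapshot("monday-2am")
write("orders.db", "v2-corrupted")     # oops: corruption replicates too

# Operational recovery: the snapshot still holds the good version...
print(snapshots["monday-2am"]["orders.db"])
# ...while the offsite replica stands ready for immediate use after a disaster.
print(offsite_replica == primary)
```

Note what the sketch makes visible: replication alone faithfully copies the corruption offsite, which is exactly why the snapshots are needed for the historical, operational-recovery side.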

Object-based backup and delta-block incremental backup.

The second advanced method uses object-based backup and delta-block incremental backup. When a traditional backup system backs up a changed file, it backs up the entire file; both of these types of systems back up only the blocks that have changed in that file. An object-based backup system saves even more space and bandwidth by backing up only files it has never seen before. If a file (e.g. COMMAND.COM) has already been backed up on another system, it just stores a pointer to that backup. It's important to understand that both systems store their backups in such a way that they can restore data as fast as (if not faster than) a traditional backup system. Some can even present a mountable image that can be used for business continuity while you're restoring the production system.
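A minimal sketch of the object-based idea, using a content hash to detect files that have been seen before (the hash-based scheme is an assumption for illustration, not any particular vendor's format):

```python
# Store each unique file body once, keyed by its content hash; repeats
# (e.g. COMMAND.COM on every client) cost only a catalog pointer.

import hashlib

store = {}      # content hash -> file bytes (each unique body stored once)
catalog = []    # per-client records: (client, path, content hash)

def backup_file(client, path, data: bytes):
    digest = hashlib.sha256(data).hexdigest()
    if digest not in store:        # only never-before-seen content is sent
        store[digest] = data
    catalog.append((client, path, digest))   # every file gets a pointer

def restore_file(client, path):
    """Restores are a direct lookup -- no slower than a full-file backup."""
    for c, p, digest in catalog:
        if (c, p) == (client, path):
            return store[digest]

common = b"standard OS file contents"
backup_file("host-a", r"C:\COMMAND.COM", common)
backup_file("host-b", r"C:\COMMAND.COM", common)   # second copy: pointer only

print(len(store), len(catalog))   # one stored object, two catalog entries
```

The same pointer structure is why restore speed doesn't suffer: finding a file is a catalog lookup plus one read, regardless of how many clients share the content.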

Continuous data protection.

Finally, there are continuous data protection (CDP) systems that act like replication with the ability to go back in time. Like replication, a CDP system copies blocks to the backup system as soon as they're changed on the client. But where replication systems overwrite blocks on the destination device when they're changed on the source device, CDP systems store the data in a log that allows them to present any point in time for recovery purposes. CDP systems can perform fast recoveries by restoring just the blocks that have changed, and instant recoveries by presenting a mountable volume for business continuity purposes, just like snapshots and some object-based backup systems.
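A toy sketch of the CDP log idea described above (illustrative only; real CDP systems operate on disk blocks with hardware timestamps, and replay far more efficiently than this):

```python
# Instead of overwriting blocks on the destination (as replication does),
# every changed block is appended to a time-stamped log, so ANY point in
# time can be presented for recovery.

cdp_log = []   # (timestamp, block_number, new_contents), in arrival order

def on_block_changed(ts, block, data):
    cdp_log.append((ts, block, data))   # never overwrite; just append

def volume_at(ts):
    """Replay the log up to ts to present that point in time."""
    volume = {}
    for t, block, data in cdp_log:
        if t <= ts:
            volume[block] = data
    return volume

on_block_changed(100, 7, "good data")
on_block_changed(200, 7, "corruption!")

print(volume_at(150))   # {7: 'good data'} -- just before the damage
print(volume_at(250))   # {7: 'corruption!'} -- what plain replication keeps
```

Plain replication is equivalent to only ever being able to ask for `volume_at(now)`; the log is what turns it into a backup system.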

Over time, these new backup methods should be adopted as specialized agents for more traditional backup systems. This will bring the much needed benefits of centralized scheduling, reporting, and management to these wonderful new technologies.

In summary, traditional backup systems are being enhanced with D2D2T systems, LAN-free backups, and specialized agents. However, even these enhancements cannot meet some recovery requirements, such as remote-site backups, aggressive RTOs and RPOs, and consistency groups for DR. Therefore, some customers are now meeting these requirements with snapshot/replication-based backup, object-based backup, delta-block backup, and continuous data protection systems. Hopefully these advanced systems will become more readily available as the need for them grows even more widespread.

W. Curtis Preston is vice president of Data Protection at GlassHouse Technologies, Inc. (Framingham, MA).

COPYRIGHT 2005 West World Productions, Inc.
No portion of this article can be reproduced without the express written permission from the copyright holder.
Copyright 2005, Gale Group. All rights reserved. Gale Group is a Thomson Corporation Company.

Article Details
Title Annotation:first in/first out
Author:Preston, W. Curtis
Publication:Computer Technology Review
Geographic Code:1USA
Date:Oct 1, 2005

