
Disaster recovery for the masses: the role of OS-level server virtualization in disaster recovery.

Disaster recovery is a hot topic again. Events like Hurricane Katrina tend to make organizations reflect on their vulnerabilities--including asking their IT departments about that disaster recovery project started (but never quite completed) back in 2002.

Disaster recovery planning is quite a complex undertaking. It includes considerations such as evaluating each application and data source to determine both a Recovery Point Objective (RPO) and a Recovery Time Objective (RTO). RPO is the amount of data that can be lost before it affects the organization, and RTO is the time it takes to recover and restart the server and application. Together, these provide guidelines for how current the data must be and how fast it must be recovered. In actuality, most organizations discover that only a very few (if any) applications require an always available, 0 RPO/RTO rating. According to a 2005 Gartner survey, only 18 percent of businesses implement transaction-level replication to achieve 100 percent uptime for any application. Since these mission critical, always available applications are only a subset of each business's total applications, the overall percentage of applications treated this way is much smaller still.

Once the data has been classified, the job becomes even harder: different solutions and extremely costly software and infrastructure are involved in maintaining and managing all of the components. Faced with these complexities and costs, many businesses decide that comprehensive disaster recovery is too daunting, and instead opt for minimal disaster recovery planning and implementation that covers only the most critical applications and data.

Protecting the Business--Without Going Out Of Business

Traditionally, there are three approaches to disaster recovery (a short classification sketch follows the list):

* Always Available configurations (RPO/RTO 0), which include expensive duplicate servers (often in different data centers) and replicated content. Traffic latency and slowdown are often an issue with replication over distances, so many organizations that make this heavy investment opt for a Storage Area Network (SAN) connected over a Wide Area Network (WAN) to serve this traffic, further increasing the costs.

* Fast Recovery configurations (RPO/RTO 1-12 hours), which include standby hardware that has data replicated at an acceptable RPO interval and can be activated within the required RTO.

* Backup Recovery configurations (RPO/RTO > 12 hours), which simply involve recovering a server from the last available backup copy. The backup recovery option is by far the slowest and least efficient.
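
To make the classification concrete, the short Python sketch below maps an application's RPO and RTO targets onto the three tiers above. It is an illustration only: the function name, the example inventory, and the choice to key the tier off the stricter of the two objectives are assumptions, not part of any product or standard.

def classify_dr_tier(rpo_hours: float, rto_hours: float) -> str:
    # Use the stricter (smaller) of the two objectives as a conservative
    # proxy: a mixed requirement generally needs the capabilities of the
    # stricter tier.
    strictest = min(rpo_hours, rto_hours)
    if strictest == 0:
        return "Always Available"   # duplicate servers and replicated content
    if strictest <= 12:
        return "Fast Recovery"      # standby hardware, periodic replication
    return "Backup Recovery"        # restore from the last available backup

# Hypothetical application inventory with (RPO, RTO) targets in hours.
inventory = {
    "order-processing": (0, 0),     # mission critical: no data or time loss
    "intranet-wiki": (4, 8),        # a few hours of loss is acceptable
    "test-environment": (24, 48),   # rebuild from backup is acceptable
}

for app, (rpo, rto) in inventory.items():
    print(app, "->", classify_dr_tier(rpo, rto))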

Common disaster recovery solutions today tend to provide for the always available, full disaster recovery scenario. Most organizations don't require this type of availability, and if they do, it is for a very limited number of servers or applications.

That said, most IT organizations recognize that some level of disaster recovery planning and implementation is a requirement, even if their business doesn't happen to be located in Hurricane Alley. According to a Gartner study regarding unplanned downtime, true disasters only make up a small component of environmental failures, which are in themselves a small subset of all unplanned downtime.

All businesses are constantly at risk of outages, and disaster recovery isn't just important for a catastrophic data center loss: each server and application should be protected. The problem then becomes the available options for disaster recovery planning, which tend to be expensive and to overshoot the RPO and RTO goals the organization actually has for a given application or server. The search for a cost-effective solution with acceptable recovery times becomes the focus for most organizations.

Disaster Recovery with OS-level Virtualization

Operating system-level server virtualization solutions (such as SWsoft's Virtuozzo) hold an answer. These solutions utilize a different architecture and offer lower overhead than typical virtualization technologies, creating isolated and secure virtual servers on a standard Linux or Windows operating system on a single physical server. For this reason, they are an ideal component of disaster recovery solutions, providing both the lowest cost and the highest density virtualized servers available.

Many supporting technologies required for a disaster recovery solution work well in conjunction with OS-level virtualization. The virtualization software manages the virtual infrastructure and provides many flexible options and capabilities that support a disaster recovery environment and address the RTO component of the solution or plan. The next step is managing the data from the original server. There are many available solutions and capabilities that reflect different levels of RPOs.

What are the basic components of a disaster recovery solution? (A small sketch of these components follows the list.)

* The originating server/application/data.

* The technology to replicate or backup the application and data.

* The ability to recover or failover the server.
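
As a purely illustrative way to keep track of these three components, a disaster recovery plan entry could be modeled along the following lines; the Python structure and field names below are hypothetical, not part of any product.

from dataclasses import dataclass

@dataclass
class DRPlanEntry:
    """One protected workload: what it is, how its data is copied, where it recovers."""
    source_server: str        # the originating server/application/data
    replication_method: str   # e.g. "SAN replication", "database replication", "backup"
    recovery_target: str      # the standby virtualized server to fail over to
    rpo_hours: float          # how much data loss is tolerable
    rto_hours: float          # how long recovery may take

plan = [
    DRPlanEntry("db01.example.com", "database replication",
                "container 101 on the recovery host", rpo_hours=1, rto_hours=4),
    DRPlanEntry("web01.example.com", "nightly backup",
                "container 102 on the recovery host", rpo_hours=24, rto_hours=24),
]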

OS-level Virtualization and Always Available Configurations: OS-level virtualization can be a crucial part of an always available disaster recovery solution. The originating server, housed on a SAN, may use the SAN to replicate to a virtualized server. If the originating server fails, all traffic is rerouted to the still-available virtualized server. While "always available" remains a subset of disaster recovery solutions reserved for mission critical applications, this approach can bring the recovery server cost component down considerably.
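
A simplified sketch of the failover step might look like the following Python fragment, which watches the originating server and redirects clients to the standby virtualized server when it stops responding. The hostnames, port, and the redirect stub are placeholders; in practice the redirect would be a virtual IP takeover, a DNS change, or a load balancer update.

import socket
import time

PRIMARY = ("primary.example.com", 3306)            # assumed originating server
STANDBY_HOST = "standby-ct101.example.com"         # assumed virtualized replica

def is_alive(host: str, port: int, timeout: float = 3.0) -> bool:
    # A TCP connect is a crude but simple liveness check.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def redirect_traffic_to(host: str) -> None:
    # Placeholder: update a virtual IP, DNS record, or load balancer pool.
    print("failover: directing clients to", host)

if __name__ == "__main__":
    while True:
        if not is_alive(*PRIMARY):
            redirect_traffic_to(STANDBY_HOST)
            break
        time.sleep(10)   # poll interval; tune to the application's RTO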

OS-level Virtualization and Fast Recovery Configurations: Fast recovery solutions are gaining the most attention now, as companies determine that they can tolerate some data and time loss in their systems. OS-level server virtualization is ideal for fast recovery solutions because the virtualized servers reside on top of a running operating system. The footprint of a virtualized server is small, so its recovery time is simply the time needed to load the application and data into memory, with no time at all required to start the OS.
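
As a rough illustration, activating a dormant standby server can be as simple as starting its container on the recovery host. The fragment below assumes an OpenVZ/Virtuozzo-style vzctl command line and a pre-built replica with a placeholder container ID.

import subprocess

STANDBY_CONTAINER_ID = "101"   # assumed ID of the pre-built replica

def activate_standby(container_id: str) -> None:
    # Starting the container skips the OS boot entirely: the host OS is
    # already running, so recovery time is dominated by application startup.
    subprocess.run(["vzctl", "start", container_id], check=True)

if __name__ == "__main__":
    activate_standby(STANDBY_CONTAINER_ID)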

SMS Central, a premium mobile solutions provider, has implemented a fast recovery solution based on this model. SMS Central has several locations with disparate servers. The company made low cost, replicated virtualized copies of each, with five virtualized servers running on a single physical server. One of the original application servers was a MySQL server that uses the virtualization software's replication capabilities to maintain data integrity within the prescribed RPO. In the event of a failure, the more critical applications are kept operational so their data stays loaded in memory, while less critical ones are left dormant, or started only when necessary. This keeps management simple and preserves adequate system resources on the recovery server. The disaster planning scenario also netted the company an 85 percent reduction in new server infrastructure costs as an added benefit of virtualization.

In a fast recovery configuration, the operating system runs continuously at minimal cost. Since many virtualized servers can reside on a single physical server, redundant servers become extremely cost effective. Because standby virtualized servers are not running, the number of them a recovery server can hold far exceeds the number it could support if they were all operational.

Replication technologies are becoming more accessible as applications themselves try to meet organizational requirements for disaster recovery. Databases are a typical example: they provide their own replication capabilities. Data replication intervals can be set according to the appropriate RPO and replicated to the virtualized instance of the server. Again, the virtualized server doesn't actually need to be running and consuming computing resources, so many applications and data sets can be maintained on a single physical server.
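
For example, a simple check of replication health against the agreed RPO might look like the following sketch, which assumes a MySQL primary/replica pair and the PyMySQL client library; the credentials, hostname, and RPO threshold are placeholders.

import pymysql

RPO_SECONDS = 15 * 60   # example: this application tolerates 15 minutes of loss

def replication_lag_seconds(host: str, user: str, password: str) -> int:
    # Ask the standby (virtualized) replica how far it lags the primary.
    conn = pymysql.connect(host=host, user=user, password=password)
    try:
        with conn.cursor(pymysql.cursors.DictCursor) as cur:
            cur.execute("SHOW SLAVE STATUS")   # replica status in classic MySQL
            row = cur.fetchone()
            if row is None or row["Seconds_Behind_Master"] is None:
                raise RuntimeError("replication is not running")
            return int(row["Seconds_Behind_Master"])
    finally:
        conn.close()

if __name__ == "__main__":
    lag = replication_lag_seconds("standby-ct101.example.com", "monitor", "secret")
    status = "OK" if lag <= RPO_SECONDS else "RPO AT RISK"
    print("replication lag", lag, "seconds against RPO", RPO_SECONDS, ":", status)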

As for the components of the complete solution: the originating server is on the same network as the virtualized server; either SAN or application replication is deployed to maintain data integrity within the desired RPO; and finally the virtualized server can be activated very quickly to make the service and application available. The best OS-level server virtualization solutions also provide extensive network configuration capabilities that help with the complexities of creating and duplicating servers on a network.

OS-level Virtualization and Backup Recovery Configurations: Most organizations don't even consider backup and recovery a disaster recovery solution, but for applications and data of very low importance, it may be the least expensive way to protect the remaining range of low criticality servers. Basic virtualization technology ensures that a server can be configured and deployed in seconds anywhere, on any physical server. A backup can then be restored from any media into the virtualized server.
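
A minimal sketch of this flow, assuming an OpenVZ/Virtuozzo-style vzctl command line and a tar-format backup, might look like the following; the container ID, OS template name, and backup path are placeholders, and a real restore would be more careful about replacing the container's file tree.

import subprocess

CONTAINER_ID = "210"
OS_TEMPLATE = "centos-4-x86"                      # assumed template name
BACKUP_ARCHIVE = "/backups/web01-latest.tar.gz"   # last available backup copy
PRIVATE_AREA = "/vz/private/" + CONTAINER_ID      # default OpenVZ private area

def restore_into_new_container() -> None:
    # Create an empty container from a template...
    subprocess.run(["vzctl", "create", CONTAINER_ID,
                    "--ostemplate", OS_TEMPLATE], check=True)
    # ...overlay the backed-up files onto its private area...
    subprocess.run(["tar", "-xzf", BACKUP_ARCHIVE, "-C", PRIVATE_AREA],
                   check=True)
    # ...and start the recovered server.
    subprocess.run(["vzctl", "start", CONTAINER_ID], check=True)

if __name__ == "__main__":
    restore_into_new_container()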

More to Think About....

A server may be "down" if it is unavailable due to high traffic levels or any number of other complications. When evaluating server virtualization in the context of disaster recovery, IT departments should look for capabilities that make a regular application server more flexible and better able to cope with an impending outage or disaster. An example is zero-downtime migration. Virtualization technologies separate the server and application from the complexities of the underlying hardware, and advanced solutions enable a server to be moved between any two networked servers (no SAN required) with zero downtime. It is important that any server and any application can be moved without interruption to users or service. The same goes for flexible resource management: virtualized applications should not be hindered in this area. Resources must be able to be added or reduced in real time without service interruption, and overloaded applications must quickly and easily be given more resources.
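
As a small illustration of real-time resource adjustment, the fragment below raises a running container's CPU share without restarting it, assuming an OpenVZ/Virtuozzo-style vzctl interface; the container ID and the value chosen are placeholders.

import subprocess

def boost_cpu(container_id: str, cpu_units: int) -> None:
    # cpuunits is a relative CPU weight; --save persists the change and it
    # takes effect immediately, while the container keeps running.
    subprocess.run(["vzctl", "set", container_id,
                    "--cpuunits", str(cpu_units), "--save"], check=True)

if __name__ == "__main__":
    boost_cpu("101", 4000)   # give the overloaded container a larger CPU share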

The events of 2005 forced a renewed focus on disaster planning. At the same time, the general mandate continues to be "get more with less" out of overburdened IT resources. OS-level server virtualization plays an important, and financially attractive, role in the process of preparing for--and functioning through--disaster scenarios.

Carla Safigan, Virtuozzo product manager, SWsoft (Herndon, VA).

www.swsoft.com
