
Used to be hype, but now virtualization is for real.

Storage virtualization has gotten a lot of press in the past several years, not all of it positive. It's not that virtualization isn't an extremely useful technology--it is. And it's not that virtualization hasn't been around for a long time--it has. But mention storage virtualization today, and you'll find that many potential customers are confused and resistant.

Why the industry backlash? It's tempting to lay the problem on the backs of marketing departments. So, borrowing a page from Oscar Wilde, let's avoid temptation by giving in to it. The original marketing materials on virtualization were glowing and promised comprehensive relief for system management headaches. Multiple storage devices were supposed to easily round up into a simple and easy-to-manage pool of storage, which in turn would free up administrators to play golf ... all right, that's overstating the case--it would free up administrators to do sophisticated planning and really cool new things because they'd no longer be banging their heads on their desks after trying to manually provision all of those storage devices.

To be fair to the marketing departments, the storage virtualization vendors were trying to keep up with the Joneses. The marketing staff was under serious pressure from the senior execs to sell new virtualization-based products and to sell them big. The press complicated matters by jumping on the virtualization bandwagon and further confusing everyone, largely because few of the journalists actually understood it.

This barrage of marketing and press coverage, along with the lack of point products, left many customers feeling shell-shocked. That's understandable but unfortunate, since virtualization really can help to manage real-world storage situations. These same resistant customers are slamming in storage devices left and right, and are trying to manage them under near-impossible circumstances. Storage islands abound: SANs with multiple heterogeneous servers and arrays, data networks with mirrored RAID devices, endless departmental NAS devices, tape libraries, optical storage and the still-ubiquitous direct-attached storage. Virtualization services can't make these problems go away overnight--which is what some people were expecting--but they can considerably ease management pain. That's because virtualization is an underlying technology for lots of other good things, including storage resource management (SRM), automatic provisioning, and volume management services.

Virtualization takes the tangled spaghetti of physical storage--the disks, the arrays, the tape libraries, the controllers, the switches, the hosts--and makes it look like a few easy-to-manage storage devices. This is called a logical environment. But, there are a few important caveats:

No virtualization product manages a whole network all by itself. There is no magic-pill virtualization package that will simply pull together all of a company's storage devices and make them appear as a single storage pool.

Virtualization covers a variety of services, not just storage pooling (for example, it makes hot-swapping possible by fooling the array controller into thinking that a new disk has always been there). It can also improve performance and availability by allowing an array controller to stripe and mirror, or allowing a host to manage its I/O.

Some virtualization services do extend across networks, but they must wait for widespread standards adoption before they can offer sophisticated features. Virtualization that's based on single hosts or arrays is quite sophisticated within its own territory, but can't extend beyond it.

In spite of the caveats, and in spite of the hype, misunderstandings and general all-around ill will, virtualization is still the single most important technique out there for hot-swap availability, improved I/O performance and storage pooling.
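To make the core mapping idea concrete, here is a minimal Python sketch of a virtualization layer: a table that presents one contiguous logical address space while the blocks actually live on several physical devices. The device names and extent size are illustrative assumptions, not any vendor's implementation.

    # Minimal sketch: a logical-to-physical mapping table, the heart of
    # storage virtualization. All names and sizes here are illustrative.
    EXTENT_SIZE = 1024  # blocks per extent (assumed granularity)

    class VirtualVolume:
        def __init__(self):
            # extent index -> (physical device, starting offset on it)
            self.extent_map = {}

        def map_extent(self, extent_no, device, phys_offset):
            """Back one logical extent with space on a physical device."""
            self.extent_map[extent_no] = (device, phys_offset)

        def resolve(self, logical_block):
            """Translate a logical block address to (device, physical block)."""
            extent_no, offset = divmod(logical_block, EXTENT_SIZE)
            device, phys_offset = self.extent_map[extent_no]
            return device, phys_offset + offset

    vol = VirtualVolume()
    vol.map_extent(0, "array-A:disk3", 0)     # first extent on one array
    vol.map_extent(1, "array-B:disk7", 4096)  # next extent on a different one
    print(vol.resolve(1500))  # -> ('array-B:disk7', 4572)

The application sees one flat volume; only the mapping layer knows the blocks span two arrays.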

Virtualization Happens

Virtualization in a data network has been around a long, long time. But storage virtualization is a bigger fish to fry, because storage devices largely lack common standards and interfaces. This has led to trade-offs in virtualization features: the narrower the virtualization, the more sophisticated and robust the services; the more widespread, the fewer the services.

Vendors place virtualization in several different environments, depending on the services they want to offer. These locations include hosts, arrays, network appliances and switches:

Host server. A host server dedicates its virtualization operations to the applications that it supports. The virtualization software can easily monitor application service levels and the amounts and details of stored data, making it extremely useful for application-centric storage management (storage management centered on the needs and service levels of specific applications). A host server, however, can only see and virtualize the storage devices within its sight and under its control.

Arrays, disk or tape. Also called storage subsystem virtualization, array-based products virtualize single arrays or multiple networked arrays by the same manufacturer. The array controller usually hosts the virtualization software, which intercepts users' data requests and maps them to the actual physical location of the requested data. Array-based virtualization can also automate data replication and migration and help an array to stripe and mirror across its disks. Its obvious disadvantage is that array-based virtualization doesn't work over different types of arrays, though that may change given upcoming storage standards. John Joseph, vice president of marketing at EqualLogic, commented that virtualization has been sadly lacking because the hardware components couldn't talk to each other. "The software virtualization companies couldn't deliver on their promise because the hardware companies couldn't sing from the same hymnal."
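As a hedged illustration of the striping and mirroring an array controller can do behind this mapping, the short Python sketch below round-robins stripe units across four data disks and pairs each with a mirror. The disk counts and stripe size are assumptions, not any particular vendor's geometry.

    # Illustrative controller math for striping plus mirroring (RAID 1+0
    # style). Disk counts and stripe size are assumed, not vendor-specific.
    STRIPE_BLOCKS = 64  # blocks per stripe unit
    DATA_DISKS = 4      # disks the data is striped across

    def place_block(logical_block):
        """Return (primary disk, mirror disk, offset) for a logical block."""
        stripe_unit, offset = divmod(logical_block, STRIPE_BLOCKS)
        disk = stripe_unit % DATA_DISKS            # round-robin striping
        mirror = disk + DATA_DISKS                 # each data disk has a mirror
        disk_offset = (stripe_unit // DATA_DISKS) * STRIPE_BLOCKS + offset
        return disk, mirror, disk_offset

    print(place_block(300))  # -> (0, 4, 108): disk 0, mirrored on disk 4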

Network appliances and switches. Intelligent switches and virtualization appliances host virtualization in the network instead of on single host platforms or arrays. This approach to virtualization concentrates on easing data movement like replication and migration operations, and is also good at identifying system bottlenecks. Some can also reallocate bandwidth according to application rules. This is the single best approach to handle heterogeneous devices, and will get more comprehensive as more devices use common standards like SNIA's SMI-S or the T11 Committee's T11.5 (the "intelligent switch API").

Although similar virtualization products compete with each other, competition between the different base camps is disappearing as virtualization vendors preach the gospel of integration. EMC, which provides sophisticated array-based virtualization services for its own hardware, is working closely with switch vendors to develop virtualization and volume management software for intelligent switches. It's counting on the T11.5 standard to let array-based virtualization services seamlessly interact with heterogeneous, intelligent switch networks from makers like Cisco, McData and Brocade. By combining the strengths of array-based and switch-based virtualization, partner companies can manage at both the array level (storage pooling, automated provisioning) and the network level (bandwidth allocation, and data movement such as replicating data to a cheaper, non-identical array).

However, EMC's Paul Ross, director of Storage Networks Marketing, pointed out that nothing is for free. Switch-based virtualization holds promise, but it will be a first-generation intelligence. If a company is moving from a mature host- or array-based intelligence, it will have to give up some features it takes for granted. To mitigate that problem, the switch companies are generally partnering with the storage software developers to add intelligence at the switch level. They're sticking with what they do best--making the switch hardware and microcode--while the EMCs and Veritases of the world write the software that runs above it.

Augie Gonzalez, director of product marketing at DataCore, champions the cause of network appliance-based virtualization. DataCore's SANsymphony pools storage from multi-vendor storage devices and operating systems. He said, "Virtualization does not define a product, it defines a technique. Lots of people have virtualization techniques, but they might be doing tons of different things to do it. Because they have virtualization, that doesn't make it suitable for an advanced service like network managed volumes."

SANsymphony works by granting applications the largest virtual disk that the operating system can handle, commonly 2TB. The software monitors activity on both the virtual volume and the physical volume behind it, which might be only a fraction of the virtual volume's size--maybe less than 1GB of actual space. As the application happily chews up the physical capacity--all the while thinking it has 2 whole terabytes of space--the virtualization appliance assigns more physical space to the logical volume. As the application grows to a defined threshold--say 50-60%--IT knows it's time to reprovision for the larger data store.
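A rough Python sketch of that allocate-on-demand behavior: the operating system sees the full 2TB virtual disk, physical extents are assigned only as data is written, and a warning fires at the 50% threshold mentioned above. The extent size and class names are invented for illustration; this is not DataCore's code.

    # Thin-provisioning sketch: back virtual capacity lazily, warn at 50%.
    # Sizes come from the article; everything else is illustrative.
    VIRTUAL_SIZE_GB = 2048   # the 2TB virtual disk the OS sees
    EXTENT_GB = 1            # allocation granularity (assumed)
    ALERT_THRESHOLD = 0.50   # reprovisioning warning point

    class ThinVolume:
        def __init__(self):
            self.allocated = set()  # extents with real physical backing
            self.warned = False

        def write(self, extent_no):
            self.allocated.add(extent_no)  # assign physical space lazily
            used = len(self.allocated) * EXTENT_GB / VIRTUAL_SIZE_GB
            if used >= ALERT_THRESHOLD and not self.warned:
                self.warned = True
                print(f"warning: {used:.0%} of virtual capacity is backed")

    vol = ThinVolume()
    for extent in range(1100):  # the application chews through space...
        vol.write(extent)       # ...and the alert fires at the 50% mark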

EqualLogic makes an array-based virtualization tool for iSCSI storage networks. The virtualization engine, which is based in the array controller, pools storage and manages RAID environments across multiple EqualLogic arrays. When the array reaches a capacity limit, it notifies the administrator to either move the volume to a larger array (assisted provisioning) or to allow the array to grow that capacity (automated provisioning). Like Fibre Channel network administrators, iSCSI customers are very sensitive to the need to control data location; they are often nervous that a virtualization feature, by presenting a logical instead of an actual physical view, means they won't be able to locate failure points. Virtualization products that present logical storage views must still allow administrators to find and fix physical failures.
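The requirement in that last sentence is easy to sketch: the virtualization layer keeps enough bookkeeping to answer "which physical array and disk holds this logical volume?" when a fault is reported. The catalog contents below are invented for illustration.

    # Sketch: resolving a logical volume back to physical hardware so an
    # administrator can localize a failure. Catalog entries are invented.
    volume_catalog = {
        "mail-store": [("array-1", "disk2"), ("array-1", "disk5")],
        "web-logs":   [("array-2", "disk1")],
    }

    def locate(volume):
        """Map a logical volume to its physical members for repair."""
        members = volume_catalog.get(volume)
        if members is None:
            raise KeyError(f"unknown volume: {volume}")
        return members

    # On a reported fault, resolve the logical view down to hardware:
    print(locate("mail-store"))  # -> [('array-1', 'disk2'), ('array-1', 'disk5')]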

Veritas is a good example of a pure virtualization software play. Its virtualization operations work at both the host and network levels. Marty Ward, Veritas' director of product marketing, pointed out that, as a software provider, Veritas runs its software on a processor, and that processor can sit in a number of different locations--array, host or network. "We don't care where it runs, and we'll let the customer dictate where they're going to run virtualization." Having said that, some placements and alliances work better than others.

For example, virtualization in a switch layer can't manage applications well--it's fast, but treats all data the same way. (There are switch start-ups that claim to allocate bandwidth by application, but it is too early to tell whether that will fly.) Host-based virtualization handles applications very well because the applications' I/O paths run right through the storage manager. Now take this host environment, add intelligent switches into the mix, and you've got virtualization that understands the application from the host level and can also talk to multiple storage arrays.

Complementary virtualization can make mirroring and multiple paths easier to establish and manage in open systems environments. For example, a virtualization service sitting on an array controller can mirror and stripe across arrays, which improves availability and performance. Meanwhile, host-level virtualization does dynamic multi-pathing for its various applications, establishing multiple paths from the application through to the array.
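A minimal sketch of the host-side half, dynamic multi-pathing: the host keeps several paths to the same virtual volume and rotates to a surviving path when the active one fails. The path names and the liveness check are stand-ins for real HBA plumbing, assumed for illustration only.

    # Dynamic multi-pathing sketch: fail over to an alternate path when the
    # active one dies. Path names and the liveness test are illustrative.
    class MultipathDevice:
        def __init__(self, paths):
            self.paths = list(paths)  # e.g. HBA/array-port pairs
            self.active = 0

        def submit_io(self, block):
            for _ in range(len(self.paths)):
                path = self.paths[self.active]
                if self._path_alive(path):
                    return f"I/O for block {block} via {path}"
                # active path is down: rotate to the next candidate
                self.active = (self.active + 1) % len(self.paths)
            raise IOError("all paths to the array are down")

        def _path_alive(self, path):
            return path != "hba1:portB"  # pretend this one path has failed

    dev = MultipathDevice(["hba0:portA", "hba1:portB"])
    print(dev.submit_io(42))  # served via hba0:portA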

XIOtech's chief architect, Rob Peglar, believes that only integrated virtualization services will allow fully automated provisioning. "The thing about virtualization is we don't think you can get to a fully automated state without a very strong dose of virtualization at many different layers." He doesn't mean just array-, host- or switch-based virtualization services, but services located at many different layers in the stack. XIOtech, which has just released distributed array controllers, believes that the future of virtualization is to cluster controllers to increase the size of virtual arrays. "In order to reach high levels of resilience and scalability, you have to use virtualization because without it it's just too complicated. Virtualization can take away a lot of that complexity. It's a key underlying technology."

Storage virtualization is improving, and will continue to grow as a vital enabling technology for storage management and automation. Storage standards like SMI-S and T11.5 will help immensely by standardizing hardware interfaces, which will in turn make virtualization much simpler and more wide-ranging. Existing virtualization products are already simplifying storage management, and as standards tighten up, vendors will introduce more automation products that run on top of a virtualized base. This will further simplify storage management by reducing or eliminating many manual tasks.

Gartner estimates that only 20% of storage costs go to storage hardware purchases. Managing storage takes another 15%, backup and restore operations 30%, and downtime--including planned downtime when adding storage--an unhealthy 20%. Virtualization can save a company some serious management and provisioning expenses, making virtualization services a good bet that's getting better.
Author: Christine Taylor Chudnow
Publication: Computer Technology Review, Oct 1, 2003
COPYRIGHT 2003 West World Productions, Inc.