
Virtualization: One Of The Major Trends In The Storage Industry -- What Are You Getting For Your Money?

The word "virtualization" has crept into the IT lexicon. What does it mean, and how much of what is claimed for it is hype versus real value delivered to the business?

As storage growth continues to exceed 100% per year, and as heterogeneity proliferates, the complexity of managing IT infrastructures increases exponentially. The promise of virtualization is that it will significantly improve storage manageability. But unless it also delivers on cost containment for IT, virtualization is only delivering a solution to part of the problem.

This article will define virtualization, contrast the kinds of implementations being announced almost daily, and provide a basis for comparing and evaluating the various offerings.

Storage Growth, People, The Economy, Business, And IT Budgets

Industry analysts consistently predict a 100% compound annual growth rate for storage. To put this into perspective, an organization with 1 terabyte of disk storage today will have 32 terabytes five years from now. Many Global Fortune 1000 companies we speak to have far more than 1 terabyte of storage today, and are frankly very concerned about the prospect of dealing with an infrastructure 32 times its current size.
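The 32x figure follows directly from compounding: at 100% annual growth, capacity doubles every year, so after n years it is 2^n times today's. A quick sketch of the arithmetic (the starting figures are illustrative):

```python
# Compound growth of storage capacity. At a 100% annual rate the
# capacity doubles each year, so after n years it is 2**n times today's.
def projected_capacity(initial_tb: float, annual_growth: float, years: int) -> float:
    """Project capacity after `years` of compound growth."""
    return initial_tb * (1 + annual_growth) ** years

print(projected_capacity(1, 1.0, 5))    # 1 TB today -> 32 TB in five years
print(projected_capacity(10, 1.0, 5))   # a 10 TB shop faces 320 TB
```

Even a modest reduction in the growth rate changes the five-year picture dramatically, which is why the growth assumption deserves as much scrutiny as the technology.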

The United States economy is said to be slowing. There is no evidence yet that this softening of the economy has spread to other worldwide geographies. However, the U.S. approach of turning over every IT dollar before spending it is catching on, and as storage costs become a bigger component of the overall IT capital budget--some say more than 50 percent--there is a very acute awareness of storage costs worldwide. Of course, if IT budgets were growing to meet increased expenditures there would be less of a problem, but the anecdotal evidence we hear is that IT budgets are flat or shrinking, not increasing.

There is a clear trend toward a more pragmatic approach to storage acquisitions, a pragmatism born of the fact that "storage's share of the IT budget will quadruple over the next four years and this will cause profound changes in infrastructure, storage management strategies, and operations staff" (Eurostorage web site, 2001). There is clearly less inclination to "throw more disk at the storage problem."

The first reaction of most CIOs to the prospect of a storage infrastructure 32 times bigger than today's is: how will I manage it with the people I have? Consider the pressures converging at once: no significant improvement in storage management productivity, a shortfall of 1.5 million IT professionals worldwide, static or shrinking budgets, massive growth, the strategic role of information in both competitive differentiation and e-business applications, and the need to simultaneously improve the availability and scalability of the storage infrastructure. To CIOs, finding a solution to all this chaos must seem impossible.

There is almost a nightmare quality to the many variables converging at one point in time. We could call this the Perfect (Storage) Storm. Ten years ago, in 1991, in one of the rarest meteorological events of the century, three separate weather systems were on a "perfectly" aligned collision course: a Great Lakes storm system moving east, a Canadian cold front moving south, and Hurricane Grace moving northeast, all headed for the North Atlantic. Along the way, the storm created monster seas, battered ships, and caused coastal flooding along the eastern U.S. seaboard. To IT professionals in 2001, this is the perfect storage storm.

It is not as though storage growth can be slowed down to match budgets, or the economy, or even to match the ability of human beings to deal with it more comfortably. Storage is not a faucet that can be turned off. Storage growth is driven by information flow, which in turn is driven by applications created to maintain or improve competitive positioning. E-business applications--from supply-chain management to customer relationship management and everything in between--are vital new non-discretionary elements of the post-internet business world.

So, what are the solutions to these problems? How do we survive the "perfect storage storm"?

New Architectures And Technologies

The two most discussed storage innovations of the last couple of years are storage networking and virtualization. Much more has been written about storage networks than has ever been implemented. Even so, both Storage Area Networks (SANs) and Network Attached Storage (NAS) will provide complementary solutions to the primary challenges of manageability, affordability, availability, and scalability, even while wrestling with the interoperability and standards issues they create. It is virtualization that we want to discuss in more detail.

Many vendors, large and small, are talking about virtualization. The majority position it as a means to simplify management of large, complex, heterogeneous environments, with the clear implication that this virtualization will exist within a storage networking (typically SAN) environment. Right now, most of these announcements appear to generate more questions than they answer. What do they mean? What is being virtualized? Where is it being implemented? Is this virtualization or abstraction? Is this simply pooling of devices with a fancy name? How much of what is being announced is available today? Which server operating systems are supported today? It is reasonable to challenge many of the claims being made, but meanwhile, there is a need to clarify the "virtual" landscape.


The Robert Frances Group defines virtual as "...those architectures and products designed to emulate a physical device where the characteristics of the emulated device are mapped over another physical device." Another way to express this is to say that virtualization separates the presentation of storage to the server operating system from the actual physical devices. Neither of these statements implies an underlying architecture, yet, as stated earlier, most claims to storage virtualization today are made in the context of storage networks. This is no accident, since storage networking and storage virtualization are trying to solve the same fundamental problem -- storage manageability. In fact, it would be fair to say that, even as storage networking is only just beginning to be widely implemented, it is already recognized that we need to do something more to ease the storage management burden.

Virtualization implemented within a SAN contributes several things to the goal of easing storage management workloads. It hides complexity by simplifying the server's view of what devices exist. It masks change by enabling physical storage devices to be removed, upgraded, or changed without the need to tell the operating system via device drivers that the storage world is different now. It can magnify an administrator's productivity by pooling large amounts of storage and allowing that storage to be allocated across many servers. It can aggregate small amounts of storage across multiple devices and make it appear as a single large disk.
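The aggregation and pooling described above can be sketched in a few lines of code. This is a toy model, not any vendor's API: a virtual volume presents one contiguous logical disk to the server while its capacity actually maps onto extents scattered across a pool of physical devices. All class and method names here are invented for illustration.

```python
# Toy sketch of block-level storage virtualization: the server sees one
# large logical disk; the mapping table spreads it across many physical
# disks, none of which is individually large enough.
class PhysicalDisk:
    def __init__(self, name, size_gb):
        self.name, self.size_gb, self.used_gb = name, size_gb, 0

class VirtualVolume:
    """Aggregates free space from many small disks into one logical disk."""
    def __init__(self, name):
        self.name = name
        self.extents = []            # mapping table: (physical_disk, gb)

    def allocate(self, pool, gb_needed):
        # Satisfy the request from whatever physical free space exists.
        for disk in pool:
            if gb_needed == 0:
                break
            take = min(disk.size_gb - disk.used_gb, gb_needed)
            if take:
                disk.used_gb += take
                self.extents.append((disk, take))
                gb_needed -= take
        if gb_needed:
            raise RuntimeError("pool exhausted")

    @property
    def size_gb(self):               # what the server operating system sees
        return sum(gb for _, gb in self.extents)

pool = [PhysicalDisk("d1", 50), PhysicalDisk("d2", 50), PhysicalDisk("d3", 50)]
vol = VirtualVolume("vol0")
vol.allocate(pool, 120)              # bigger than any single physical disk
print(vol.size_gb)                   # 120
```

The point of the sketch is the mapping table: because the server only ever sees `vol0`, a physical disk behind it can be removed or upgraded by rewriting the table, without the operating system being told that the storage world has changed.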

It could be argued that some of these things aren't even virtualization, but abstraction or emulation, or aggregation. However, the point is not to argue semantics but to stimulate a critical view of virtualization offerings so that intelligent choices can be made.

This paper proposes some fundamental definitions:

* We say that the purpose of storage virtualization is to enable better management and consolidation of storage resources.

* We say that on the continuum between the application and the data, there are multiple points at which virtualization of storage can occur.

* We say that those points are the host, the network, and the storage device.

* We say that each point delivers advantages that are unique to that point, and that some things are done better at certain points.

The What And The Where...And Pros And Cons

In attempting to understand and differentiate multiple implementations of storage virtualization, we can begin by defining the things that are being virtualized--the "what"--and the place this virtualization is being implemented--the "where" or the instantiation. Since the primary purpose of storage virtualization is to enable better management of storage resources, the "what" is typically tape and/or disk.

The vast majority of recent storage virtualization architectures announced by many vendors are designed to be implemented within the context of a storage network--therefore the "where" is the server, the network, or the storage device.

There is a third element of virtualization in addition to the "what" and the "where". This is called "span-of-control". For example, if virtualization software is implemented in the server, then logical or virtual storage presentation is implemented there, but it is mapped to storage that exists beyond the server. Therefore, span-of-control extends beyond the platform where the virtualization is implemented.

There is a degree of predictability in virtualization implementation depending upon the core competency of the vendor. For example, it is predictable that a server vendor will most likely implement storage virtualization at the server level. It is equally likely that a software vendor will implement virtualization on a server platform. Typically in these implementations, virtualization--presentation services--is done in the server, and is mapped to external storage. There is no control over external storage devices other than allocation. Some questions to ask of vendors implementing virtualization in the server:

* Is software required on every server participating in the storage network?

* At what point does server I/O bandwidth impact virtualization effectiveness and performance?

* Is there a maximum amount of storage supported in this storage network? If so, what is it?

* What kinds of storage devices are supported?

* Can they be any vendor's storage devices?

* Is there any kind of policy-based management capability available or planned?

* Will this solution support serverless backup and/or migration?

Network vendors will not necessarily implement virtualization only in a network device, but it is likely. If the device is actually a special-purpose server, then the implementation is server-centric, not network-centric. For the purposes of this paper, a network device is a platform that provides both the switch ports and the means to implement virtualization--a kind of hybrid domain manager, intelligent router, or intelligent switch. Presentation services are performed in the network, and the logical devices are mapped to external storage devices. There is no control over external storage devices other than allocation. Questions to ask of vendors implementing virtualization in the network:

* What servers, operating systems and applications are supported at the server level?

* What kinds of storage devices are supported?

* Can they be any vendor's storage devices?

* What I/O bandwidth limitations are there?

* Is there a maximum amount of storage supported in this storage network? If so, what is it?

* Will this implementation support serverless backup and/or migration?

The third alternative for the "where" of storage virtualization is in the storage itself. This is an interesting implementation. If virtualization is done here, and the vendor is a storage vendor, there are challenges in avoiding limiting the supported devices to just those supplied by that vendor. The storage vendor implementing storage virtualization might form a strategic alliance with a server vendor, a software vendor, or a network vendor to avoid creating proprietary lock-in. But what makes this an interesting implementation is not the "what" necessarily, or the "where" at all, but the "span-of-control." When storage virtualization is implemented at the device level, there is an opportunity to have both the logical (virtual) environment and the physical devices within a common "span-of-control". Exploiting this span-of-control--meaning management control of both the logical presentation services and the physical resources needed to satisfy the storage demand--could lead to capacity and operational efficiencies unavailable to virtualization implementations where the physical storage devices lie outside the virtualization engine's span-of-control.

In fact, there are today three implementations of device-level virtualization in which logical devices and physical devices exist within the span-of-control of the virtualization engine: IBM's Virtual Tape Server, StorageTek's Shared Virtual Array 9500 virtual disk system, and StorageTek's Virtual Storage Manager for tape. Sutmyn Storage Corporation's virtual tape server is not listed here because the physical tape devices lie outside the virtualization engine's span-of-control.

For the purposes of this discussion, the benefit of a span-of-control that encompasses both the logical (virtual) devices and the physical devices is very large efficiency in capacity utilization in the case of virtual disk, and very large efficiency in tape media utilization in the case of virtual tape.
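The capacity-efficiency argument can be made concrete with a small sketch. The scenario, mechanism, and all names below are illustrative assumptions, not any vendor's implementation: an engine that controls both the logical presentation and the physical back end can present more logical capacity than physically exists, consuming real capacity only for data actually written.

```python
# Illustrative sketch of device-level span-of-control: because the engine
# manages both the logical view and the physical devices, presentation is
# decoupled from physical allocation.
class VirtualizationEngine:
    def __init__(self, physical_gb):
        self.physical_gb = physical_gb      # real capacity in the box
        self.physical_used_gb = 0
        self.volumes = {}                   # name -> [presented_gb, written_gb]

    def create_volume(self, name, presented_gb):
        # Creating a volume consumes no physical capacity at all.
        self.volumes[name] = [presented_gb, 0]

    def write(self, name, gb):
        vol = self.volumes[name]
        if vol[1] + gb > vol[0]:
            raise RuntimeError("write exceeds presented size")
        if self.physical_used_gb + gb > self.physical_gb:
            raise RuntimeError("physical capacity exhausted")
        vol[1] += gb                        # physical space backs written data only
        self.physical_used_gb += gb

engine = VirtualizationEngine(physical_gb=100)
engine.create_volume("host_a", presented_gb=200)   # logical exceeds physical
engine.create_volume("host_b", presented_gb=200)
engine.write("host_a", 30)
engine.write("host_b", 20)
print(engine.physical_used_gb)                     # 50
```

A server-level or network-level engine, whose span-of-control stops at allocation of external devices, cannot play this trick, which is the asymmetry the paper's second benefit rests on.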

Questions to ask vendors of virtualization solutions implemented within the storage device:

* Will this solution support other vendor storage devices?

* Are there I/O bandwidth limitations?

* Are there processor bandwidth limitations?

* What server platforms and operating systems are supported?

* What storage capacity is supported?

* Are there benefits in capacity efficiencies?

* Are there any storage management functions carried out within the engine? If so, what functions?


Storage virtualization will join storage networking as one of the two storage revolutions of the new millennium. Storage virtualization is necessary both to overcome some of the limitations of storage networking and, in its own right, to provide two huge benefits to IT organizations:

1. Significantly improved storage manageability.

2. Significantly reduced storage infrastructure cost.

The first benefit is derived from all of the virtualization architectures and implementations. However, the second is only obtained if virtualization is implemented at the device level, and if the virtualization engine is designed to exploit the fact that both the logical devices and the physical devices exist within the same span-of-control.

How ready are you to face the storage storm?

Rob Nieboer is the senior manager, industry analyst relations at StorageTek (Louisville, CO).
COPYRIGHT 2001 West World Productions, Inc.
No portion of this article can be reproduced without the express written permission from the copyright holder.
Copyright 2001, Gale Group. All rights reserved. Gale Group is a Thomson Corporation Company.

Article Details
Title Annotation: Industry Trend or Event
Publication: Computer Technology Review
Date: Apr 1, 2001