Storage virtualization--questions for potential adopters, Part 3 of 3.

In the preceding parts of this series, we discussed the value of adopting storage virtualization--how it offers a new, non-disruptive operating capability that facilitates change management in today's dynamic storage environments--and we examined the architectural distinctions among the various approaches to implementing SAN-wide storage virtualization. In this final part of the series, we turn to some additional practical considerations and questions for potential adopters of the technology.

Thinking Practically

The first question any adopter of storage virtualization (or any new technology, for that matter) should ask is very simple--is the pain I am experiencing, and hope to alleviate with this new solution, significant enough to warrant the additional effort of deploying it? Asking basic questions such as "do my existing, proven solutions work well enough today?" and "are there better, potentially easier or more proven solutions to my problem?" will cut to the heart of the matter.

As an example, we can't help but see the marketing of in-band storage virtualization solutions to small environments (as small as 2TB) as failing this test. Environments of this size typically have neither the vendor diversity nor the rate of change that virtualization solutions are designed to address--the pressing need is simply not there. Today's storage arrays and storage management software offer a far more economical, proven and less complex answer for environments of this size.

Before investing any effort in evaluation, make sure virtualization will be implemented and seen as a practical innovation--something that solves an immediately pressing, unsolved problem and does not create new problems (e.g., more complexity) in the process.

Four Criteria for Evaluating Virtualization Solutions

For those environments where storage virtualization is practical, there are four fundamental challenges that must be addressed by vendors and considered by potential adopters: scale, functionality, management and support. Let's look at each in turn.

Scale

In today's SAN environments, performance is distributed across multiple storage arrays, each independent of the others. In a virtualized environment, storage performance is aggregated from across the infrastructure, and it is this ability to aggregate that underpins the management simplification benefit of a virtualized environment. Much of storage virtualization's value therefore comes from its ability to scale--maximum value is achieved when the entire target environment can be aggregated into a single logical view or "virtual pool."
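To make the idea of a single "virtual pool" concrete, consider the minimal Python sketch below. It is purely illustrative: the class, array names and capacities are our own invention, not any vendor's product or API. It simply shows how capacity from several arrays might be aggregated and carved into virtual volumes that span them.

# Illustrative sketch only: a toy model of capacity aggregation,
# not a real virtualization product. All names and sizes are invented.
class VirtualPool:
    """Aggregates capacity from many physical arrays into one logical pool."""

    def __init__(self):
        self.arrays = {}    # array name -> free capacity in GB
        self.volumes = {}   # volume name -> list of (array, GB) extents

    def add_array(self, name, capacity_gb):
        self.arrays[name] = capacity_gb

    def create_volume(self, name, size_gb):
        """Satisfy a request from the pool as a whole, spanning arrays if needed."""
        extents, remaining = [], size_gb
        for array, free in self.arrays.items():
            if remaining == 0:
                break
            take = min(free, remaining)
            if take > 0:
                extents.append((array, take))
                self.arrays[array] -= take
                remaining -= take
        if remaining > 0:
            raise ValueError("pool exhausted")
        self.volumes[name] = extents
        return extents

pool = VirtualPool()
pool.add_array("array_a", 500)
pool.add_array("array_b", 300)
print(pool.create_volume("app_data", 600))   # spans both arrays transparently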

With this in mind, there are three key questions adopters should ask their storage virtualization vendor about scale. The most basic are: how big can the product scale, and how does that compare to the target environment? What adopters should look to avoid is a product that simply creates multiple "islands" of virtualization because it cannot scale to a reasonable size. In that case, the added complexity of managing multiple virtualization instances (as well as the cost of deploying them) is likely to outweigh any potential benefits.

A related question concerns performance at scale: specifically, how does performance change when a substantial amount of storage is being virtualized? Two metrics are key here: response time (related largely to the system's processing capabilities) and throughput (related to the system's bandwidth). Many in-band systems, which route all I/O through a finite pool of cache, general-purpose processing and bandwidth, show widely divergent performance characteristics as the load grows and these resources are depleted. Systems with distributed, purpose-built processing tend to be less prone to this type of degradation.
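The shape of that degradation can be illustrated with a simple single-queue approximation, R = S / (1 - utilization): as a shared resource nears saturation, response time climbs steeply. The short Python sketch below uses invented capacity and service-time figures, not measurements of any product.

# Rough single-queue (M/M/1-style) approximation of response time as a
# shared in-band resource approaches saturation. All numbers are invented.
def response_time_ms(service_time_ms, offered_iops, max_iops):
    """R = S / (1 - utilization); grows without bound as load nears capacity."""
    utilization = offered_iops / max_iops
    if utilization >= 1.0:
        return float("inf")
    return service_time_ms / (1.0 - utilization)

MAX_IOPS = 100_000       # assumed capacity of the shared in-band engine
SERVICE_TIME_MS = 0.5    # assumed per-I/O service time at light load

for load in (20_000, 50_000, 80_000, 95_000):
    print(f"{load:>7} IOPS -> {response_time_ms(SERVICE_TIME_MS, load, MAX_IOPS):.2f} ms")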

A final question on scale is more architectural--how is the system scaled? A product that scales more easily is better able to accommodate future growth in the environment, and such growth is almost a certainty for most customers. As we discussed in a previous article, in-band products generally employ a "scale up" strategy, which requires the addition of a new "box" or unit when the limits of cache or processing are reached. Unfortunately, these new boxes tend to form a separate management domain, adding more complexity. Depending on the size and specifications of the box, they may also add significant expense.

Virtualization architectures based on distributed intelligent processors are generally more amenable to "scale out" strategies, which allow capacity to grow in smaller increments, at smaller cost steps, while maintaining a single management domain.
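A back-of-the-envelope comparison makes the difference tangible. In the Python sketch below, the unit sizes and prices are entirely hypothetical, chosen only to show how the two approaches differ in granularity of growth and in the number of management domains they create.

# Hypothetical comparison of scale-up vs. scale-out growth.
# Box/node capacities and prices are invented for illustration.
def scale_up(target_tb, box_tb=100, box_cost=500_000):
    boxes = -(-target_tb // box_tb)     # ceiling division
    return boxes * box_cost, boxes      # (cost, management domains)

def scale_out(target_tb, node_tb=25, node_cost=100_000):
    nodes = -(-target_tb // node_tb)
    return nodes * node_cost, 1         # (cost, single management domain)

for tb in (50, 150, 400):
    up_cost, up_domains = scale_up(tb)
    out_cost, out_domains = scale_out(tb)
    print(f"{tb} TB target: scale-up ${up_cost:,} across {up_domains} domain(s); "
          f"scale-out ${out_cost:,} in {out_domains} domain")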

Functionality

Today, applications storing data on the SAN have access to rich array-based software functionality, such as local and remote replication. By aggregating and abstracting the storage capacity, virtualization solutions mask the individual devices, breaking the host-to-device relationship that array-based software needs in order to function. Thus, in order not to subtract value and deliver a less functional environment, the virtualization solution must either replace the value-added functionality provided by the arrays or interoperate with and preserve that existing functionality. An ideal solution will not present an either/or proposition but will provide both options. Virtualization architectures that hold state (e.g., in-band architectures that employ cache) cannot, by design, interoperate with existing array-based functions such as remote copy services that also manage state changes.

The next key questions to ask a virtualization vendor are: what happens to my existing array-based functionality? Can I keep making use of my existing investments in the processes, skills, training and people built around these tools? If the answer is no, the total replacement cost, incorporating all of these factors, must be carefully calculated.

Aside from cost, the "replacement" functionality must be benchmarked against the existing solutions. Are all of the key features present? Market-leading replication software benefits from the improvements and refinements of more than 10 years of continued software engineering, a tough standard for any new product to measure against.

Management

A key advantage of today's storage resource management (SRM) tools is that they provide an end-to-end view integrating everything in the environment. If you want monitoring, reporting, planning and provisioning services delivered efficiently and effectively for your storage environment, SRM is essential. Virtualization devices disrupt SRM and any other tool that depends on this end-to-end view: introducing a virtualization device breaks the view into three distinct domains--the server to the virtualization device, the virtualization device to the physical storage, and the virtualization device itself. Re-integrating the management view is essential to achieving the manageability benefits of a virtualized environment.
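What that re-integration amounts to can be pictured with a toy Python sketch. The host, volume and LUN names below are entirely hypothetical; the point is simply that an end-to-end view has to join the host-to-virtual-volume map from the first domain with the virtual-volume-to-physical-LUN map from the second.

# Toy illustration of stitching the three management domains back into one
# end-to-end view. All names are hypothetical.

# Domain 1: server to virtualization device (host -> virtual volumes)
host_to_vvol = {
    "web01": ["vvol_12"],
    "db01":  ["vvol_07", "vvol_12"],
}

# Domain 2: virtualization device to physical storage (virtual volume -> LUNs)
vvol_to_lun = {
    "vvol_07": [("array_a", "lun_3")],
    "vvol_12": [("array_a", "lun_9"), ("array_b", "lun_1")],
}

def end_to_end(host):
    """Re-integrated view: which physical LUNs ultimately back this host?"""
    luns = []
    for vvol in host_to_vvol.get(host, []):
        luns.extend(vvol_to_lun.get(vvol, []))
    return luns

for host in host_to_vvol:
    print(host, "->", end_to_end(host))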

The key questions for a virtualization vendor around management are, first: can I manage my newly virtualized environment with my existing SRM toolset, again preserving investments in processes, skills, training and people? Second: do I have an integrated view of virtualized and non-virtualized environments, enabling the consistent application of management discipline and policies across the whole environment? Inability to deliver on these points will negate many of the promised manageability benefits of virtualization, especially as the transition to a completely virtual environment takes place.

Support

Virtualization is not a standalone technology; it has to work within an existing environment. The virtualization device is a new platform with new intelligence, and it has to interact with everything you already have, including servers and server-side software, storage networks, networking hardware and network protocols, and storage arrays and array-resident software. For example, think about what it took to make an "industry-standard" protocol such as Fibre Channel work as advertised. Interoperability and support will be key to the success of any virtualization solution.

A number of support-related questions thus arise for the potential virtualization adopter: Who is responsible for new hardware-qualification requirements? How do issues and problems get escalated and resolved? Who owns service and support? These questions have a single answer: the virtualization provider.

Thus, we need to ask some additional questions to evaluate how well prepared a vendor is to handle this support challenge. Specifically, how much experience do they have in performing sophisticated interoperability testing and support of complex multi-vendor environments? What is their reputation in the industry for quality of qualification and interoperability testing? And perhaps most importantly, as tangible proof, how much have they invested in this capability? Many vendors pay "lip service" to this area, but confidence in their ability to stand behind a "qualified" configuration should weigh heavily on the decision to implement their virtualization solution in a mission-critical production environment.

Conclusion

In this article and throughout the series, we have seen that understanding and deploying storage virtualization technology involves many issues and considerations. Those who have the need, understand the pros and cons of the various architectural approaches, and carefully choose the right solution by weighing scale, functionality, manageability and support will see their storage infrastructure transformed--made more dynamic, continuously available and, ultimately, better able to deliver on the needs of the business.

Mark Lewis is executive vice president and chief development officer at EMC Corporation (Hopkinton, MA).

www.emc.com
