
Storage virtualization, Part 1 of 3: delivering non-disruptive operations, flexibility and simplified management.

Today, the industry is abuzz with talk of storage virtualization. Expectations are being raised around universal management of multi-vendor storage, along with various other promises to answer all of today's storage challenges. In the attempt to deliver on these expectations, the industry has produced a good deal of confusion, debate and rushed-to-market solutions.

Much of the discussion around storage virtualization centers on increasing the utilization of storage assets. Today's storage resource management tools allow users to monitor their entire storage infrastructure and proactively manage its utilization. While low utilization rates were deemed the unfortunate results of massive infrastructure build-outs during the post-Y2K bubble and maintaining high utilization remains an important best practice, IT managers have made tremendous progress in this area in the last few years. Asset utilization is a less pressing problem today and one for which many good solutions already exist.

Meanwhile, the data explosion has continued apace. The new view of data sees its value fluctuating, mandating different locations and different types of containers throughout its life. This gives a whole new meaning to the concept of data availability. For most organizations it means constructing complex IT infrastructures that can deliver the right information to the right place at the right time, 24 X 7. Today's businesses require continuous information availability and are unwilling, and often unable, to endure significant infrastructure downtime while delivering on the needs of the business.

Downtime costs can run into the hundreds of thousands or even millions of dollars per hour. From running a physically contained data center, we are rapidly moving to running IT operations as if they were a permanently earth-orbiting space station with in-flight fueling, maintenance and upgrades. IT managers have come to expect better tools for keeping their data continuously available.

Ten years ago, we asked, "what if we eliminated unplanned downtime?" Technologies such as application clustering and data replication tools such as EMC's SRDF have since succeeded in protecting data from outages in the case of hardware failures, disasters and other unpredictable events. With the right tools in place and with high levels of availability designed into new IT infrastructure projects, particularly for mission-critical applications, many data centers have succeeded in drastically minimizing their unplanned downtime.

But as IT operations rapidly grow more complex, demand is also increasing for greater flexibility and responsiveness. IT has become critical in reducing delays in business processes, and applications are now expected to be available 24 X 7 to more people than ever before, in more locations around the world.

So, today the question is, "what if we eliminated planned downtime?" Maintenance, application upgrades, physical changes to the infrastructure, data center migrations--they all reduce the time an application can be up and running. IT managers report that planned downtime accounts for between two-thirds and three-fourths of total downtime.

Specifically, IT managers who have networked their storage in recent years are now looking to:

* Move production data non-disruptively across the storage infrastructure without ever taking applications down. For instance, when introducing a new storage array into an existing SAN, a storage administrator would need to schedule this at a suitable time and ensure application owners are aware that data will be unavailable for a period of time. IT managers want to eliminate all storage-related downtime for any reason whatsoever.

* Centralize capacity allocation, provisioning and data movement capabilities in order to provide more flexibility in multi-tiered, multi-vendor storage environments. To execute on ILM, for instance, if a volume in a medium-performance storage pool is not meeting the specific service level guaranteed to the application owner, the storage administrator moves the data to a higher-performing pool (most likely on a different tier of storage) to meet the requirement; a simple sketch of that kind of policy check follows this list. In the future, we can envision this process becoming more automated, based on policies set by the application owner.
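To make the ILM example above concrete, here is a minimal sketch, in Python, of that kind of policy check. All of the names (Volume, TierPolicy, the tier list, the latency numbers) are invented purely for illustration and do not correspond to any vendor's product or API; the point is only that a service-level target plus a measured metric is enough to decide whether a volume should be promoted to a higher-performing pool.

# Illustrative sketch only: hypothetical names, not an actual storage-management API.
from dataclasses import dataclass
from typing import Optional

# Pools ordered from lowest to highest performance; a real environment would map
# these onto actual pools on specific arrays and tiers.
TIERS = ["archive", "midrange", "high_performance"]

@dataclass
class Volume:
    name: str
    tier: str                   # pool the volume currently lives in
    measured_latency_ms: float  # observed average response time

@dataclass
class TierPolicy:
    max_latency_ms: float       # service level promised to the application owner

def next_tier(current: str) -> Optional[str]:
    # Return the next-higher-performing tier, or None if already at the top.
    idx = TIERS.index(current)
    return TIERS[idx + 1] if idx + 1 < len(TIERS) else None

def plan_migration(vol: Volume, policy: TierPolicy) -> Optional[str]:
    # If the volume misses its service level, suggest a higher tier to move it to.
    if vol.measured_latency_ms <= policy.max_latency_ms:
        return None                 # service level is being met; leave the volume alone
    return next_tier(vol.tier)      # promote (may still be None if already at the top)

# Example: a midrange volume missing its 10 ms target gets flagged for promotion.
vol = Volume(name="erp_data", tier="midrange", measured_latency_ms=18.0)
print(plan_migration(vol, TierPolicy(max_latency_ms=10.0)))  # -> high_performance

Automating the move itself, rather than just the decision, is the step that policies set by the application owner would drive in the future.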

More often than not, storage administrators today are handcuffed from making changes within the storage infrastructure. The business' intolerance for downtime, as well as the upstream impact on server and application administrators, stands in the way of everything from fine-tuning to much larger-scale moves and changes. If properly deployed, storage virtualization can give administrators the flexibility they need to make changes to the underlying infrastructure without impacting systems or applications.

These "Non-Disruptive Operations" mask the complexity of the IT infrastructure from the business user. Delivering this capability into today' storage infrastructures is the practical use case that will spur the broad acceptance and adoption of storage virtualization technology. But to be successful in achieving this new level of Non-Disruptive Operations, users must choose a storage virtualization solution that will not increase the complexity and cost of their storage infrastructure, while also retaining ease-of-deployment and protecting their existing investments in storage functionality.

Finally, virtualization can simplify the management paradigm by increasing flexibility within the infrastructure and enabling IT to manage it more independently of application or line-of-business involvement. Today, for example, each storage array and each server has its own methodology for configuring volumes; over time, storage virtualization will give storage administrators a single, unified way to perform these functions from within the network.

Architectural Considerations

Today's in-band virtualization solutions fall short of meeting these business needs. Simply put, in-band virtualization solutions introduce an appliance or device "in the data path." There are inherent limitations to this approach in the areas of complexity, risk, investment protection, scalability and ease of deployment. (I'll explore these in greater detail in the second installment of this series on virtualization.)

In short, the in-band approach to storage virtualization requires you to put all of your eggs in a very small basket. Instead of more flexibility, you become constrained by a difficult-to-implement, non-scalable solution. It is also an all-or-nothing approach--all the data in the SAN has to be virtualized and pass through the virtual layer.

Historically, we have seen that any practical solution needs to adhere to certain principles:

* Solve a specific problem (e.g., disruptions to IT operations)

* Not create new problems (e.g., data integrity and increased complexity)

* Embrace the existing environment (i.e., retain all the current value and functionality of the infrastructure).

Another approach to storage virtualization, network-based virtualization (sometimes called the "out-of-band" approach), is guided by these principles. Network-based virtualization leverages the existing SAN infrastructure by employing the next generation of intelligent switch/director technology.

The implementation in the switch utilizes an open and standard approach. Developing such a technology in an open manner means that storage vendors need to work as a team with switch vendors to ensure seamless functioning of a heterogeneous environment. This means developing standard interfaces and providing customers with a choice of switches from multiple suppliers.

A further element is a management appliance that sits "outside the data path" and is focused primarily on managing the overall virtual environment, including mapping the location of the data. No "state" or version of the data is ever held in the network: the application is not told that a write has completed until the data is safely stored on the array.
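As a rough, purely conceptual illustration of that division of labor, the following Python sketch (every class name here is invented; this is not any vendor's actual interface) keeps the volume-to-array mapping in a management service outside the data path, while the data path in the switch simply looks up the mapping and does not acknowledge a write until the back-end array confirms the data is stored.

# Conceptual sketch of out-of-band virtualization; every name here is hypothetical.
class BackEndArray:
    # Stand-in for a physical array that actually persists the data.
    def __init__(self, name: str):
        self.name = name
        self.blocks = {}

    def write(self, lba: int, data: bytes) -> bool:
        self.blocks[lba] = data
        return True  # the array confirms the data has been stored

class MappingService:
    # Out-of-band appliance: holds only the map of virtual volume -> array,
    # never the data itself, so no "state" of the data lives in the network.
    def __init__(self):
        self._map = {}

    def set_location(self, volume: str, array: BackEndArray) -> None:
        self._map[volume] = array

    def locate(self, volume: str) -> BackEndArray:
        return self._map[volume]

class SwitchDataPath:
    # In the switch: forwards I/O according to the current mapping.
    def __init__(self, mapping: MappingService):
        self.mapping = mapping

    def write(self, volume: str, lba: int, data: bytes) -> bool:
        array = self.mapping.locate(volume)
        return array.write(lba, data)  # ack to the host only after the array confirms

# Example: the mapping can be updated (in practice only after the data has been
# copied to the new array) without the host seeing anything but completed writes.
mapping = MappingService()
mapping.set_location("vol01", BackEndArray("array_a"))
path = SwitchDataPath(mapping)
assert path.write("vol01", lba=0, data=b"payload")
mapping.set_location("vol01", BackEndArray("array_b"))  # re-map, e.g. after a migration
assert path.write("vol01", lba=1, data=b"payload")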

In terms of scalability, a networked storage virtualization solution should be able to support multiple enterprise-class arrays and aggregate a range of intelligent high-end and mid-tier arrays while providing complementary functionality such as non-disruptive migration. While it guarantees scalability, it also gives IT managers a granular, gradual approach, allowing them to virtualize only some of the data on the SAN: it is possible to select, volume by volume, which volumes need to be virtual and which do not. With a system that scales, an organization can start small with the assurance that, as it grows, the solution can scale up across the infrastructure.
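That volume-by-volume granularity can be pictured as little more than a per-volume flag consulted in the I/O path. The short sketch below, again with invented names and purely for illustration, sends volumes that are not opted in straight to their original array, untouched, while opted-in volumes pass through the virtual mapping layer.

# Illustrative only; invented names, not a real product interface.
class DirectPath:
    # I/O goes straight to the volume's original array, untouched by virtualization.
    def handle(self, volume: str, io: bytes) -> str:
        return f"{volume}: sent directly to its original array"

class VirtualPath:
    # I/O goes through the network-based mapping layer.
    def handle(self, volume: str, io: bytes) -> str:
        return f"{volume}: routed via the virtual mapping layer"

virtualized = {"vol01": True, "vol02": False}  # chosen volume by volume
direct_path, virtual_path = DirectPath(), VirtualPath()

def route_io(volume: str, io: bytes) -> str:
    path = virtual_path if virtualized.get(volume, False) else direct_path
    return path.handle(volume, io)

print(route_io("vol01", b"x"))  # routed via the virtual mapping layer
print(route_io("vol02", b"x"))  # sent directly to its original array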

In addition, this approach to storage virtualization protects investments in value-added functionality already on the storage array. These solutions allow the user to continue using existing array-based replication technologies.

In summary, a distributed, open, network-based approach to storage virtualization finally delivers the practical value that users have sought from this technology. No longer a technology in search of a problem, network-based storage virtualization promises to provide an unprecedented level of control over the infrastructure. It will meet the growing need for complete Non-Disruptive Operations, help to simplify and optimize the management of networked storage, and enhance the overall flexibility required to operate within today's highly complex environments.

A discussion of virtualization architecture will appear in next month's issue.

Mark Lewis is executive vice president of EMC Software at EMC Corp. (Hopkinton, MA)

www.emc.com
