
The split-path architecture: built for enterprise-class virtualization.

Today, most application downtime results from planned outages associated with common IT maintenance tasks. For storage administrators, many of these tasks are related to moving data to the appropriate resource at the appropriate time. Common activities such as lease rollovers, tech refreshes, and the realignment of resources to meet service level agreements require applications to be taken offline while these tasks are performed. Block storage virtualization is a technology that enables IT administrators to migrate data to the appropriate resource at the right time without impacting application availability. Non-disruptive data migrations can be performed not only within an array but also across arrays from the same or different suppliers. In addition, block storage virtualization provides a central management point for storage administrators to improve utilization rates while lowering administration costs.

First Generation Block Storage Virtualization Architectures

Block storage virtualization utilizes an intelligent hardware platform that is either embedded directly into the storage network or sits between virtualized storage and the SAN. This moves the scope of functions such as data access and volume management to the networking layer, expanding them beyond the boundaries of individual storage and server platforms. Block storage virtualization masks the complexity of connecting heterogeneous storage arrays on the back end, presenting the associated capacity in a consistent way from a common, virtual resource pool.
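The logical-to-physical mapping at the heart of this pooling can be sketched in a few lines. The following is a simplified illustration, not any vendor's implementation; the array names, extent size, and mapping scheme are assumptions chosen for clarity.

```python
# Hypothetical sketch: a virtualization layer pools capacity donated by
# heterogeneous arrays and presents it as a single logical volume.
# Extent size and array identifiers are illustrative assumptions.

EXTENT_BLOCKS = 1024  # blocks per extent (assumed for this example)

class VirtualVolume:
    def __init__(self):
        # Ordered list of (array_id, physical_start_block) extents.
        self.extents = []

    def add_extent(self, array_id, physical_start):
        """Donate one extent of back-end capacity to the pool."""
        self.extents.append((array_id, physical_start))

    def resolve(self, logical_block):
        """Map a logical block address to (array, physical block)."""
        idx, offset = divmod(logical_block, EXTENT_BLOCKS)
        array_id, phys_start = self.extents[idx]
        return array_id, phys_start + offset

vol = VirtualVolume()
vol.add_extent("array_A", 0)        # capacity from one supplier's array
vol.add_extent("array_B", 50_000)   # capacity from a different supplier
print(vol.resolve(100))    # -> ('array_A', 100)
print(vol.resolve(1500))   # -> ('array_B', 50476)
```

Because the host only ever sees the logical address space, extents can be remapped to a different array (for example, during a tech refresh) without the application noticing.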

Because of these capabilities, block storage virtualization enables several powerful usage models and benefits, including simplified management of heterogeneous infrastructures, non-disruptive operating capabilities, and the seamless movement of data across tiers and types of storage.

The first generation of block storage virtualization solutions consisted of virtualization software loaded on a general-purpose server running a standard operating system such as Linux or Windows. These solutions were typically in-band, meaning that I/O handling and mapping of logical to physical storage devices were all handled by the same, limited processing resources.

While this approach simplifies management by centralizing the volume management function, placing a general-purpose server that lacks specialized hardware and sufficient processing power at the core of the storage network inevitably limits performance scalability in enterprise environments. As the infrastructure grows, the aggregate storage behind the virtualization server contains many more storage processors and far more cache than the server itself. As the number of I/O requests and simultaneous mobility sessions increases, the server's general-purpose processing resources become overwhelmed, and applications see significant increases in latency. This architecture also caches data before writing it to attached storage resources while the I/O mapping functions are processed, placing cached data at risk in the event of a failure of the virtualization device.

Network-based appliances and array-based block storage virtualization solutions are also in-band solutions that suffer the same performance scalability and data integrity risks as those built on industry-standard servers. These solutions place data at risk by caching data and acknowledging writes to the host before the data has been safely written to attached arrays. Like server-based solutions, these systems cannot match the processing power of the aggregate back-end storage resources as the infrastructure grows, resulting in a bottleneck that increases latency.
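The data integrity exposure described above comes from the ordering of the acknowledgment relative to the back-end commit. The toy model below makes the failure mode concrete; all class and method names are hypothetical, and a real appliance would involve mirrored, battery-backed cache rather than this deliberately simplified picture.

```python
# Illustrative model of the write-back exposure in an in-band device:
# the appliance acknowledges the write from volatile cache before the
# data reaches the array. Names are hypothetical.

class BackEndArray:
    def __init__(self):
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data

class InBandAppliance:
    """Caches the write and acknowledges before it reaches the array."""
    def __init__(self, array):
        self.array = array
        self.cache = {}

    def write(self, lba, data):
        self.cache[lba] = data
        return "ack"          # host believes the data is safe; it is not yet

    def flush(self):
        for lba, data in self.cache.items():
            self.array.write(lba, data)
        self.cache.clear()

    def crash(self):
        self.cache.clear()    # unflushed writes are lost with the device

array = BackEndArray()
appliance = InBandAppliance(array)
appliance.write(42, b"payload")   # host receives "ack"
appliance.crash()                 # device fails before flushing
print(array.blocks.get(42))       # -> None: acknowledged data is gone
```

The split-path approach described later avoids this window by returning the acknowledgment only after the back-end array has committed the write.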

First-generation out-of-band solutions sought to resolve the scalability limitations of their in-band competitors. Their architectural approach separated data movement from control operations, putting control-path processing on a standard server and data-path management on host-resident agents. While this improved performance, it brought back one of the big challenges associated with logical volume managers: loading, maintaining, and qualifying software on each server.

What was lacking was an architecture that delivered a combination of consolidated management and enough bandwidth and processing power to handle demanding enterprise workloads.

Intelligent Switches and Specialty Software

The combination of intelligent switches and specially designed software applications creates an architecture that effectively addresses both the scalability and data integrity limitations of previous solutions. Besides providing basic Layer 2 switching capabilities, intelligent switches are characterized by incremental hardware and processing power designed specifically to host intelligent applications in the storage network. The additional compute resources are generally provided by an ASIC (Application-Specific Integrated Circuit) at each port to manage each I/O in-line at wire speed. Brocade and Cisco are two leading vendors currently producing these intelligent switches.

With the development of intelligent SAN switches and specialized software, virtualization services such as centralized volume management, data mobility and replication can now be delivered in real time. The specialized software running on a dedicated, highly available appliance interacts with the intelligent switch ports to manage I/O traffic and map logical-to-physical storage resources at wire speed. This results in a solution that delivers enterprise-class scalability and data integrity.

The Split-path Architecture

Unlike other approaches to networked storage virtualization, this model, based on what is known as a split-path architecture, takes advantage of intelligent SAN switches to perform I/O redirection and other virtualization tasks at wire speed.

The value of the split-path architecture can be seen in a typical environment in which data flows into an appliance- or controller-based system. While the data is flowing, the system CPU is typically occupied by I/O-intensive requests and other processing requirements. As a result, resources such as CPU and cache are overwhelmed, resulting in network bottlenecks and application latency.

The split-path architecture avoids this scenario with the help of real-time, port-level processing performed by dedicated ASICs. These ASICs open Fibre Channel frames and perform the I/O mapping required to reroute the frames in less than 20 microseconds per frame. Writes are acknowledged to the server only after they have been safely written to attached storage. All of this happens at "SAN speed," increasing performance while protecting data. Data is protected because there is no risk of losing it if a virtualization device is compromised. Scalability is enhanced because the funnel created by general-purpose processing has been removed: server resources now have direct access to the full storage processing power of back-end storage resources. The result of this architectural model is enhanced data integrity and improved scalability driven by increased processing power.
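The division of labor above can be sketched in a few lines: the control path programs a mapping table, and the data path consults that table per frame and forwards it without ever caching the payload. This is a conceptual illustration only; the table layout, field names, and extent size are assumptions, and a real ASIC performs the lookup in hardware rather than in software.

```python
# Sketch of split-path I/O redirection. The control path (an appliance)
# programs the mapping table; the data path (modeled as a function)
# rewrites each frame's destination and forwards it at "wire speed."
# All names and the table layout are illustrative assumptions.

EXTENT_BLOCKS = 1024  # blocks per mapped extent (assumed)

# Control path: maps (virtual_target, extent_index) -> (physical_port, base_lba)
mapping_table = {
    ("vlun0", 0): ("array_A_port", 0),
    ("vlun0", 1): ("array_B_port", 50_000),
}

physical_arrays = {"array_A_port": {}, "array_B_port": {}}

def redirect_write(virtual_target, lba, payload):
    """Data path: look up the mapping, forward the frame, and return the
    acknowledgment only after the back-end write completes, so no data
    ever sits in volatile cache on the virtualization device."""
    extent, offset = divmod(lba, EXTENT_BLOCKS)
    port, base = mapping_table[(virtual_target, extent)]
    physical_arrays[port][base + offset] = payload   # back-end commit
    return "ack"                                     # ack follows the commit

status = redirect_write("vlun0", 1500, b"data")
print(status)                                   # -> ack
print(physical_arrays["array_B_port"][50_476])  # -> b'data'
```

Note the contrast with the in-band model: the acknowledgment is the last step, after the physical array holds the data, so a failure of the virtualization device cannot lose an acknowledged write.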

Future Developments

Currently, forward-looking virtualization vendors are working with intelligent switch vendors to promote a standard API between intelligent switches and applications, called the Fabric Application Interface Standard (FAIS). The standard is being developed in the INCITS T11.5 task group, part of the same committee responsible for the Fibre Channel standards. Using a standard API creates a non-intrusive implementation that protects against future changes in underlying hardware configurations.


Block storage virtualization is a powerful technology that enables IT managers to move data to the right resource at the right time without incurring application downtime. This capability lets IT managers improve utilization rates while increasing application availability.

The split-path architecture combines intelligent SAN switches with dedicated software to deliver scalable performance and improved data integrity. This protects existing investments in the processing power and cache of the infrastructure by eliminating the bottleneck created by in-band solutions. As a result, the split-path architecture provides both scalable performance and increased data integrity to meet the requirements of enterprise-class environments.

Doc D'Errico is the vice president of the Infrastructure Software Group at EMC Corporation. He holds eight storage technology related patents, has submitted eleven others and is recognized for driving EMC and the industry to better and broader ways to deal with system interoperability and standardization. Doc is also the author of the Introduction to the Universal Command Guide.
COPYRIGHT 2006 West World Productions, Inc.

Article Details
Title Annotation: Storage Management
Author: D'Errico, Doc
Publication: Computer Technology Review
Date: Sep 1, 2006

