
Evaluating the requirements for the storage network backbone.

It was only a few years ago that storage area networks (SANs) were exclusive to early-adopter technologists and those who could justify the return on investment (ROI) from what would be a large investment in hardware, software and expertise. Now that SAN ROI is considered proven and applicable to most enterprise data centers, SANs are among the key information technology assets accelerating productivity and trimming costs for today's enterprise. Through this transition, SAN technology has evolved, with some of the most significant advances occurring not in storage or application resources, but in the network that connects them.

Historically, the network connectivity portion of the SAN has been purchased in conjunction with the storage array. Each time IT purchased new storage, it would be accompanied by network components (switches, cables, etc.) to allow connectivity to the application hosts. This incremental approach to storage networking was appropriate for the early stages of SAN deployment, as SANs were usually tactical rollouts associated with particular applications. However, project-oriented rollouts are giving way to a more strategic "backbone" architecture that is independent of particular storage and server resources and allows higher disk utilization through increased consolidation. As enterprise IT professionals begin to architect the storage network backbone, it is crucial to consider fundamentals that will serve through several data center lifecycles, including scalability, modularity, interoperability, visibility and control.


Scalability

To this day, many SAN deployments have consisted of tactical, project-oriented architectures. Built upon 8- and 16-port fabric switches, these "SAN islands" provide effective connectivity for a particular application or storage resource, but do not scale to meet enterprise-class, long-term requirements. To examine the scalability limitations of today's fabric switches, it is important to understand the requirements for inter-switch links (ISLs) in typical core/edge fabric architectures.
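The ISL overhead can be made concrete with a little arithmetic. The figures below are illustrative (not from the article): a single-core core/edge fabric built from small fixed-port switches, where every edge switch gives up some of its ports to ISLs.

```python
# Illustrative arithmetic: port overhead of ISLs in a core/edge fabric
# built from small fixed-port switches. Topology assumptions (one core
# switch, uniform ISL count per edge switch) are ours, for illustration.

def usable_ports(edge_switches: int, ports_per_switch: int, isls_per_edge: int) -> dict:
    """Count device-facing ports once ISLs are subtracted."""
    # Each edge switch loses `isls_per_edge` ports to core connectivity.
    edge_device_ports = edge_switches * (ports_per_switch - isls_per_edge)
    # The core switch must terminate every edge ISL.
    core_ports_needed = edge_switches * isls_per_edge
    # Fan-in: device ports contending for each edge switch's ISL bandwidth.
    oversubscription = (ports_per_switch - isls_per_edge) / isls_per_edge
    return {
        "device_ports": edge_device_ports,
        "core_ports_needed": core_ports_needed,
        "oversubscription": oversubscription,
    }

# Six 16-port edge switches, each with 2 ISLs to the core:
print(usable_ports(6, 16, 2))
# -> {'device_ports': 84, 'core_ports_needed': 12, 'oversubscription': 7.0}
```

With 16-port switches, scaling the fabric means either burning more ports on ISLs (raising cost per usable port) or accepting a steeper oversubscription ratio, which is the trade-off that motivates the backbone approach.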

IT professionals are now considering the opportunity to build a strategic, independent network infrastructure that will scale to meet the enterprise's needs for the next several years--the storage network backbone. The storage network backbone provides connectivity for hundreds of storage and application resources without wasting costly ports to connect other switches.

The strategic independence of the storage network backbone has empowered enterprise IT professionals to build and manage a more universal, utility-based infrastructure that can scale beyond the limitations of traditional SAN fabrics. While there will always be tactical decisions based on individual SAN applications, and those applications may require specific network components for support, the most strategic network is one that can universally support a diverse set of requirements now and into the future.


Interoperability

A main focus of the storage network industry over the last few years has been to promote overall interoperability among SAN components. To break away from the exclusive "early-adopter" stigma, SAN deployments are expanding beyond unique and specific interoperability certifications, where connectivity purchases were an "accessory" to disk purchases. Traditionally, SAN configurations have been built using a series of "interoperability matrices" covering storage subsystems, host operating systems, HBAs and any other components in the data path. Referencing a matrix from one or more SAN vendors has traditionally been necessary for support agreements, but with the evolution of standards and the sheer volume of SAN components and versions, SAN interoperability and support have been elevated to a more mainstream implementation. It is no longer realistic to expect SAN architects to cross-reference interoperability matrices (some of which are nearly 1,000 pages), and vendors are accordingly adopting a more universal level of support.


Modularity

Network growth not only means being able to satisfy higher port-count requirements; it also means being able to accommodate future rates, protocols and services. While the vast majority of today's storage networks are connected via 2Gb/s Fibre Channel, enterprise IT organizations are anticipating the deployment of 4Gb/s and 10Gb/s Fibre Channel, iSCSI, Fibre Channel over IP and network-based storage services. However, building standalone networks for each new rate, protocol and service contradicts the strategic goal of building the storage network backbone as an independent utility, and can significantly reduce the ROI associated with network infrastructure purchases.

The goal of the storage network backbone architecture is to be modular enough to accommodate future network directions in the least disruptive and most cost-efficient manner possible. In order to do so, the storage network must offer new levels of flexibility, fault tolerance and investment protection.

Flexibility in a storage network switch allows the SAN architect the ability to host multiple network services (rate, protocol, etc.) in the most appropriate increment, under the same managed system. By limiting the number of ports on a single hardware interface, users can also contain port failures, limiting downtime to adjacent ports when swapping out the failed port. Investment protection comes not only from the ability to support additional interface technologies, but integrated services as well.


Visibility and Control

In today's enterprise, SAN performance is typically expressed through a number of metrics collected directly from the SAN devices, helping administrators diagnose the overall effectiveness of the infrastructure in supporting business applications. SAN device performance metrics may come from the host, such as CPU utilization; from the storage array, such as disk-seek times; or from the network, such as port utilization statistics. Traditionally, far more administrator time has been spent evaluating the performance of the host and array, where much more tangible metrics have been available.

The storage network, however, has yet to be considered either as a suspect for degradation or as an opportunity for overall improvement. This is largely due to the limited visibility into storage network performance, and to the false impression that a network has latent bandwidth when it is actually congested and causing application-level performance degradation. As the storage network backbone takes shape, understanding network performance, particularly when and where congestion is occurring, is one of the most effective ways to manage the overall performance of the SAN.

Understanding Congestion

Because the purpose of a network is to allow communication among devices that share common resources, every network presents the opportunity for congestion. The scenario is only magnified in storage networks, where there are typically many hosts, or initiators, communicating with relatively few storage ports, or targets. Many operate under the false expectation that a network built with "non-blocking" switches (those able to service all ports at line rate) can never be at fault when overall SAN performance falters and application performance degrades. However, no amount of raw switch performance can overcome the fundamental bottleneck that arises when two or more initiators require access to the same target.
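The fan-in bottleneck can be sketched numerically. This is an illustrative model, not any vendor's algorithm; the equal-share behavior mirrors the description later in the article of how typical switches divide a storage port among all servers requiring access.

```python
# Hedged sketch: why a "non-blocking" switch cannot remove target-port
# congestion. All figures are illustrative, not from the article.

def target_port_share(initiator_demands_mbps, target_line_rate_mbps):
    """Per-initiator throughput when aggregate demand hits one target port.

    Simplified model: if total demand exceeds the target's line rate,
    each initiator is throttled to an equal share, capped at its own
    demand (no redistribution of unused share).
    """
    total = sum(initiator_demands_mbps)
    if total <= target_line_rate_mbps:
        return list(initiator_demands_mbps)  # no congestion: all demands met
    fair_share = target_line_rate_mbps / len(initiator_demands_mbps)
    return [min(d, fair_share) for d in initiator_demands_mbps]

# Three hosts each wanting 100 MB/s from a single 200 MB/s storage port:
print(target_port_share([100, 100, 100], 200))
# each host is squeezed to ~66.7 MB/s despite a non-blocking switch
```

The switch fabric itself is never the constraint here; the shared target port is, which is exactly the bottleneck the paragraph above describes.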

In response to congestion, the Fibre Channel protocol uses a carefully architected system of fabric device communications and credit-based transmissions to ensure network stability and traffic integrity. When presented with the potential for congestion, such as more traffic from inbound ports than a destination port can handle, Fibre Channel switches do not drop frames--unlike Ethernet/IP networks. Instead, "backpressure" is used to throttle all inbound traffic bound for the same destination to a level the target can accommodate. Translated to the device level, typical switches equally share the bandwidth of a storage port across all servers requiring access. Even conventional director-class switches, with Virtual Output Queue (VOQ)-based architectures, manage all traffic bound for the same outbound port from the same queue.
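The credit mechanism can be sketched in a few lines. This is a simplified model of Fibre Channel buffer-to-buffer credits, not an implementation of the standard: a sender may only transmit while it holds credits, and each credit is returned (the R_RDY primitive) only when the receiver frees a buffer. Credit counts and frame names are invented.

```python
# Minimal sketch of Fibre Channel-style buffer-to-buffer credit flow
# control. Simplified model for illustration only.

from collections import deque

class CreditedLink:
    """A lossless link: frames wait at the sender instead of being dropped."""

    def __init__(self, credits: int):
        self.credits = credits    # BB_Credit granted by the receiver
        self.in_flight = deque()  # frames buffered at the receiver

    def try_send(self, frame) -> bool:
        if self.credits == 0:
            return False          # backpressure: sender must hold the frame
        self.credits -= 1
        self.in_flight.append(frame)
        return True

    def receiver_drain(self) -> None:
        """Receiver processes one frame and returns one credit (R_RDY)."""
        if self.in_flight:
            self.in_flight.popleft()
            self.credits += 1

link = CreditedLink(credits=2)
print([link.try_send(f) for f in ("f1", "f2", "f3")])
# -> [True, True, False]: the third frame is held at the sender, not dropped
link.receiver_drain()        # a credit comes back from the receiver...
print(link.try_send("f3"))   # -> True: ...and the held frame can now go
```

The key contrast with Ethernet/IP is visible in `try_send`: when credits run out, the frame simply is not transmitted, so nothing is ever discarded; the cost is that congestion propagates upstream as waiting senders.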

Even where administrators note a port utilization of, say, 160 MB/s on a 200 MB/s link and consider the network underutilized, network congestion is likely holding back the performance of SAN applications during peak traffic loads. This presents quite a dilemma for managers of growing storage networks, who must extract the best possible performance from an infrastructure meant to consolidate resources and increase utilization levels. Compounding the problem, many of today's storage networks do not provide visibility into network congestion beyond high-level port utilization statistics.
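The way an average utilization figure hides peak congestion can be shown with invented per-interval numbers, a sketch in the spirit of the 160-of-200 example above:

```python
# Illustrative sketch: a link can average well below line rate while
# still congesting at peak. Per-interval offered loads are invented.

def utilization_report(offered_mbps, line_rate_mbps):
    """Compare average delivered utilization with time spent congested."""
    # The link can never deliver more than line rate in any interval.
    delivered = [min(x, line_rate_mbps) for x in offered_mbps]
    avg_util = sum(delivered) / len(delivered)
    # Any interval where offered load exceeds line rate is congested:
    # traffic is backpressured even though the average looks healthy.
    congested_intervals = sum(1 for x in offered_mbps if x > line_rate_mbps)
    return avg_util, congested_intervals

# Offered load per interval (MB/s): quiet periods plus bursts over 200 MB/s.
offered = [80, 120, 300, 350, 100, 250, 90, 310]
avg, congested = utilization_report(offered, 200)
print(f"average utilization: {avg:.2f} MB/s")          # -> 148.75
print(f"congested intervals: {congested} of {len(offered)}")  # -> 4 of 8
```

A port counter reporting the 148.75 MB/s average would suggest ample headroom, yet applications were throttled in half the intervals, which is precisely the blind spot port-level statistics create.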

A New Era of Network Visibility

In response to this fundamental, lurking congestion problem come new levels of network intelligence that offer visibility into the storage network in order to decipher where congestion is compromising application performance. While there is much discussion of "intelligent" fabrics and switches, much of that intelligence refers to the integration of storage services such as replication, copy, and even heavy-lifting tasks like volume management. These integrated services are a compelling consideration (and another topic entirely), but they do not add intelligence at the network level, where the fundamental requirement exists. The most intelligent storage network backbones are now being built with the connection-level intelligence necessary to present traffic patterns in a granular way. Architecturally, these connection-oriented switches maintain a separate queue for each connection, rather than simply one for each outbound port.


The opportunity to build storage networks with connection-level intelligence dramatically changes the way traffic is managed by the switch, creating an opportunity for the network administrator not only to view traffic patterns in the context of each individual connection (instead of aggregate, port-based statistics), but also to dictate how the switch should service each connection. The administrator can, in effect, create very specific policies for network services in order to ensure that necessary application-level performance requirements are met.
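The per-connection queuing idea can be sketched as a weighted scheduler. This is an assumed design for illustration, not any vendor's implementation: one queue per (initiator, target) pair, with an administrator-set weight deciding how much service each connection receives per round.

```python
# Sketch (assumed design): a switch that keeps one queue per
# (initiator, target) connection can weight service per connection,
# where a single outbound-port queue cannot distinguish the flows.

from collections import defaultdict, deque

class ConnectionScheduler:
    def __init__(self):
        self.queues = defaultdict(deque)        # (initiator, target) -> frames
        self.weights = defaultdict(lambda: 1)   # relative service weight

    def enqueue(self, initiator, target, frame):
        self.queues[(initiator, target)].append(frame)

    def set_weight(self, initiator, target, weight):
        """Administrator policy: favor one connection over another."""
        self.weights[(initiator, target)] = weight

    def service_round(self):
        """Weighted round robin: drain up to `weight` frames per connection."""
        serviced = []
        for conn, queue in self.queues.items():
            for _ in range(self.weights[conn]):
                if queue:
                    serviced.append(queue.popleft())
        return serviced

sched = ConnectionScheduler()
for i in range(3):
    sched.enqueue("hostA", "array1", f"A{i}")
    sched.enqueue("hostB", "array1", f"B{i}")
sched.set_weight("hostA", "array1", 2)  # policy: hostA gets 2x service
print(sched.service_round())
# -> ['A0', 'A1', 'B0']: both flows target array1, yet they are serviced unequally
```

With a single outbound queue, both hosts' frames would interleave indistinguishably; the per-connection queues are what make a policy like "hostA gets twice the service" expressible at all.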

With connection-level intelligence in the network, the SAN can not only differentiate between the traffic from multiple sources within the same port, but also treat each stream differently. Once this connection-level intelligence has been established as a core capability fabric-wide, many dimensions may be introduced--such as time-of-day settings that alter the network profile on a regular schedule as enterprise priorities change, or on-the-fly changes to react to an urgent corporate priority.
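The time-of-day dimension might look like the sketch below: the same connection-level weights, swapped on a schedule. Profile names, connection names and hours are all invented for illustration.

```python
# Hedged sketch of the time-of-day idea: connection-level service
# weights selected by schedule. All names and hours are hypothetical.

PROFILES = {
    # Daytime: the OLTP connection is favored 4:1 over backup traffic.
    "business_hours": {("oltp_host", "array1"): 4, ("backup_host", "array1"): 1},
    # Overnight: the ratio flips so the backup window completes on time.
    "backup_window":  {("oltp_host", "array1"): 1, ("backup_host", "array1"): 4},
}

def active_profile(hour: int) -> str:
    """Nightly backup window from 22:00 to 06:00; OLTP favored otherwise."""
    return "backup_window" if (hour >= 22 or hour < 6) else "business_hours"

def connection_weights(hour: int) -> dict:
    """Weights the fabric should apply to each connection at this hour."""
    return PROFILES[active_profile(hour)]

print(connection_weights(14))  # midday: OLTP connection weighted 4:1
print(connection_weights(23))  # overnight: backup traffic takes priority
```

An on-the-fly override for an urgent corporate priority would simply install a third profile ahead of the schedule, which is what makes the fabric-wide policy base valuable.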


The storage network is quickly evolving from an opaque, connectivity-only cloud into a strategic opportunity to add value from the proposal stage through the life of the SAN. As the storage network continues to emerge as a strategic data center asset, IT professionals should evaluate their priorities for a storage network backbone that not only provides connectivity for today's storage needs, but upholds application performance under pressure, enables new services and evolves to meet future demands. By evaluating storage networking infrastructure solutions based on scalability, interoperability, modularity, visibility and control, IT professionals can better implement storage architectures that stand the test of time.

Eric Blonda is director of product marketing for Sandial Systems, Inc. (Portsmouth, NH)
COPYRIGHT 2004 West World Productions, Inc.

Article Details
Title Annotation: Storage Networking
Author: Blonda, Eric
Publication: Computer Technology Review
Geographic Code: 1USA
Date: Jun 1, 2004
