
Ensuring fairness in the back-end of storage systems.

Over the past year, major storage system providers have steadily been migrating their systems to "switched back-end architectures" to improve system RAS (Reliability, Availability and Serviceability) characteristics and lower overall storage ownership costs. This migration from shared to effectively point-to-point switching has led to discoveries in handling large numbers of devices within the Fibre Channel Arbitrated Loop (FC-AL) environment, resulting in improvements to FC-AL for switched solutions, and--surprisingly--for shared environments as well.

This article examines the concept of "fairness" and how to ensure fair device access in the back-end of storage systems. Fairness, especially when a shared bandwidth protocol like FC-AL is converted to switching, must be fully understood to appreciate the performance and system-level ramifications of a small number of controllers accessing large numbers of drives.

What is Fairness?

Fairness is an aspect of a protocol that ensures devices communicating with each other have guaranteed access to all devices in a system. In the case of storage arrays, fairness helps ensure (among other things) that service requests are met with guaranteed access to all drives. Quality of Service (QoS), by contrast, is a mechanism within a protocol that ensures a guaranteed bandwidth is allocated among multiple channels between devices. These two principles may seem similar in nature, but the difference is significant. QoS may ensure all devices can gain access amongst themselves, but that doesn't guarantee any level of efficiency. In back-end storage systems where more than a hundred drives may communicate with a single controller, QoS mechanisms can give each drive access to the controller; but since the controller scatters its requests among the drives in what can be a relatively non-deterministic manner, performance could be severely affected.

Within the FC-AL protocol, a fair arbitration mechanism allows devices to gain control of the shared bandwidth. A device gaining control begins a "tenancy," the period in which the controlling device transports its frame or control data. FC-AL uses a "fairness window" to ensure that all devices may gain fair access and initiate a tenancy. Within this fairness window, once all devices wanting a tenancy have gained and subsequently released control, the fairness window is reset and the whole process begins anew. This mechanism has been well designed and tested at Fibre Channel Industry Association (FCIA) plugfests, but usually only in limited topologies and with a limited number of devices.
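To make the mechanism concrete, the minimal sketch below (in Python, purely illustrative) models how a fairness window guarantees every arbitrating device one tenancy before the window resets. The device names, requested tenancy counts and the fixed round-robin tie-break are assumptions for the example, not part of the FC-AL standard.

```python
# Minimal sketch (not the FC-AL state machine itself): devices that have
# already won a tenancy in the current fairness window yield arbitration
# to devices that have not, and the window resets once every requester
# has had its turn. Device IDs and the tie-break order are illustrative.

def run_fairness_windows(device_ids, tenancies_wanted):
    """Grant tenancies so every requesting device wins once per window."""
    grants = []
    remaining = dict(tenancies_wanted)        # tenancies each device still wants
    while any(remaining.values()):
        window = [d for d in device_ids if remaining[d] > 0]  # open a new fairness window
        for dev in window:                    # fixed arbitration order (assumption)
            grants.append(dev)                # device wins arbitration, starts a tenancy,
            remaining[dev] -= 1               # transfers its frames, then releases control
        # the window closes only after every requester above has had one tenancy
    return grants

# Three drives want 2, 1 and 3 tenancies respectively.
print(run_fairness_windows(["T1", "T2", "T3"], {"T1": 2, "T2": 1, "T3": 3}))
# -> ['T1', 'T2', 'T3', 'T1', 'T3', 'T3']  (each window serves all waiting devices once)
```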

When the shared bandwidth architecture is replaced with a switched architecture, the rules for handling fairness change slightly, and opportunities arise to ensure guaranteed access to all devices--regardless of their distance from each other in a given topology. For example, in the topology shown in Figure 1, targets in SBOD (Switched Bunch Of Disks) 4 should have access to the initiators that is just as guaranteed as that of the targets in SBOD 1. Balanced fairness, whether switched or shared and independent of loading or scale, requires additional mechanisms beyond what is included in the protocol to provide those guarantees.

[FIGURE 1 OMITTED]

Fairness Problems

In a shared environment, all devices in a system are involved in each transaction. The drawback of the shared architecture is that each device contributes latency, slowing system response and significantly limiting performance as the number of devices increases. There are many scenarios where back-end systems fail to provide fair access to devices, regardless of whether the architecture is shared or switched.

A very simplistic example consists of an initiator and two hard drives. The first hard drive gains control of the system and transmits frames until all the buffers on the initiator are full and the initiator can no longer accept input. The second hard drive then gains control of the system and attempts to transmit its data. Since the buffers on the initiator are still full, the second hard drive is denied the ability to transmit and the initiator closes the connection. On the next fairness window, the first hard drive again gains control of the system and transmits its frames, and the second hard drive is again unable to transmit. This cycle can repeat until the second drive generates a reset of the system or the initiator removes the drive from its list of targets. This is a simple, conceptual example of drive starvation: even here, the fairness window works properly, exactly as designed, yet no data is transported for certain devices.
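The sketch below models that scenario under assumed numbers (four initiator buffers, eight queued frames per drive, buffers draining only between windows). It is a conceptual illustration of the starvation pattern, not a model of any particular product.

```python
# Hedged sketch of the two-drive starvation scenario above. Buffer counts,
# frame counts and the "drain between windows" behaviour are illustrative
# assumptions chosen to reproduce the described pathology, not measured values.

INITIATOR_BUFFERS = 4            # receive buffers (credits) on the initiator

def simulate(windows=5):
    delivered = {"drive1": 0, "drive2": 0}
    for _ in range(windows):
        free = INITIATOR_BUFFERS             # buffers free at the start of the window
        # Drive 1 arbitrates first and sends until every buffer is full.
        sent = min(free, 8)                  # drive 1 has 8 frames queued (assumption)
        delivered["drive1"] += sent
        free -= sent
        # Drive 2 now wins arbitration, but no buffer credit remains,
        # so the initiator closes the connection without accepting a frame.
        if free > 0:
            delivered["drive2"] += min(free, 8)
        # The fairness window resets, buffers drain before the next window,
        # and the same sequence repeats.
    return delivered

print(simulate())    # -> {'drive1': 20, 'drive2': 0}: drive 2 is starved every window
```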

Most storage system vendors have a recommended drive limit in the back-end of their storage systems because of the performance and stability issues introduced as drives are added. Detecting fairness issues in systems with smaller drive counts is difficult, if not impossible, and the issues have gone largely undiagnosed due to the complexities involved. This is an important observation: as switching has become more prevalent and systems with larger numbers of drives are tested to compare the new switched systems against the old shared systems, instabilities in the large shared systems have become more common. The instabilities are observed as performance dips, access delays and, in the worst cases, a complete loss of access to certain drives.

Back-end switching solutions dramatically reduce the amount of latency in a system and allow a full complement of hard drives (up to 125) to be accessed. Because of this full drive count capability, back-end switching solutions have been tested from the beginning at full system drive limits. As a result of these full system topologies and the need to apply a shared bandwidth fairness algorithm to switched bandwidth systems, fairness has been analyzed in more detail than ever before. An example of why the fairness algorithms need to change when converting to a switched bandwidth architecture can be described using Figure 1 again. Initiator H1 requests information from both T1 and T60. Subsequently, both T1 and T60 prepare their data and both attempt to open H1. With switching, T1 has immediate access to H1, but T60 must work its request through each SBOD until it can present its request to H1. Without the shared passing of control from device to device, the back-end switches must implement mechanisms to ensure that T1 and T60 are both provided access to H1--and in such a way that repeated accesses don't starve out T60.
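One plausible way a switch could provide that guarantee, sketched below purely as an assumption rather than a description of any vendor's implementation, is a rotating-priority arbiter on the port facing H1: after each grant the winner drops to the lowest priority, so a distant requester such as T60 is served within one rotation no matter how often T1 re-requests.

```python
# Illustrative rotating-priority arbiter (an assumed mechanism, not a
# documented design): among all requesters pending for the H1-facing port,
# the highest-priority one wins, then rotates to the back of the order.

from collections import deque

class RotatingArbiter:
    """Grant the H1-facing port to one requester at a time, rotating priority."""

    def __init__(self, requesters):
        self.order = deque(requesters)       # current priority order, head = highest

    def grant(self, pending):
        """Pick the highest-priority pending requester, then rotate it to the back."""
        for dev in list(self.order):
            if dev in pending:
                self.order.remove(dev)
                self.order.append(dev)       # winner becomes lowest priority next time
                return dev
        return None

arb = RotatingArbiter(["T1", "T60"])
# T1 re-requests on every cycle; T60 has requests pending only on the first two.
for cycle in range(4):
    winner = arb.grant({"T1", "T60"} if cycle < 2 else {"T1"})
    print(cycle, winner)     # -> T1, T60, T1, T1 : T60 is served on the second cycle
```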

Some of the issues for which embedded storage switch providers must supply solutions include:

Ping Pong. A common problem in switched back-ends comes from collisions when switches are serially connected (Figure 1). A phenomenon known as "ping-pong" can occur, causing retry after retry of a connection until timing allows one participant to finally "win." The effects of this type of collision become very noticeable in certain configurations, and a similar problem exists in the configuration shown in Figure 2 (see the sketch following this list).

[FIGURE 2 OMITTED]

Device Starvation. In either shared or switched topologies, the possibility exists that one or more devices in the back-end fail to gain access. The result of this type of fairness problem is device starvation, with accesses involving that device failing repeatedly and significantly. Device starvation will usually cause an initiator to time out and will lead to system-level problems.

Resonance. There are cases where a system resonance can occur, causing non-deterministic system behavior that results in sporadic starvation or access instabilities. Such resonances depend on topology, number of targets, number of initiators, block size, read/write ratio, random/sequential ratio and other factors, and they can occur in both shared and switched topologies.
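The sketch below models the ping-pong collision noted in the first item of this list: with identical, fixed retry timing the two open requests across a cascade link collide indefinitely, while breaking the symmetry (here with randomized backoff, shown only as one possible remedy, not any product's documented approach) lets one side win. The time units and the overlap threshold are arbitrary assumptions.

```python
# Illustrative model of the "ping-pong" collision: two switch ports issue
# open requests across the same cascade link, collide, and retry. Identical
# fixed delays keep the requests colliding forever; jittered delays let the
# timing drift apart so one side finally wins. All values are arbitrary.

import random

def first_success(delay_a, delay_b, max_retries=50):
    """Return the retry on which the two requests stop colliding, or None."""
    t_a = t_b = 0.0
    for attempt in range(max_retries):
        if abs(t_a - t_b) > 0.5:             # requests no longer overlap: one side wins
            return attempt
        t_a += delay_a()                     # both sides back off and retry
        t_b += delay_b()
    return None                              # still ping-ponging after max_retries

fixed = lambda: 1.0
jittered = lambda: 1.0 + random.uniform(-0.4, 0.4)

print(first_success(fixed, fixed))           # -> None: identical timing never resolves
print(first_success(jittered, jittered))     # -> small number: jitter breaks the tie
```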

Extensive experience with field implementations from several different storage system vendors, ensuring fairness in both switched and shared environments, is a requirement for any embedded storage switch provider. Significant effort must also go into developing comprehensive test plans, scripts and configurations.

QoS algorithms may actually provide a solution, but the performance degradation they introduce generally leaves little or no advantage over shared bandwidth architectures.

Replacing shared bandwidth infrastructures with switched infrastructures provides the benefits of reliability, availability, serviceability and performance while scaling. Embedded storage switches exist that successfully solve storage system fairness issues with solutions that apply regardless of topology, architecture or number of devices in a system.

www.vixel.com

Thomas Hammond-Doel is director of technical marketing at Vixel Corporation (Bothell, WA).
