Making Sense Of 2Gbps Fibre Channel.
In fact, the aggressive advance in speed has been part of the reason for Fibre Channel's success. FC has become much more than a peripheral attachment medium; it is really the basis for the entire Storage Area Network industry. (While there is nothing that limits SANs to Fibre Channel, the fact remains that virtually nothing else is used today.) Moreover, the Fibre Channel physical interface work at 1GHz has become the basis for all the new high speed serial interfaces, including InfiniBand, Gigabit Ethernet, and Serial ATA.
There has been growing interest in seeing Fibre Channel performance improve. To this end, Fibre Channel is advancing on two fronts. First, more is being done to employ latent performance features in the current definition of FCAL. Second, vendors are starting to deliver 2GHz Fibre Channel products. Let's look at the latter first. With high-performance disk drives that can sustain up to 40MB/sec, a gigabit interface cannot support three drives at their maximum throughput. (Of course, in random I/O environments, more could be handled on a 100MB/sec channel. Long sequential transfers require the fewest number of drives to saturate the interface.) Drive data rates will continue to increase and it is clear that a faster connection will be needed.
What Is "2Gbps" Fibre Channel?
Production Fibre Channel drives typically have two separate Fibre Channel interfaces. Each of these has separate links supporting communication to and from the drive at a rate of 1,062MHz or approximately 1GHz. This translates to 106MB/sec. Here is how the arithmetic is done:
* 1.062GHz x 8/10 = 849Mbps, due to 8b/10b encoding using 10 bits to send 8 bits
* 849Mbps / 8 = 106MB/sec (divide by 8 to convert bits to bytes)

Doubling the speed produces over 200MB/sec: (2.124GHz x 8/10)/8 = 212MB/sec, or approximately 2Gbps.
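The arithmetic above can be captured in a few lines. This is just the article's calculation restated as code; the function name is illustrative.

```python
# 8b/10b encoding puts 10 bits on the wire for every 8 bits of data,
# so usable bandwidth is the line rate x 8/10; divide by 8 for bytes.
def fc_throughput_mb_s(line_rate_mhz):
    usable_mbit = line_rate_mhz * 8 / 10   # strip encoding overhead
    return usable_mbit / 8                 # convert bits to bytes

print(fc_throughput_mb_s(1062))   # 1Gbps FC: 106.2 MB/sec
print(fc_throughput_mb_s(2124))   # 2Gbps FC: 212.4 MB/sec
```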
This is not just theory. At CeBit this year, Seagate, Eurologic, and Emulex demonstrated an early implementation of 2Gbps FCAL transferring 200MB/sec per loop. Using both loop interfaces on a set of drives, it is possible to achieve 400MB/sec bandwidth. (Of course, to get this, a system will require two host ports and something faster than a PCI bus.)
The industry has addressed the issue of backward and forward compatibility. The Small Form Factor committee, which has control over connector definitions, has standardized a method that accommodates the requirements of new 2Gbps systems while maintaining compatibility with current 1Gbps implementations. In the 40-pin standard FCAL drive connector, three pins had been allocated for 3-volt power. These have not been used and--as is obvious from the direction of ASIC technology--will not be. These pins have been reclaimed and redefined to specify the speed at which a drive should run. The table shows how the pins are used.
Though there are no known implementations of the original 3.3-volt definition, this change would not cause a problem even if some existed. If a new drive were plugged into an existing cabinet that either did not implement this new use of the pins or had actually used them for 3.3 volts, the drive would simply operate at 1GHz.
If an old drive is plugged into a new cabinet, it could also operate in 1GHz mode. (After all, that is the only speed at which it can run. Note that a loop of Fibre Channel drives must be populated with drives all running at the same interface frequency.) With this standard, it is possible to design a cabinet that the host could manage by means of SCSI Enclosure Services or other control methods. This could include determining the speed at which the cabinet should operate. Thus, it is possible to design a cabinet that could be populated with 1GHz drives and operate at that speed. When those were upgraded or replaced with 2GHz drives, the cabinet could be switched to run at 2GHz.
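Since a loop must run at a single interface frequency, an enclosure managed through SCSI Enclosure Services could pick the fastest rate every installed drive supports. The sketch below is a hedged illustration of that decision, not any vendor's firmware; the function and data shapes are assumptions.

```python
# Illustrative sketch: choose the loop speed as the highest rate
# that every drive on the loop can run, since FCAL requires all
# drives on a loop to operate at the same interface frequency.
def select_loop_speed(drive_speeds):
    """drive_speeds: one set per drive, listing supported rates in Gbps."""
    common = set.intersection(*drive_speeds)  # rates every drive supports
    return max(common)                        # fastest common rate

mixed = [{1}, {1, 2}]              # a legacy 1Gbps drive holds the loop at 1
print(select_loop_speed(mixed))    # 1
upgraded = [{1, 2}, {1, 2}]        # after upgrading, the loop can switch to 2
print(select_loop_speed(upgraded)) # 2
```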
For any application needing bandwidth, the benefit is obvious. The 200+ MB/sec bandwidth enables a single host adapter and loop of drives to deliver data for the most demanding video applications. It will even meet the requirements of some HDTV processing.
Transaction processing systems will also see the value in a couple of ways. First, response times will be better: the data is traveling across the wire faster. Also helping is the fact that the port delays in each device are halved. (A word going across the interface is buffered in each device so that the device can accommodate the slight differences in data rate between individual links in the interface.) Second, it will be possible to support more devices per loop while maintaining the same level of service. This can lead to more cost-effective configurations accomplishing the same amount of work.
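The response-time benefit is simple arithmetic: doubling the line rate halves the time a given transfer occupies the wire. A quick check, using the throughput figures from earlier in the article (the 8KB transfer size is just an example):

```python
# Time a transfer spends on the wire at the two FC rates.
def wire_time_us(nbytes, mb_per_s):
    return nbytes / (mb_per_s * 1e6) * 1e6   # microseconds

t_1g = wire_time_us(8192, 106)   # 8KB at 1Gbps (~106 MB/sec)
t_2g = wire_time_us(8192, 212)   # 8KB at 2Gbps (~212 MB/sec)
print(t_1g, t_2g)                # the 2Gbps wire time is half the 1Gbps time
```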
An I/O channel is used to move data between a computer system and storage. Ideally, a channel should be occupied moving only user data. Unfortunately, that is not possible. Channel bandwidth is also consumed deciding which devices should use the interface next (arbitration) and for moving ancillary information (overhead) such as status and commands. These take away from the theoretical capability of the channel to transfer user information.
Traditional I/O channels are buses. That is, they are essentially tunnels that allow one thing to be going through them at a time. Parallel SCSI and ATA, for instance, can only be engaged in a single point-to-point transfer at a time. Fibre Channel Arbitrated Loop is a little different. On a single loop there are actually two connections between any two devices. There is the outbound half of the loop going from the transmitting device to the receiving device. Then, there is the inbound half going from the receiver back to the sender and completing the loop. These are physically separate connections, and Fibre Channel has always allowed for the possibility of separate communications being in process on each half at the same time. This capability, called full duplex communication, can be used to make Fibre Channel more efficient than traditional buses, and vendors are delivering products with some degree of full duplex support. Here is how they work.
In Fig 1, a single loop of disk drives is connected with a single host. Let's assume drive 2 is sending data to the host (green arrow) as the result of an earlier read command. Using the full duplex capability of FCAL, the host could, at the same time, send commands (blue arrow) to drive 2 to be queued for future operation. This avoids the need to arbitrate at another time for the right to send the commands, as well as the time transmitting those commands would occupy the loop. If the drive were receiving write data from the host on the blue (outbound) half of the loop, it could send status or receive buffer credit tokens on the green (inbound) half.
If a system is kept busy with work for all the drives on an interface, it is possible to bury much of the overhead that burdens a bus architecture behind user data transfers. Fibre Channel can get much closer to the ideal of always being occupied with user data transfers.
Typically, a parallel SCSI subsystem can do about 15,000 to 19,000 single sector operations per second. Subsystems using the same model drives, but with Fibre Channel interfaces, have achieved over 30,000 for the same kind of I/Os, simply by taking advantage of this full duplex feature (and this was at 1Gbps).
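One way to see where that roughly 2x gain comes from is a deliberately simplified model: treat each I/O as data-movement time plus overhead time (arbitration, command, status). A half-duplex bus must serialize the two; a full-duplex loop can overlap overhead on one half of the loop with data on the other. The model and the per-I/O times below are assumptions chosen to echo the article's figures, not measured protocol traces.

```python
# Simplified model of half- vs full-duplex I/O rates.
# Half duplex: data and overhead serialize on the shared channel.
# Full duplex: overhead can hide behind data on the other loop half.
def ios_per_sec(data_us, overhead_us, full_duplex):
    per_io = max(data_us, overhead_us) if full_duplex else data_us + overhead_us
    return 1e6 / per_io

# Assumed 30us of data movement and 30us of overhead per single-sector I/O:
half = ios_per_sec(30, 30, full_duplex=False)  # ~16,700 IOPS
full = ios_per_sec(30, 30, full_duplex=True)   # ~33,300 IOPS
```

When overhead and data times are comparable, fully hiding the overhead doubles throughput, which is in line with the jump from 15,000-19,000 to over 30,000 operations per second described above.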
Further, a single loop can only have two devices talking at a time. While disk drives have some full duplex capability, they typically can only be transferring user data in a single direction at a time. On the other hand, host adapters are available today that can sustain inbound and outbound user data transfers concurrently. How can storage be configured to take advantage of this?
The answer is, essentially, by using Fibre Channel switches and another Fibre Channel drive feature, public loop support. Public loop support refers to the ability of disk drives to attach directly to a switch and not just behind a computer system. When a host adapter has access to drives on multiple loops on a switch and has a workload that keeps the attached drives busy, it can achieve some pretty remarkable performance.
In Fig 2, the host adapter in the CPU at the right is communicating with a drive on the red loop and a drive on the blue loop at the same time. (The drives could be spread across more than two loops; only two are needed to illustrate the feature.) In this configuration, the system can realize the performance potential of the Fibre Channel full duplex capability. Tests have shown as many as 45,000 I/Os per second and, even at this rate, the interface was not yet saturated. Again, this testing was done at 1Gbps.
Certainly, performance has not been the only exciting aspect of Fibre Channel. Its many other features--cable distance, attachment count, dual porting, network-like architecture--have had a lot to do with its success. Now, we can look forward to all of these features on Fibre Channel storage at 200MB/sec and full duplex capability.
Dave Anderson is the director of systems storage architecture at Seagate Technology (Scotts Valley, CA).
Author: Anderson, David Poole
Publication: Computer Technology Review
Date: Jun 1, 2000