RapidIO Zooms Through The Box!
At the end of March, the industry was treated to a new interconnection specification backed by a number of system and chip heavyweights. RapidIO is a new networking/communications architecture designed for passing data and control information among microprocessors; between microprocessors and memory; and among memory-mapped I/O devices. Long term, it may replace current proprietary processor and peripheral buses and today's slower legacy interconnects such as PCI. In fact, many of the same engineers who developed the original PCI spec have been working on RapidIO, which is fully compatible with both PCI and PCI-X.
First drafted by Mercury and Motorola in June of last year, the RapidIO spec was turned over to the RapidIO Consortium and is now backed by Alcatel, Cisco, EMC, Ericsson, LSI, Lucent, and Nortel, among many other companies. Version 1.1 (version 1.0 was the original draft spec) of RapidIO promises compatibility with other new interconnects (including InfiniBand) using standard printed circuit board technology. It boasts throughput exceeding 10Gbps utilizing low-voltage differential signaling (LVDS) technology. RapidIO technology is also transparent to application software, and does not require special device drivers. In its simplest implementation, a RapidIO endpoint can fit inside a standard field-programmable gate array (FPGA).
Think Inside The Box
RapidIO was designed to address what has become an all-too-familiar problem in the industry: communications buses that cannot keep up with the speeds of new chips. Additionally, it moves away from the 10-year-old, shared-bus architecture of PCI in favor of a switched fabric design. A shared-bus architecture like PCI can only be expanded by physically widening the bus; this in turn increases signal skew and requires more pins, which increases costs.
RapidIO is an in-the-box interface (maximum distance is 30 inches) for chip-to-chip and board-to-board communications. It is a packet-switched technology conceptually similar to IP that can be implemented on standard printed circuit boards. Unlike other interconnection architectures, however, the RapidIO specification includes a separate three-layer hierarchy of logical, transport, and physical specifications, which the RapidIO Consortium says will allow scalability and future enhancements while maintaining current compatibility (see Figure).
According to RapidIO documentation, the current version (1.1) of the specification requires 40 pins per port with an 8-bit parallel point-to-point data path, or 76 pins per port with a 16-bit path, and is full duplex. (A RapidIO technical working group is currently working on a serial physical layer.) By contrast, a 64-bit PCI bus has about 85 pins per interface (excluding power and ground). The physical signaling is standard LVDS operating from 100MHz to 1GHz. Data is transferred with a source synchronous clock and sampled on both clock edges. The total bandwidth per port ranges from 3Gbps to more than 60Gbps. Other bus architectures, such as Mercury's Race and Sky's Skychannel, offer good performance but are not scalable to the levels of RapidIO.
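Those bandwidth figures follow from the numbers in the spec. The sketch below (an illustration of the arithmetic, not part of any RapidIO documentation) assumes data is sampled on both clock edges and that the "total bandwidth per port" figure aggregates both directions of the full-duplex link:

```python
# Rough sketch of the per-port bandwidth arithmetic implied by the
# figures above: 8- or 16-bit LVDS paths, 100MHz-1GHz clocks,
# double-data-rate sampling (both clock edges), full duplex.

def port_bandwidth_gbps(width_bits, clock_mhz, full_duplex=True):
    """Aggregate raw signaling rate for one port, in Gbps."""
    per_direction = width_bits * clock_mhz * 2 / 1000.0  # DDR: 2 samples/cycle
    return per_direction * (2 if full_duplex else 1)

# Low end: 8-bit path at 100MHz -> roughly the quoted 3Gbps
print(port_bandwidth_gbps(8, 100))    # 3.2
# High end: 16-bit path at 1GHz -> the "more than 60Gbps" figure
print(port_bandwidth_gbps(16, 1000))  # 64.0
```

This also shows why the quoted low and high ends differ by a factor of 20: doubling the path width and raising the clock tenfold multiply together.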
The RapidIO interconnect looks like a traditional microprocessor and peripheral bus to software, so hardware implementations can hide things like discovery and error management from software, unless a software system elects to participate. The RapidIO interconnect architecture also boasts an optional distributed globally shared memory protocol extension that can be used for symmetric multiprocessing and shared data structures.
While RapidIO is a point-to-point interconnect, endpoints are typically not connected to one another directly. Rather, the initiator sends requests through the intervening fabric devices; transactions are constructed with request-response packet pairs. One of the presumed benefits of RapidIO is the ability to expand the capabilities of PCI while maintaining a low number of pins. Using RapidIO switches, PCI-X bridges can be connected to one another and to an overall switched fabric. An InfiniBand channel adapter can be used to connect the system to a wider network, such as a SAN.
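The request-response pairing can be pictured with a toy model. The sketch below is hypothetical (the field names and packet layout are illustrative, not the actual RapidIO packet format): an initiator sends a request into the fabric, a switch routes it by destination device ID, and the target returns a response carrying the same transaction ID so the initiator can match the pair.

```python
# Toy model of request/response transactions through a switched fabric.
# Packet fields (src, dst, txn_id, size) are illustrative only.

class Endpoint:
    def __init__(self, dev_id):
        self.dev_id = dev_id

    def handle_request(self, packet):
        # Target answers a read request with a response packet that
        # echoes the transaction ID and carries the requested bytes.
        return {"type": "RESPONSE", "src": self.dev_id,
                "dst": packet["src"], "txn_id": packet["txn_id"],
                "payload": b"\x00" * packet["size"]}

class Switch:
    """Routes packets to attached endpoints by destination device ID."""
    def __init__(self):
        self.ports = {}

    def attach(self, endpoint):
        self.ports[endpoint.dev_id] = endpoint

    def route(self, packet):
        return self.ports[packet["dst"]].handle_request(packet)

fabric = Switch()
fabric.attach(Endpoint(dev_id=2))

request = {"type": "READ", "src": 1, "dst": 2, "txn_id": 7, "size": 8}
response = fabric.route(request)
assert response["txn_id"] == request["txn_id"]  # response matched to request
```

The point of the pairing is that the initiator never needs to know the fabric topology; it only matches responses back to outstanding requests.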
Getting The Drop On Multi-Drop
"The developmental history of RapidIO is that its predecessors hit the wall typical of multi-drop buses, which is that they are simply running out of gas," notes Tim Cox, chair of the RapidIO Consortium's marketing committee and director of strategic planning in Tundra Semiconductor's technology office. "Interconnects like Fibre Channel and InfiniBand have addressed the box-to-box issue, but there has been no standard for a high-performance in-the-box interconnect." Cox says that RapidIO can co-exist with PCI and PCI-X while at the same time solving many of the problems of those architectures, including segment and hierarchy limits and the latency associated with secondary and sub-secondary buses. "RapidIO separates the data layer from the transport and control layers, making it easy to scale up" to faster speeds.
Notably, RapidIO is not a serial bus, at least not yet. While Cox notes that the data running on the RapidIO bus is serialized--with each byte of data having its own clock and frame signal--the interface itself runs in parallel. Cox says the parallel interface was chosen because it offers the low latency and the low overhead that's needed to keep the implementation small enough to fit in standard FPGAs and ASICs. "Complex interfaces like InfiniBand take up an entire chip, sometimes more, something which isn't practical for an in-the-box interconnect," he adds. The RapidIO Consortium's Serial Working Group is currently studying ways to serialize the interface for short-distance, box-to-box communications.
One of the most exciting aspects of RapidIO, and one that has gotten virtually no publicity, is the fact that it can be used to connect chips from different vendors in the same box. "RapidIO is an open standard with a processor bus," says Cox, "which means it provides a way to interconnect processors," even those with different instruction sets, so long as the endpoints can communicate. High-end systems running backbones in the networking and communications sector are likely to see the most benefit from such Asymmetrical Multiprocessing (AMP) capabilities.
Intel Corp., which historically has led the industry in the design of new processor buses in PCs--as well as their supporting chip sets--is a member of the RapidIO Consortium. However, it does not sit on the steering committee and does not have a large financial investment in the organization, as steering members do. "The RapidIO Consortium has solid technology, and the initial specification is very complete," says Cary Snyder, senior analyst at chip watcher Microprocessor Report. "But their problem will be acceptance by the rest of the industry; the organization only has 41 member companies, and Intel has not put up the $25,000 that steering members must contribute." Snyder notes that Intel virtually controls high-volume bus adoption, and the chip giant has been developing its own, next-generation bus, code-named 3GIO.
"Intel's participation in RapidIO does not really change the market dynamics," says Bert McComas, principal analyst at InQuest Market Research. "The company is admitting that RapidIO will be successful in its niche, which is high-end AMP."
There are still other new architectures in development, including the AMD-backed HyperTransport I/O specification. "HyperTransport has [more than 100] backing companies," MR's Snyder says. Indeed, if any technology can be considered a competitor to RapidIO, it's probably HyperTransport.
Formerly known as Lightning Data Transport (LDT), the HyperTransport interconnect offers a peak data transfer rate of 6.4GB per second and is aimed at the high-end PC and server markets. A number of industry heavyweights, including Broadcom, Cisco, NVIDIA, and Sun have licensed the technology, according to AMD. HyperTransport also supports PCI and InfiniBand. In judging HyperTransport's market viability, it's important to remember that ServerWorks, one of the industry's largest suppliers of server chip sets, was recently acquired by Broadcom, a HyperTransport supporter. This potentially gives AMD a huge advantage in terms of getting its technology to market quickly, and from a well-known chip maker.
"HyperTransport and RapidIO are addressing similar sets of problems," says InQuest's McComas. "But HyperTransport was designed as an I/O bus, and it's not as close as RapidIO to being a processor bus." McComas believes HyperTransport will find a home in monolithic systems where AMP is less an issue.
Still, Motorola does serve as a steering committee member in the RapidIO camp and has long been a dominant player in the PowerPC chip space. Potentially, this relationship gives the new interconnect a good chance of success in the networking and communications sectors where PowerPC chips are the chosen processors.
Publication: Computer Technology Review
Date: May 1, 2001