MELLANOX TECHNOLOGIES DELIVERS INFINIBAND SERVER BLADE REFERENCE DESIGN.
"The Nitro reference design has been developed to accelerate the time-to-market of production worthy OEM InfiniBand Architecture based blades using Mellanox's industry leading silicon," said Eyal Waldman, CEO at Mellanox. "The Nitro reference design provides much more then just a change in form factor for servers, it demonstrates the major advancements that the InfiniBand Architectures' attributes, features and benefits bring to blade computing."
"As the provider of the industry's leading intelligent platform for networking storage, Brocade is pleased to work with the IBTA and companies such as Mellanox to further the development of InfiniBand technologies," said Jay Kidd, Brocade Vice President of Product Marketing. "As server performance continues to increase, InfiniBand technologies promise to enable companies to deploy larger, more scalable, and highly optimized computing environments that will support new applications such as server clustering. The powerful combination of InfiniBand-based server environments with today's high performance storage area networks will enable new levels of availability, security, scalability, and manageability in the data center."
"InfiniBand I/O is a powerful new technology for high performance and high reliability computing platforms," said Dr. Tom Bradicich, Director of Architecture & Technology, IBM eServer/a xSeries/b. "The Nitro platform demonstrates the intersection of two very complementary technologies: InfiniBand I/O and blade servers -- an innovative combination well suited for future dense server architectures."
"As a leading driver of the InfiniBand architecture specifications, Intel foresaw the benefits that the InfiniBand architecture would deliver to blade computing," said Jim Pappas director of initiative marketing for Intel's Enterprise Platform Group. "Blade server reference platforms like Mellanox's Nitro blades offer a first look at the tremendous value that InfiniBand architecture brings to blade computing."
In addition to providing a high performance data center computing platform, the reference design serves as a software development platform for OEM products and for the PICMG 3.2 initiative (targeted for finalization in mid-2002), which defines InfiniBand as the standard interconnect for the next generation of telecom and data center systems.
The Nitro server blades are based on an Intel 1.26 GHz Pentium III processor and the ServerWorks LE 3.0 chipset. The server blades support up to 3 GB of memory and are both diskless and headless (no video monitor required). The InfiniBand architecture's hardware transport overcomes the latency and bandwidth penalties of LAN-based remote storage. Furthermore, by removing the disk and video controller, critical power and board area are freed up for CPU and memory. This allows Nitro server blades to support higher-speed processors that can apply the bulk of their compute cycles to the application, as the InfiniBand hardware transport eliminates the heavy CPU load of the TCP stack. Therefore, the IT manager not only experiences better performance from higher clock speeds and more usable memory, but also gains as much as 50% more usable cycles when utilizing low-latency, low-overhead RDMA InfiniBand protocols. Together, these components implement a fully scalable server blade system whose performance rivals, and can even exceed, that of large-scale multi-way systems.
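The cycle-accounting behind the "50% more usable cycles" figure can be sketched with back-of-the-envelope arithmetic. The overhead fractions below are illustrative assumptions, not measured values from the Nitro platform:

```python
# Illustrative model of CPU cycles freed when a hardware transport (RDMA)
# replaces a software TCP/IP stack. The 40% and 5% overhead figures are
# assumed round numbers for illustration only.

def usable_cycles(total_cycles, stack_overhead_fraction):
    """Cycles left for the application after protocol processing."""
    return total_cycles * (1.0 - stack_overhead_fraction)

total = 1.0  # normalize: 1.0 = all CPU cycles

tcp_usable = usable_cycles(total, 0.40)   # software TCP stack consumes ~40%
rdma_usable = usable_cycles(total, 0.05)  # hardware transport leaves ~5% overhead

gain = (rdma_usable - tcp_usable) / tcp_usable
print(f"Usable cycles with TCP stack: {tcp_usable:.2f}")
print(f"Usable cycles with RDMA:      {rdma_usable:.2f}")
print(f"Relative gain:                {gain:.0%}")
```

Under these assumed overheads the application gains roughly 58% more usable cycles, in the same ballpark as the article's "as much as 50%" claim; the real figure depends entirely on the workload's network intensity.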
The fully managed, non-blocking 20-port (16+4) switch blade offers total throughput of over 160 Gb/sec. The switch aggregates sixteen 2.5 Gb/sec (1X) ports from the backplane to four 10 Gb/sec (4X) uplink ports on the front of the chassis. The four 10 Gb/sec ports can be used to connect multiple chassis together to create large clusters of server, I/O, or storage blades.
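The quoted 160 Gb/sec aggregate can be checked with simple arithmetic, assuming each full-duplex port is counted in both directions (a common convention for switch aggregate figures):

```python
# Aggregate throughput of the 16+4 switch blade described above.

ports_1x = 16     # backplane-facing 1X ports
rate_1x = 2.5     # Gb/s per 1X port, per direction
ports_4x = 4      # front-panel 4X uplink ports
rate_4x = 10.0    # Gb/s per 4X port, per direction

one_way = ports_1x * rate_1x + ports_4x * rate_4x  # 40 + 40 = 80 Gb/s
full_duplex = 2 * one_way                          # both directions: 160 Gb/s

print(f"One-way aggregate:     {one_way:.0f} Gb/s")
print(f"Full-duplex aggregate: {full_duplex:.0f} Gb/s")
```

Note that the backplane side (40 Gb/s one-way) exactly matches the uplink side, which is why the switch can be non-blocking between the two.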
The passive backplane utilizes a dual-star configuration to redundantly link 16 server or I/O slots through redundant InfiniBand fabric switches. The backplane also provides dedicated lanes for chassis and baseboard management, carrying keyboard, mouse, power, and management traffic, thus greatly reducing the number of cables required for server clusters.
Mellanox is offering customers complete Product Development Kits (PDKs) for the Nitro platform. The PDKs include a 16+4 port Nitrex switch, a Nitro server blade, and a complete chassis system with integrated backplane, power supply and fans. All PDKs include schematics, layout, a bill of materials, and a software development kit (SDK). The SDK contains driver development code, an InfiniBand Architecture Verbs implementation, application examples, and debug/development tools, enabling customers to develop InfiniBand systems based on the reference software. The PDK is also supported by software management and I/O solutions from third-party developers, including JNI, Lane 15 Software, OmegaBand, Vieo and Voltaire.
The Nitro InfiniBand architecture reference chassis platform is available today. In single-piece quantities, the Nitrex 16+4 switch is priced at $15,000, the Nitro InfiniBand architecture server blade at $5,000, and the InfiniBand passive backplane and chassis at $7,500.
Publication: EDP Weekly's IT Monitor
Date: Feb 4, 2002