A splash or all wet? InfiniBand and the server market. (Connectivity).
Intel's withdrawal was a decidedly mixed blessing for the high-speed interconnect crowd. In their anxiety, large investors asked: If InfiniBand is primarily a high-speed interconnect for data centers, how many data center managers really intend to rip and replace their existing networks? The answer they came up with: Precious few, at present. That meant that confining InfiniBand to the data center would establish it as a niche within a niche; the technology would also have to expand into the mid-tier and lower-end server markets without pricing itself out of those price-sensitive segments.
InfiniBand in the Data Center
InfiniBand will not replace the networking configuration in a classic data center computing model. This model sports hefty monolithic servers and rigidly classified server groups and clusters, all connected through PCI-based I/O. But if the data center moves toward a new switched model made up of commodity components connected by a high-performance switching fabric, then InfiniBand would be a logical choice. SW Aaron, vice president of marketing and business development at Topspin, believes that the data center is moving toward the commodity model because of two major trends: server consolidation and utility computing. He said, "These two trends have tangible benefits to database managers because they result in reduced TCO."
* Server consolidation: Allows mainstream data center applications like Oracle and DB2 to move from large monolithic servers to Intel-based clusters running Linux or Windows. These nodes come in at a tenth of the cost of present models, and could bring InfiniBand into the mainstream by making database clusters less costly and boosting performance.
* Utility computing: Configures one server to act as the host, and uses other servers on a dynamic, as-needed basis. Utility computing is especially well suited to rack-mounted servers like blades or bricks. It replaces collections of servers dedicated to a single application and its services. For example, a single application may require a host server along with dedicated servers for testing, quality control, overflow, mirroring, and so on. Each of those dedicated servers is often underutilized, yet cannot be used for additional operations. A utility computing model replaces dedicated servers with server clusters that process operations as needed. An interconnect backbone such as InfiniBand enables utility computing.
Even though the data center market can be highly profitable, it represents only around 10% of the entire server market. That is not a bad thing, but generating strong growth out of the gate means aiming InfiniBand squarely at the mid-range and lower-end server markets. There, the front-runners in the InfiniBand development stakes look to be server cluster backplanes and SMP server replacements for high-end databases.
InfiniBand and Clusters
Traditional clusters are handy but are also complex and expensive. Many server and cluster software vendors are looking to a commodity-based model for this market, using Intel processors, Linux, or Windows, and standard InfiniBand interconnects.
One of InfiniBand's primary advantages in clustered environments is its performance, characterized by high bandwidth/low latency along with lower server overhead.
* High bandwidth/Low latency: InfiniBand's 4X technology enables a relatively inexpensive bandwidth of 10Gbit/sec. The "4X" refers to the width of the InfiniBand link: 1X, which Intel championed, uses a single pair of copper wires per direction, while the more widely supported 4X runs four pairs per direction in parallel, quadrupling 1X's 2.5Gbit/sec speed.
* Lower server overhead: InfiniBand includes RDMA (remote direct memory access) technology. RDMA speeds up server communication by enabling servers to write data directly into other servers' memory. This dispenses with the data-copy operations that eat up major CPU cycles, and it sharply reduces latency. And RDMA does it without requiring changes to application code written to the sockets interface.
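The link-width claims above reduce to simple arithmetic. This sketch assumes the standard InfiniBand per-lane figures of the era (2.5Gbit/sec raw signaling per lane, 8b/10b line encoding), which the article does not spell out; the function name is illustrative:

```python
# Sketch of InfiniBand link-width arithmetic. Assumed standard figures,
# not taken from the article: 2.5 Gbit/s raw signaling per lane, and
# 8b/10b encoding (8 data bits sent per 10 signal bits, i.e. 80% usable).
SIGNAL_RATE_GBPS = 2.5      # raw rate of one lane (one pair per direction)
ENCODING_EFFICIENCY = 0.8   # 8b/10b line encoding overhead

def link_bandwidth(lanes):
    """Return (raw, usable) bandwidth in Gbit/s for an n-lane link."""
    raw = lanes * SIGNAL_RATE_GBPS
    return raw, raw * ENCODING_EFFICIENCY

print(link_bandwidth(1))   # 1X: 2.5 raw, 2.0 usable
print(link_bandwidth(4))   # 4X: 10.0 raw, 8.0 usable
```

This is also why the 4X figure is commonly quoted as "10Gbit/sec": it is the raw signaling rate; usable data bandwidth after encoding is lower.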
Clustering is a broad term, and includes operations such as blade computing, HPCC (high-performance cluster computing), database clusters, caching operations, firewalls, and Web server clusters. Not all clusters require heavyweight compute power, so InfiniBand vendors may initially target rack-optimized servers (blades and bricks), HPCC, and high-end database server replacement projects. Wade Campbell, vice president of marketing for OmegaBand, commented, "We believe that the cluster environments to adopt it first will be the HPCC--found most often in research, some financial OLAP applications, and scientific segments--and the emerging database clusters, which are database engines from dominant database providers that use multiple Intel servers to provide high end performance without the high cost of large SMP servers." Large SMP vendors are already adding InfiniBand to their servers' architecture as a backplane. Since IBM, a leading vendor of SMP (symmetric multiprocessing) servers, is an InfiniBand champion, this is scarcely surprising.
Mark Micheletti, product manager for CATC, agreed that HPCC is a primary target market. "A lot of the players that are remaining in the InfiniBand space have come to the agreement that applications for InfiniBand seem to be shifting to a more critical focus. And HPCs, high-performance computers, make sense in these very early applications." HPCC would especially benefit from using fast Intel-based clusters with a fast interconnect. This configuration can achieve supercomputer processing rates at a fraction of the supercomputer's price tag.
Rack-optimized servers such as blades or bricks are good candidates for InfiniBand backplanes. If blade server installations meet analyst expectations and grow sharply over the next few years, InfiniBand can get in on the ground floor of the new installations.
IDC believes that rackmounted appliance servers are also likely targets for InfiniBand. These servers would benefit from InfiniBand's high performance and low latency, as well as its server-to-server connectivity, clustering, and server-to-storage architectures.
Will InfiniBand make the big splash that investors are hoping for? Maybe, if it achieves key objectives without requiring administrators to replace networking elements on a large scale. Campbell commented, "We believe that the key for IB technology in the next year will be to use it as an enabling technology to allow Ethernet connectivity to be much more efficient within the data center, increasing network utilization and drastically reducing server overhead associated with network traffic."
Publication: Computer Technology Review
Date: Oct 1, 2002