
Germany: Mellanox's FDR InfiniBand Solution with NVIDIA GPUDirect RDMA Technology Provides Superior GPU-based Cluster Performance.

Mellanox® Technologies, Ltd., a leading supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, today announced the next major advancement in GPU-to-GPU communications with the launch of its FDR InfiniBand solution with support for NVIDIA® GPUDirect remote direct memory access (RDMA) technology.

The next generation of NVIDIA GPUDirect technology provides industry-leading application performance and efficiency for GPU-accelerated high-performance computing (HPC) clusters. NVIDIA GPUDirect RDMA technology dramatically accelerates communications between GPUs by providing a direct peer-to-peer data path between Mellanox's scalable HPC adapters and NVIDIA GPUs.
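To illustrate the data path described above, the following sketch contrasts a conventional host-staged GPU-to-GPU send with a direct send of a device buffer, as permitted by a CUDA-aware MPI library such as MVAPICH2. It is an illustrative assumption rather than Mellanox or NVIDIA sample code; the function names and buffer size are hypothetical.

    /* Illustrative sketch (hypothetical helpers, not vendor sample code):
     * sending a GPU buffer over MPI with and without staging through host
     * memory. Assumes a CUDA-aware MPI library such as MVAPICH2. */
    #include <mpi.h>
    #include <cuda_runtime.h>
    #include <stdlib.h>

    #define N (1 << 20)   /* illustrative message size: 1M floats */

    /* Conventional path: copy GPU -> host, then send from host memory.
     * The CPU and system memory subsystem touch every transfer. */
    void send_staged(float *d_buf, int dest)
    {
        float *h_buf = (float *)malloc(N * sizeof(float));
        cudaMemcpy(h_buf, d_buf, N * sizeof(float), cudaMemcpyDeviceToHost);
        MPI_Send(h_buf, N, MPI_FLOAT, dest, 0, MPI_COMM_WORLD);
        free(h_buf);
    }

    /* CUDA-aware path: hand the device pointer straight to MPI. With
     * GPUDirect RDMA, the InfiniBand adapter can read GPU memory directly,
     * so the host-side copy above is no longer needed. */
    void send_direct(float *d_buf, int dest)
    {
        MPI_Send(d_buf, N, MPI_FLOAT, dest, 0, MPI_COMM_WORLD);
    }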

This capability significantly reduces GPU-to-GPU communication latency and removes the CPU and system memory subsystem from the GPU-to-GPU communication path across the network. The latest performance results from Ohio State University demonstrated an MPI latency reduction of 69 percent, from 19.78 µs to 6.12 µs, when moving data between InfiniBand-connected GPUs, while overall throughput for small messages increased by 3X and bandwidth for larger messages increased by 26 percent.

"MPI applications with short and medium messages are expected to gain a lot of performance benefits from Mellanox's InfiniBand interconnect solutions and NVIDIA GPUDirect RDMA technology," said Professor Dhabaleswar K. (DK) Panda of The Ohio State University.

The performance testing was done using MVAPICH2 software from The Ohio State University's Department of Computer Science and Engineering, which delivers world-class performance, scalability and fault tolerance for high-end computing systems and servers using InfiniBand. MVAPICH2 software powers numerous supercomputers on the TOP500 list, including the 7th-largest system, the multi-petaflop TACC Stampede, with 204,900 cores interconnected by Mellanox FDR 56Gb/s InfiniBand.
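The latency figures quoted above come from MPI-level micro-benchmarks of this kind. As a rough illustration of how such a measurement is structured, the sketch below runs a ping-pong between two ranks using GPU-resident buffers, in the spirit of the OSU latency test but not the actual benchmark code; the message size, iteration count, and timing scheme are assumptions.

    /* Minimal ping-pong sketch over GPU buffers, assuming a CUDA-aware MPI
     * such as MVAPICH2 and one GPU per rank. Not the OSU benchmark itself. */
    #include <mpi.h>
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int count = 1024;   /* illustrative small message (bytes) */
        const int iters = 1000;
        char *d_buf;
        cudaMalloc((void **)&d_buf, count);   /* device-resident buffer */

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(d_buf, count, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(d_buf, count, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(d_buf, count, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(d_buf, count, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("average one-way latency: %.2f us\n",
                   (t1 - t0) * 1e6 / (2.0 * iters));

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }

A run of this kind would typically be launched with two ranks (for example, mpirun -np 2) and with the MPI library's CUDA support enabled, such as the MV2_USE_CUDA=1 environment variable in MVAPICH2.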

"The ability to transfer data directly to and from GPU memory dramatically speeds up system and application performance, enabling users to run computationally intensive code and get answers faster than ever before," said Gilad Shainer, vice president of marketing at Mellanox Technologies. "Mellanox's FDR InfiniBand solutions with NVIDIA GPUDirect RDMA ensure the highest level of application performance, scalability and efficiency for GPU-based clusters."

"Application scaling on clusters is often limited by an increase in sent messages at progressively smaller message sizes," said Ian Buck, general manager of GPU Computing Software at NVIDIA. "With MVAPICH2 and GPUDirect RDMA, we see substantial improvements in small-message latency and bisection bandwidth between GPUs communicating directly over Mellanox's InfiniBand network fabric."

© 2013 Al Bawaba (Albawaba.com)

Provided by Syndigate.info, an Albawaba.com company

Article Details
Publication: Mena Report
Date: Jun 18, 2013
Words: 397