April 19, 2010
New standard provides 10 and 40 Gigabit Ethernet clustering and storage-based applications with an efficient and proven RDMA transport over Ethernet
NEW YORK, April 19 -- HPC Financial Markets -- The InfiniBand Trade Association (IBTA), a global organization dedicated to maintaining and enhancing the InfiniBand architecture, today announced the release of a new capability, bringing the power of the Remote Direct Memory Access (RDMA) I/O architecture to Ethernet-based business solutions. The new specification is called RDMA over Converged Ethernet (RoCE), pronounced "Rocky." Products based on RoCE will reach the market during the coming year.
RoCE is built on the highly efficient use of computing resources and brings significant benefits to end users. By reducing the number of servers needed, eliminating cabling and improving application performance, RoCE can produce energy savings and reduce the footprint of Ethernet-based datacenters. Its "one fat pipe" approach to server I/O gives the user great flexibility in deploying applications and is an excellent complement to the virtualization strategies being deployed today. By attacking latency, RoCE increases performance in search, database, financial and high-transaction-rate applications.
"RoCE addresses a key concern of the enterprise -- maximizing and protecting current investments in IT," said Cindy Borovick, research vice president of Datacenter Networks at IDC. "RoCE leverages field-proven RDMA, ubiquitous Ethernet and fabric management solutions. This will benefit datacenter network end users by consolidating data, storage and clustered networking and reducing costs."
RDMA and low-latency clustering have dominated the high performance computing space, as evidenced by the most recent TOP500 list. In datacenter and cloud environments, RDMA is enjoying increasing adoption in business solutions such as data warehousing, financial services and transaction processing.
Low-latency and RDMA capabilities in datacenter fabrics enable end users to achieve significantly higher and more deterministic transaction rates while increasing the efficiency of clustered servers and storage systems and reducing energy consumption -- resulting in significant ROI benefits.
RoCE is implemented in, and downloadable today as part of, the OpenFabrics Enterprise Distribution (OFED) 1.5.1. Many Linux distributions that include OFED support a wide and rich range of middleware and application solutions such as IPC, sockets, messaging, virtualization, SAN, NAS, file systems and databases. RoCE can therefore deliver all three dimensions of unified networking on Ethernet -- IPC, NAS and SAN.
"The new RoCE specification, with a purpose-built and proven RDMA transport, provides the most efficient and light-weight transport over Layer 2 Ethernet," said Asaf Somekh, vice president of marketing at Voltaire and member of the IBTA Steering Committee. "RoCE is expected to enable the enterprise datacenter to serve more clients with a broader range of applications -- all while providing faster response times and reducing the number of servers, cables and switches required."
About the InfiniBand Trade Association
The InfiniBand Trade Association was founded in 1999 and is chartered with maintaining and furthering the InfiniBand specification. The IBTA is led by a distinguished steering committee that includes IBM, Intel, Mellanox, Oracle, QLogic, System Fabric Works and Voltaire. Other members of the IBTA represent leading enterprise IT vendors who are actively contributing to the advancement of the InfiniBand specification. The IBTA markets and promotes InfiniBand from an industry perspective through online, marketing and public relations engagements, and unites the industry through IBTA-sponsored technical events and resources. For more information on the IBTA, visit www.infinibandta.org.
Source: The InfiniBand Trade Association