November 15, 2012
SALT LAKE CITY, UT, Nov. 15 – Today at SC12, Gnodal Limited, the high-performance data center networking company, announced that its Gnodal GS7200 Switch supported Intel's performance testing of ESI Group's PAM-CRASH physics-based simulation software. The test demonstrates that iWARP (RDMA over Ethernet) is a viable alternative to proprietary fabrics, providing the opportunity for High Performance Computing (HPC) operators to benefit from its cost-efficiency and flexibility.
The Gnodal GS7200 Switch was used in a simulation of a car-to-car crash to compare the performance of iWARP technology against InfiniBand. The tests, conducted by researchers at Intel Corporation, demonstrated improved results for 10 Gigabit Ethernet (10GbE) networking with iWARP technology, confirming that the advantages of iWARP over proprietary technologies such as InfiniBand are available without significant performance penalty. This heralds the opportunity to support scientific and engineering applications within a converged network environment, meeting their low-latency requirements through a combination of RDMA and the industry-leading low latency of the Gnodal GS-Series.
"Gnodal GS-Series Switches, as demonstrated in the benchmark, are able to support the requirements of iWARP for typical ISV applications," said Dr. John Taylor, Gnodal Vice President of Technical Marketing. "The Gnodal Ethernet Fabric not only allows all paths to be used within the network, it also dictates 'fairness' in their use, ensuring that applications are not starved of resources."
Gnodal engaged with Intel's LAN Access Division to test a number of ISV codes in a converged network setting, using RDMA-capable NetEffect Ethernet Server Cluster Adapters from Intel and Gnodal GS-Series switches, which use highly efficient implementations of lossless Ethernet standards. This transport offered full support for the OpenFabrics Enterprise Distribution (OFED™) environment, bypassing kernel operations in the data path while also carrying standard TCP/IP flows. This is particularly advantageous for inter-server Message Passing Interface (MPI) communications as well as standard networking operations.
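To make the kernel-bypass data path concrete, the minimal sketch below opens an RDMA connection and posts a single send using the OFED librdmacm endpoint API, which runs over iWARP adapters such as the NetEffect cards mentioned above. This is not the benchmark code: the peer address, port, and payload are placeholders, a matching server that has already posted a receive is assumed, and error cleanup is elided for brevity.

/* Minimal iWARP/RDMA client sketch using the OFED librdmacm endpoint API.
 * Build on a host with the OFED userspace libraries installed:
 *   cc -o rdma_send rdma_send.c -lrdmacm -libverbs
 * The peer address, port, and payload below are illustrative placeholders. */
#include <stdio.h>
#include <string.h>
#include <rdma/rdma_cma.h>
#include <rdma/rdma_verbs.h>

int main(void)
{
    struct rdma_addrinfo hints, *res;
    struct ibv_qp_init_attr attr;
    struct rdma_cm_id *id;
    struct ibv_mr *mr;
    struct ibv_wc wc;
    char msg[] = "hello over iWARP";

    memset(&hints, 0, sizeof hints);
    hints.ai_port_space = RDMA_PS_TCP;            /* iWARP uses the TCP port space */
    if (rdma_getaddrinfo("192.0.2.10", "7471", &hints, &res))
        return perror("rdma_getaddrinfo"), 1;

    memset(&attr, 0, sizeof attr);
    attr.cap.max_send_wr = attr.cap.max_recv_wr = 1;
    attr.cap.max_send_sge = attr.cap.max_recv_sge = 1;
    attr.sq_sig_all = 1;                          /* every send generates a completion */
    attr.qp_type = IBV_QPT_RC;                    /* reliable connected queue pair */
    if (rdma_create_ep(&id, res, NULL, &attr))    /* connection id plus queue pair */
        return perror("rdma_create_ep"), 1;

    mr = rdma_reg_msgs(id, msg, sizeof msg);      /* register the buffer with the NIC */
    if (!mr)
        return perror("rdma_reg_msgs"), 1;

    if (rdma_connect(id, NULL))                   /* iWARP connection setup over TCP */
        return perror("rdma_connect"), 1;

    if (rdma_post_send(id, NULL, msg, sizeof msg, mr, 0))
        return perror("rdma_post_send"), 1;
    if (rdma_get_send_comp(id, &wc) <= 0)         /* block until the send completes */
        return perror("rdma_get_send_comp"), 1;

    rdma_disconnect(id);
    rdma_dereg_mr(mr);
    rdma_destroy_ep(id);
    rdma_freeaddrinfo(res);
    return 0;
}

Once the queue pair is connected, data moves directly between registered application buffers and the adapter; the kernel is involved only in connection setup and teardown, which is the property the MPI traffic in the benchmark relies on.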
The Gnodal GS7200 Switch enabled Intel's benchmarking of iWARP -- the standard RDMA protocol for Ethernet -- and has proven that the typical use case for InfiniBand, supporting moderate scale-out HPC, is now possible with High-Speed Ethernet. This implementation offers users the performance value and benefits of the Ethernet standard in terms of integration, management, and access to a richer set of third-party peripherals -- all of which decrease the total cost of ownership of HPC resources.
"The PAM-CRASH test proves that the Gnodal ASIC Ethernet architecture with High-Speed Ethernet, built around low-overhead RDMA Ethernet protocols and Ethernet Fabrics that provide low-latency at scale with congestion avoidance built-in, can sustain a converged network strategy in traditional HPC environment," added Dr. Taylor of Gnodal.
To learn more about the iWARP performance test and the Gnodal GS7200 Switch, visit the Gnodal Booth #4818 at SC12 or view this white paper: www.intel.com/content/dam/www/public/us/en/documents/white-papers/ethernet-pam-crash-whitepaper.pdf
The Gnodal ASIC Ethernet switch architecture features a congestion-aware performance and workload engine that allows ultra-low-latency transmission while using a dynamic, fully adaptive load-balancing mechanism to arbitrate pathways equitably among the large datasets, compute-intensive applications, and massive storage demands prevalent in HPC and Big Data environments. The 72-port, 40GbE, "fabric-in-a-box" GS0072 solution extends Gnodal's leadership in high-port-density ToR solutions and won the best-in-class award for networking at Interop 2012 (www.bestofinterop.com/winners).
Gnodal's high-performance network fabrics deliver industry-leading speed to help reduce latencies. Gnodal's highest-port-density 1U and 2U ToR switches are ideally suited for deployment in co-location environments and enterprise data centers. On ingress into a GS-Series switch, the initial latency is under 150 nanoseconds (store-and-forward), with each subsequent Gnodal switch added to the fabric incurring only 66 nanoseconds of additional latency (cut-through).
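Those per-hop figures imply a simple additive latency model: a path crossing one switch costs up to about 150 ns, and each further switch adds 66 ns, so a three-switch path comes to roughly 150 + 2 x 66 = 282 ns. The short sketch below, with illustrative hop counts, simply encodes that arithmetic.

/* Back-of-the-envelope GS-Series fabric latency from the figures above.
 * First hop ~150 ns (store-and-forward); each additional hop ~66 ns
 * (cut-through). Hop counts are illustrative, not from the benchmark. */
#include <stdio.h>

static double fabric_latency_ns(int hops)
{
    const double first_hop_ns = 150.0;  /* ingress switch, store/forward */
    const double extra_hop_ns = 66.0;   /* each further switch, cut-through */
    return hops < 1 ? 0.0 : first_hop_ns + (hops - 1) * extra_hop_ns;
}

int main(void)
{
    for (int hops = 1; hops <= 3; hops++)
        printf("%d-switch path: about %.0f ns\n", hops, fabric_latency_ns(hops));
    return 0;
}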