November 14, 2012
SALT LAKE CITY, Nov. 14 – At SC12, Gnodal Limited, the high-performance data center networking company, announced details of a multi-switch Gnodal Fabric implementation deployed in the core data center of Global Geophysical Services, Inc.
Global Geophysical Services, Inc. ("Global") is a leading provider of high-density Reservoir Grade 3D (RG3D) seismic solutions. The company's data-processing operations create detailed subsurface images of the Earth from data collected with surface geophone arrays over an area of exploration interest. That data enables Global and its clients to extract extremely detailed information on rock properties, informing exploration, fluid prediction, reservoir development and well placement.
To conduct this extremely complex imaging and analysis, Global deployed several Gnodal switches in its data center, replacing its central core switch with a distributed Gnodal Fabric solution. The new fabric met the company's goals for low latency, throughput and reliability while providing scalability and easy upgrade paths.
"After implementing a Gnodal-based fabric, we determined that previous bottlenecks had been alleviated," explained Bill Menger, head of High Performance Computing for Global. "By alleviating the congestion present in the central core solution, we were then in a position to invest in state-of-the-art SSD storage and significantly increase production capability."
Global's data center comprises more than 4,000 compute cores in 320 servers. Most compute servers connect to a series of 1/10GbE Top-of-Rack (ToR) switches, which in turn connect into the data center network powered by the Gnodal Fabric. Several servers connect directly at 10 or 20GbE to support high-throughput applications.
Global replaced its traditional chassis-based core with a distributed core composed of Gnodal switches, which allows for additional flexibility while maintaining performance. Using Gnodal's single-pane management capability, Global manages multiple switches as one integrated fabric. Gnodal scales across multiple switches with unified fabric links coupled to anti-congestion and dynamic load-balancing mechanisms, enabling the distributed core to perform at the required levels. The distributed-core model also allows for a "pay-as-you-grow" approach rather than a massive and costly up-front capital expense.
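To make the load-balancing idea concrete, here is a minimal, hypothetical Python sketch of congestion-aware path selection, in which each packet (or flow) is steered onto the least-congested fabric link. The class names and the queue-depth heuristic are illustrative assumptions, not Gnodal's actual ASIC logic.

```python
# Hypothetical sketch of congestion-aware dynamic load-balancing across
# fabric links. Illustrates the general technique only; Gnodal's ASIC
# implements this in hardware with its own heuristics.

from dataclasses import dataclass


@dataclass
class FabricLink:
    name: str
    queue_depth: int = 0  # packets currently queued for transmission


def pick_link(links: list[FabricLink]) -> FabricLink:
    """Route the next packet over the least-congested link."""
    return min(links, key=lambda link: link.queue_depth)


# Example: three inter-switch links in a distributed core.
links = [FabricLink("link-a", 12), FabricLink("link-b", 3), FabricLink("link-c", 7)]
chosen = pick_link(links)
chosen.queue_depth += 1  # the packet now occupies the chosen queue
print(f"Routing over {chosen.name}")  # -> link-b
```

Because the decision is made per packet (or per flow) against live queue state rather than a static hash, hot spots drain instead of persisting, which is the congestion-avoidance behavior the paragraph above describes.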
"Global's experience is a great example of how the Gnodal Fabric prevents network congestion in storage-intensive HPC environments, and takes advantage of today's increased storage performance by adaptively load-balancing flows between switches without realizing a performance hit," said Atchison Frazer, Gnodal's Chief Marketing Officer. "Along with delivering the predictability, low latency and performance, the Gnodal multi-switch environment is managed as a large virtual switch, enabling operations across all ports to be orchestrated from one single point and lowering administration burden."
The Gnodal ASIC Ethernet switch architecture features a congestion-aware performance and workload engine that enables ultra-low-latency transmission, using a dynamic, fully adaptive load-balancing mechanism to arbitrate pathways for the large data sets, computationally intensive applications and massive storage demands prevalent in HPC and Big Data environments. The 72-port 40GbE "fabric-in-a-box" GS0072 solution extends Gnodal's leadership in port-density ToR solutions and won the best-in-class award for networking at Interop 2012.
Gnodal's high-performance network fabric delivers industry-leading speed to reduce latency. Gnodal's highest-port-density 1U and 2U ToR switches are ideally suited for deployment in co-location environments and enterprise data centers. On ingress into a GS-Series switch, the initial latency is under 150 nanoseconds (store-and-forward), and each subsequent Gnodal switch added to the fabric incurs only 66 nanoseconds of additional latency (cut-through).
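Those figures imply a simple linear latency budget for a multi-hop fabric. The sketch below is a back-of-the-envelope estimate assuming the quoted numbers (roughly 150 ns on first ingress plus roughly 66 ns per additional cut-through hop); the linear model and the function name are illustrative assumptions, not vendor-published math.

```python
# Back-of-the-envelope one-way fabric latency, using the figures quoted
# above: ~150 ns store-and-forward on first ingress, ~66 ns per
# additional cut-through hop. Linear model assumed for illustration.

INGRESS_NS = 150   # first-switch store-and-forward latency
PER_HOP_NS = 66    # each additional cut-through switch hop


def fabric_latency_ns(switch_hops: int) -> int:
    """Estimated latency through a path crossing `switch_hops` switches."""
    return INGRESS_NS + PER_HOP_NS * (switch_hops - 1)


for hops in (1, 2, 3, 4):
    print(f"{hops} switch(es): {fabric_latency_ns(hops)} ns")
# 1 -> 150 ns, 2 -> 216 ns, 3 -> 282 ns, 4 -> 348 ns
```

Under these assumptions, even a four-switch traversal stays well under half a microsecond, which is consistent with the press release's claim that the distributed core can replace a single chassis without sacrificing latency.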
Source: Gnodal Ltd.