December 10, 2008
Foundry HPC solution provides an unprecedented combination of wire-speed networking and industry-leading scale for efficiency and value
SANTA CLARA, Calif., and DARMSTADT, Germany, Dec. 10 -- Foundry Networks, Inc., a performance and total solutions leader for end-to-end switching and routing, today announced that the German-based Gesellschaft fur Schwerionenforschung (GSI) has selected its high-performance computing (HPC) networking solution for its uncompromised performance, efficiency and value. Foundry's 10 gigabit Ethernet (10GbE) switching solution for the GSI's particle accelerator network and facility allows wire-speed handling of the enormous volume of data resulting from experiments with the GSI's own revolutionary particle accelerator and with the Large Hadron Collider at CERN in Geneva.
Established in 1969, the GSI conducts scientific research aimed at understanding the structure and behavior of the world that surrounds us. The GSI operates a world-leading accelerator facility for heavy-ion beams, used for fundamental research. At the GSI, over 300 scientific researchers and engineers, joined by more than 1,000 guest researchers per year from around the world, conduct research into areas ranging from nuclear and atomic physics to plasma and materials research, as well as biophysics and cancer therapy. By leveraging Foundry's performance-focused switches, the GSI is able to efficiently transport and process data from the collisions of subatomic particles on about 2,500 processing cores with a combined computing power of 20 teraflops. The input and results data are stored on about 70 servers with a capacity of 700 terabytes.
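As a rough sanity check of the figures quoted above (2,500 cores delivering roughly 20 teraflops in aggregate), the implied per-core throughput can be computed directly; the calculation below is a back-of-the-envelope sketch, not a statement about the actual hardware configuration at the GSI:

```python
# Implied per-core performance of the GSI cluster described above:
# ~2,500 processing cores with ~20 teraflops of aggregate compute.
cores = 2_500
total_flops = 20e12  # 20 teraflops

flops_per_core = total_flops / cores
print(f"{flops_per_core / 1e9:.0f} GFLOPS per core")  # prints "8 GFLOPS per core"
```

About 8 gigaflops per core, a plausible figure for commodity server processors of that era.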
The GSI decided on Foundry's high-performance networking solution to ensure congestion-free communication for its large group of computing and data nodes. Foundry's BigIron RX-32 Layer 2/3 backbone switch was capable of fulfilling these extreme switching capacity demands through industry-leading port density. The switch delivers maximum flexibility and throughput, with up to 1,536 ports of GbE or 128 ports of 10GbE in a single chassis at wire speed. The research institute is also extending its existing HPC network with Foundry's FastIron Edge X and FastIron SuperX/SX Layer 2/3 switches while upgrading several links to 10 gigabit Ethernet.
At the Large Hadron Collider at CERN, the GSI leads the scientific program for the heavy-ion experiment known as 'A Large Ion Collider Experiment' (ALICE). To cope with the large quantities of data produced by this program, the GSI participated in the construction of the World Wide Grid, an evolution of the World Wide Web. The GSI's datacenter is an established part of the Grid for analyzing data from ALICE. GSI staff expects that data volumes will reach approximately two million gigabytes per year, or the equivalent of about 3 million CDs.
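The CD comparison above can be verified with simple arithmetic; the 700 MB capacity used below is an assumption (a standard 80-minute CD-R), since the press release does not state which figure it used:

```python
# Sanity check of the data-volume comparison quoted above:
# ~2 million gigabytes per year expressed in CDs.
annual_gb = 2_000_000      # approx. yearly data volume in GB
cd_capacity_mb = 700       # assumed capacity of one CD-R in MB

cds = annual_gb * 1_000 / cd_capacity_mb  # convert GB to MB, then divide
print(f"{cds / 1e6:.1f} million CDs")  # prints "2.9 million CDs"
```

About 2.9 million CDs, consistent with the "about 3 million" figure in the release.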
"GSI is a high-technology institution that puts extremely challenging demands on its network," said Dr. Mathias Munch, responsible for the IT infrastructure at the GSI. "Foundry Networks' advanced switching technology is able to meet our unique performance needs. We've also been very satisfied with the support services provided by Foundry Networks and our system integrator Pan Dacom."
"We are pleased that the GSI has selected Foundry's incomparable industry-leading HPC solution to help advance important particle research," said Ken Cheng, vice president and general manager of Foundry's High-End and Service Provider Systems Business Unit. "The solution's unprecedented density, scalability and wire-speed capabilities will help maximize the power of the HPC network while allowing class-leading total cost of ownership and energy and space efficiency."
About Gesellschaft fur Schwerionenforschung
The Gesellschaft fur Schwerionenforschung (GSI) in Darmstadt is a center for fundamental research financed by the German state and the Land of Hessen. It is a member of the Helmholtz Association. The mission of its 1,050 staff is to construct and operate accelerator facilities as well as conduct research on heavy ions. Every year, more than 1,000 guest scientists come to the GSI, which gives them access to its research facilities. GSI operates a large accelerator facility for heavy-ion beams that is in many respects unique worldwide. The research program at GSI covers a broad range of activities extending from nuclear and atomic physics to plasma and materials research to biophysics and cancer therapy. Probably the best-known results are the discovery of six new chemical elements and the development of a new type of tumor therapy using ion beams. With these and numerous other results, GSI holds a leading position internationally in ion beam research. GSI physicists, together with scientists from universities and research institutes in Germany and abroad, are building a new international accelerator facility, the Facility for Antiproton and Ion Research (FAIR), planned for completion in 2015. A broad spectrum of scientific areas will be addressed at the new facility, including hitherto unsolved questions about the structure of matter and the evolution of the universe. Further information is available at http://www.gsi.de/portrait/index_e.html
About Foundry Networks
Foundry Networks, Inc. (Nasdaq:FDRY) is a leading provider of high-performance enterprise and service provider switching, routing, security and Web traffic management solutions, including Layer 2/3 LAN switches, Layer 3 Backbone switches, Layer 4-7 application switches, wireless LAN and access points, metro and core routers. Foundry's customers include the world's premier ISPs, metro service providers, and enterprises, including e-commerce sites, universities, entertainment, health and wellness, government, financial and manufacturing companies. For more information about the company and its products, call 1.888.TURBOLAN or visit www.foundrynet.com.
Source: Foundry Networks, Inc.