April 29, 2008
SANTA CLARA, Calif., April 29 -- Woven Systems, Inc., the leading innovator of Ethernet Fabric switching solutions based on its patented vSCALE technology for datacenters and high-performance computing (HPC) clusters, today announced that the prestigious Max Planck Institute for Gravitational Physics (Albert Einstein Institute, Hannover, Germany) is using Woven's EFX 1000 10 Gigabit Ethernet (10 GE) Fabric Switch and TRX 100 Ethernet Switch in a large HPC cluster to search for gravitational waves predicted by Albert Einstein's General Theory of Relativity. The Woven Ethernet Fabric provides access to more than one petabyte of data supplied by a worldwide network of gravitational wave detectors. The data is distributed to compute cluster nodes via the Woven all-Ethernet solution.
"Gravitational wave research is one of the most exciting fields of science. It will open a completely new window to the universe, and requires very large-scale and sophisticated computing technologies. Our research is pushing the state of the art in computational analysis, and Woven's innovative technology gives us a higher-performing and more flexible 10 GE network than traditional HPC switch suppliers," says Professor Bruce Allen, director of the Institute. "The price/performance and flexibility of the Woven 10 Gigabit Ethernet Fabric is unmatched by any other switching solution we could find. This allows us to get more computing cycles for our money. It also makes it easier to evolve and upgrade the system in the future, as our needs and hardware base change."
During the Institute's extensive acceptance testing, the Woven Ethernet Fabric achieved over 30 Teraflops of performance on the HPC Linpack benchmark, which places the system on par with the top 50 entries of the November 2007 www.Top500.org list. Based on Woven's vSCALE Dynamic Congestion Avoidance capability in a non-blocking 10 Gigabit Ethernet Fabric, the cluster reached 64 percent of its theoretical peak performance. "The HPC Linpack experts we consulted tell us that they have never seen such high Gigabit Ethernet efficiencies on such a large cluster," Professor Allen adds.
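The two figures quoted above (30 Teraflops measured, 64 percent efficiency) imply a theoretical peak for the cluster. A minimal back-of-the-envelope sketch, using only the numbers in the press release (the implied Rpeak is a derived value, not an officially published one):

```python
# Linpack efficiency is Rmax / Rpeak: measured performance over theoretical peak.
rmax_tflops = 30.0      # measured HPL performance quoted above (TFLOPS)
efficiency = 0.64       # fraction of theoretical peak achieved, per the release

# Solve for the implied theoretical peak of the cluster.
rpeak_tflops = rmax_tflops / efficiency
print(f"Implied theoretical peak: {rpeak_tflops:.1f} TFLOPS")  # ~46.9 TFLOPS
```

Spread over the more than 5,000 cores mentioned later in the release, that peak works out to roughly 9-10 Gigaflops per core, consistent with quad-core Intel processors of that era.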
The Institute's datacenter, located in Hannover, Germany, is used for research in the division of Observational Relativity and Cosmology. The most active research area develops and implements data analysis algorithms for evidence of the gravitational waves predicted by Einstein's General Theory of Relativity. The Institute is part of an international collaboration that shares data from the latest generation of sensitive detectors based in the USA (LIGO), Italy (VIRGO), and Germany (GEO). The Institute also helps to operate the distributed computing project Einstein@Home, which searches for gravitational wave signals from pulsars.
Max Planck's fully non-blocking 10 GE Woven Fabric consists of a single EFX 1000 10 Gigabit Ethernet Fabric Switch, configured with 144 10 GE ports, and 34 TRX 100 "Top-of-Rack" Ethernet Switches. Each 48-port TRX 100 provides Ethernet connectivity for individual servers with Intel Quad-Core processors. Each server has a dedicated 1 Gbps Ethernet port on the TRX 100, which has four separate 10 GE uplinks to the EFX 1000 at the core of Woven's 10 Gigabit Ethernet Fabric. The EFX 1000 also provides Ethernet connectivity to a large storage system housing a petabyte of measurement data. Collectively, the system has a storage capacity in excess of 1,100 terabytes and more than 5,000 CPU cores.
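The port counts above fully determine the aggregate bandwidth of the fabric. A short sketch tallying them (all figures taken directly from the release; the per-switch arithmetic is the only thing added here):

```python
# Aggregate bandwidth of the described two-tier fabric, from the stated port counts.
num_tor_switches = 34       # TRX 100 top-of-rack switches
server_ports_per_tor = 48   # 1 GbE server-facing ports on each TRX 100
uplinks_per_tor = 4         # 10 GE uplinks from each TRX 100 to the EFX 1000
core_ports = 144            # 10 GE ports configured on the EFX 1000

server_bw_gbps = num_tor_switches * server_ports_per_tor * 1   # edge capacity
uplink_bw_gbps = num_tor_switches * uplinks_per_tor * 10       # edge-to-core capacity
used_core_ports = num_tor_switches * uplinks_per_tor           # EFX 1000 ports consumed

print(f"Server-facing bandwidth: {server_bw_gbps} Gbps")   # 1632 Gbps
print(f"Uplink bandwidth:        {uplink_bw_gbps} Gbps")   # 1360 Gbps
print(f"Core ports in use:       {used_core_ports} of {core_ports}")
```

The 136 uplinks fit within the 144-port core with ports to spare for the storage system, and the 10 GE fabric core itself is non-blocking; the modest 48:40 edge oversubscription at each top-of-rack switch is typical for Gigabit-attached compute nodes.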
"The Max Planck Institute showcases Woven's unique 10 Gigabit Ethernet Fabric technology, which is now part of this important research at the leading edge of astronomy and cosmology," says Joe Ammirato, Woven's vice president of marketing. "The EFX 1000 10 Gigabit Ethernet Fabric Switch was designed specifically for advanced projects like this, which require non-blocking 10 GE throughput with ultra-low latency and jitter on a large scale."
About the Max Planck Institute for Gravitational Physics
The Max Planck Institute for Gravitational Physics in Germany, also known as the Albert Einstein Institute (www.aei.mpg.de/english/), is the world's largest research institute devoted to studying gravitational physics. Research at the Institute is aimed at investigating Einstein's General Theory of Relativity through the fields of mathematics, quantum gravity, astrophysical relativity and gravitational wave astronomy. The Institute, which was founded in 1995, has a theoretical branch located in Potsdam and an experimental branch located in Hannover.
About Professor Bruce Allen
Bruce Allen is a physicist and director of the Max Planck Institute for Gravitational Physics in Hannover, Germany, and a professor at the University of Wisconsin-Milwaukee. He also leads the Einstein@Home project for the LIGO Scientific Collaboration. Professor Allen has a B.S. in physics from the Massachusetts Institute of Technology and a Ph.D. in gravitation and cosmology from Cambridge University, England (where his research advisor was Professor Stephen Hawking). He did postdoctoral research work at the University of California at Santa Barbara, Tufts University, and the Observatoire de Paris in Meudon. Before joining the Max Planck Institute, Professor Allen was on the faculty at Tufts University and the University of Wisconsin-Milwaukee. He has done research on models of the very early universe, studying inflationary cosmology and cosmic strings. Allen currently leads a research group working on the detection of gravitational waves.
About Woven Systems
Woven Systems is an innovative network infrastructure provider that delivers the industry's first scalable 10 Gigabit Ethernet Fabric switching solutions for datacenters. Fully compliant with Ethernet standards, Woven's patented vSCALE packet processing technology featuring Dynamic Congestion Avoidance helps customers optimize the network performance and efficiency for their increasingly strategic datacenter operations. For more information, visit Woven Systems on the Web at www.wovensystems.com.
Source: Woven Systems