December 13, 2010
Innovative hot-water cooled supercomputer to deliver up to three petaflops of peak performance when online in 2012
GARCHING/MUNICH/STUTTGART, Germany, Dec. 13 -- The Leibniz Supercomputing Centre (LRZ) in Garching, Germany, has signed a contract with IBM to develop and build a new general purpose supercomputer with next generation Intel Xeon processors to support advanced scientific research. The system will use innovative hot water cooling technology to consume 40 percent less energy than a comparable air-cooled machine.
Named "SuperMUC," the new system is part of the Partnership for Advanced Computing in Europe (PRACE) HPC infrastructure for researchers and industrial institutions throughout Europe. It will enable LRZ's scientific community to test theories, design experiments and predict outcomes as never before. The supercomputer will be jointly funded by the German federal government and the state of Bavaria.
LRZ supports a wide spectrum of research, from cosmology and the origins of the universe to seismology and the prediction of earthquakes. To make its performance available to a broad range of users with diverse applications, LRZ will build the general purpose system on the IBM System x iDataPlex with more than 14,000 next generation Intel Xeon processors. SuperMUC will achieve a peak performance of up to three petaflops, equivalent to the work of more than 110,000 PCs. Put another way: three billion people, each using a pocket calculator, would have to perform one million operations per second apiece to match SuperMUC's performance.
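The equivalence above is simple arithmetic; a minimal Python sketch, using only the release's own round figures, makes it concrete:

```python
# Back-of-the-envelope check of the release's arithmetic.
# All figures are the press release's own round numbers.

peak_flops = 3e15              # 3 petaflops = 3e15 operations per second
people = 3e9                   # three billion calculator users
print(peak_flops / people)     # 1,000,000 ops/sec per person, as claimed

pcs = 110_000                  # "more than 110,000 PCs"
print(peak_flops / pcs / 1e9)  # ~27 gigaflops implied per PC
```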
SuperMUC will use innovative water cooling to eliminate the need for conventional datacentre cooling systems. Up to 50 percent of an average air-cooled datacentre's energy consumption and carbon footprint today comes not from computing, but from powering the cooling systems needed to keep the servers from overheating. SuperMUC combines water cooling -- which typically removes heat 4,000 times more efficiently than air -- with energy efficient Intel processors and application oriented, dynamic systems management to reduce energy consumption even further.
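As a rough illustration of how the two headline numbers relate, the sketch below models an air-cooled facility in which cooling accounts for 50 percent of total draw, then applies the quoted 40 percent saving to the facility total. The 3 MW IT load is a hypothetical figure chosen for the example, not a SuperMUC specification.

```python
# Hypothetical illustration of the cooling claims; the 3 MW IT load is
# assumed for the example, not taken from SuperMUC's actual specs.

it_load_mw = 3.0                                    # assumed compute power draw
air_cooled_total = it_load_mw / (1 - 0.5)           # cooling = 50% of total draw
water_cooled_total = air_cooled_total * (1 - 0.4)   # "40 percent less energy"

print(f"air-cooled facility: {air_cooled_total:.1f} MW")    # 6.0 MW
print(f"hot-water cooled:    {water_cooled_total:.1f} MW")  # 3.6 MW
```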
"SuperMUC will provide previously unattainable energy efficiency along with peak performance by exploiting the massive parallelism of Intel's multicore processors and leveraging the innovative hot water cooling technology pioneered by IBM. This approach will allow the industry to develop ever more powerful supercomputers while keeping energy use in check," said Prof. Dr. Arndt Bode, chairman of the board of directors of LRZ.
As HPC systems continually raise performance, it is essential that improvements in energy efficiency keep pace. Against this background, LRZ, IBM and Intel are creating a new, more sustainable approach to HPC. IBM contributes its experience in developing and delivering high-end supercomputing systems. In particular, the IBM development team in Boeblingen brings deep competence in energy efficiency, proven in comparable projects such as the IBM Aquasar supercomputer, developed by the IBM labs in Boeblingen and Zurich.
Drawing on its industry-leading HPC experience, Intel is collaborating with IBM on the energy efficient design as well as providing the processors that will drive the machine. SuperMUC's workloads will be handled entirely by the high performance Intel processors, without special accelerators. LRZ, for its part, brings its long-standing experience in operating and exploiting high-end supercomputing systems.
The LRZ is the computer center for Munich's universities and for the Bavarian Academy of Sciences and Humanities. It operates the scientific data network in Munich, offers a variety of data services, and provides high-end computing facilities for the scientific community in Germany and beyond.
"SuperMUC is part of the tradition of supercomputers at the Bavarian Academy of the sciences, delivering excellent results for a broad spectrum of scientific applications," said Prof. Dr. Dietmar Willoweit, president of the Bavarian Academy of Sciences and the Humanities. "We are very excited to be teaming with IBM, Intel and other industry leaders to continue the Academy's legacy of scientific excellence and leadership."
SuperMUC is the largest high performance computing system that IBM and Intel have collaborated on.
"With the new supercomputer, the German and European research community is getting a push to be on the forefront of international competition," said Martin Jetter, chairman of the board, IBM Germany. "Continued investment in research and development will allow us to see top research results in the future, in return. I am especially pleased by the fact that the new system is being designed and developed by experts from the IBM R&D center in Boeblingen in collaboration with their colleagues in the US and Asia."
Dr. Rajeeb Hazra, general manager of high performance computing, Intel, said: "Intel's unique partnership with IBM, together with our next generation microprocessor technology, has led to the development of the most innovative, capable and energy efficient supercomputing solution. We are thrilled to be part of this collaboration with IBM and LRZ and believe that it will set a new standard for general purpose academic and government supercomputing installations."
For more information about IBM (NYSE: IBM), visit www.ibm.com.
Source: IBM; Leibniz Supercomputing Centre