December 15, 2010
GARCHING/MÜNCHEN/STUTTGART, Germany, Dec. 15 -- The Partnership for Advanced Computing in Europe (PRACE) welcomed the upcoming availability of the third Tier-0 system for the PRACE Research Infrastructure. The contract for the next supercomputer at the Leibniz Supercomputing Centre (Leibniz-Rechenzentrum, LRZ) in Germany -- part of the Gauss Centre for Supercomputing (GCS) -- named "SuperMUC," was signed on December 13 in the presence of Bavarian State Minister Dr. Wolfgang Heubisch; Prof. Dr. Arndt Bode, chairman of the board of directors of the Leibniz Supercomputing Centre of the Bavarian Academy of Sciences and Humanities; and Martin Jetter, chairman of the board, IBM Germany.
This is the third world-class Tier-0 supercomputer announced by PRACE. SuperMUC will start operation in mid-2012 and will be one of the fastest general-purpose supercomputers in the world, with 3 petaflop/s peak performance, 320 terabytes of main memory and 12 petabytes of permanent storage. SuperMUC also features a new cooling concept that makes it highly energy efficient.
"With SuperMUC as the third Tier-0 system, the PRACE Infrastructure sets another milestone in providing the European science community with world-class supercomputing resources. With this system, PRACE will increase its overall Tier-0 capability by more than 5 times since the creation of the infrastructure in 2010," said Dr. Thomas Eickermann, PRACE project manager (Forschungszentrum Jülich, Germany).
Although SuperMUC will comprise more than 110,000 processor cores, stable operation and excellent scaling are expected thanks to its architecture. Scientists will be able to use their established programming models without changes on this new supercomputer. Through the PRACE Research Infrastructure, SuperMUC offers new possibilities for scientists from the 20 European PRACE member states.
The new cooling concept is revolutionary. Active components such as processors and memory are directly cooled with water that can reach temperatures of up to 45 degrees Celsius. This "high-temperature liquid cooling," combined with innovative system software, keeps the additional energy needed to operate the system very moderate. Moreover, the waste heat will be reused to heat all LRZ buildings.
"SuperMUC will deliver previously unachieved energy efficiency for green computing as well as outstanding compute performance by employing the massive parallelism of general-purpose multi-core processors," stated Prof. Bode, head of LRZ.
"With the new supercomputer, the German and European research community is getting a push to be on the forefront of international competition," said Martin Jetter, chairman of the board, IBM Germany.
"Continued investment in research and development will, in return, allow us to see top research results in the future. I am especially pleased that the new system is being designed and developed by experts from the IBM R&D center in Boeblingen in collaboration with their colleagues in the US and Asia," Jetter continued.
The PRACE Research Infrastructure's supercomputers are used for research in all fields of science: simulating the evolution of the universe under the influence of dark matter, modeling the Earth's interior and the propagation of earthquakes, and computing the dynamical properties of diverse systems from engineering and nature, down to biological systems and medical scenarios.
The investment costs for SuperMUC -- including operational and power costs for five to six years -- total 83 million euros, funded jointly by the State of Bavaria and the German federal government, as are the additional 50 million euros for the extension of LRZ's buildings. In addition, Bavaria will support accompanying projects in the science of high-performance computing.
Bavaria's Minister of Science Dr. Wolfgang Heubisch called SuperMUC an investment in the future: "Powerful computers and software are today the key to scientific and technological competitiveness. With this new supercomputer, the Leibniz Supercomputing Centre in Garching will be a pioneer in energy-optimized computer technology."