December 13, 2012
Germany is home to some serious computing power. Based on the November 2012 TOP500 list, three of the top ten European supercomputers are in Germany, including the number one, number two and number nine systems. Worldwide, these iron beauties clock in at fifth, sixth and 27th place, respectively.
The German Gauss Centre for Supercomputing (GCS) owns and operates all three machines: JUQUEEN, the 5 petaflop IBM BlueGene/Q, installed at the Jülich Supercomputing Centre (number one); SuperMUC, the 3.2 petaflop IBM iDataPlex, housed at Leibniz Supercomputing Centre (LRZ) in Garching near Munich (number two); and HERMIT, the 1 petaflop Cray XE6 located at the High Performance Computing Center Stuttgart (number nine).
JUQUEEN is the first supercomputer in Europe to reach 5 petaflops of peak compute performance – roughly equivalent to the power of 100,000 PCs. The open science system, which is part of the PRACE pan-European research infrastructure, opens up new possibilities for grand scientific discoveries.
After an upgrade expanded the system from 8 to 24 racks, JUQUEEN moved up three spots from last June's TOP500 list, while SuperMUC took two steps down. According to the Gauss Centre, HERMIT, which moved down three spots since the last list, "continues to be the world's fastest supercomputer used for industrial development, research and science."
Like many of its American counterparts, the GCS directorate makes it clear that while these systems are ranked based on the Linpack benchmark, their primary objectives are energy efficiency and sustained performance.
JUQUEEN, a BlueGene/Q supercomputer built on IBM POWER architecture, was designed to meet both of these goals. With a performance/power ratio of approximately 2 gigaflops per watt, JUQUEEN is five times more energy efficient than its predecessor, JUGENE. A direct water cooling system that removes heat from the processors is part of the energy-efficient blueprint.
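As a back-of-the-envelope check, the cited performance/power ratio implies a total power draw in the low megawatts. The sketch below combines the approximate 2 gigaflops-per-watt figure with JUQUEEN's 4.14 petaflop Linpack score from the list; the resulting power figure is inferred, not a published specification:

```python
# Rough check of JUQUEEN's energy efficiency figures from the article.
linpack_gflops = 4.14e6   # 4.14 petaflops, expressed in gigaflops
gflops_per_watt = 2.0     # approximate performance/power ratio

# Implied power draw at full Linpack load, in megawatts.
implied_power_mw = linpack_gflops / gflops_per_watt / 1e6
print(f"Implied power draw: {implied_power_mw:.2f} MW")  # ~2.07 MW
```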
The machine's 393,216 compute cores are tasked with solving a range of difficult problems as Prof. Thomas Lippert, Director of JSC, explains:
"JUQUEEN is targeted to tackle comprehensive and complex scientific questions, called Grand Challenges," notes Lippert. "Projects from various scientific areas can profit from the system's performance, e.g. in the areas of neuroscience, computational biology or energy and climate research. It enables complicated calculations in quantum physics, which were not possible before."
The second-place European finisher, SuperMUC, was built with the same goals and is also exceptionally energy-friendly. Prof. Arndt Bode, director of LRZ, explains that hot-water cooling technology was key to achieving a PUE value of 1.1, a ratio he says is unmatched among systems of similar performance.
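PUE (Power Usage Effectiveness) is simply total facility power divided by the power drawn by the IT equipment itself, so a PUE of 1.1 means cooling and other overheads add only about 10 percent. A minimal sketch, with hypothetical power figures chosen purely for illustration:

```python
# PUE = total facility power / IT equipment power.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

# Hypothetical example: 1000 kW of IT load plus 100 kW of
# cooling/infrastructure overhead gives SuperMUC's reported ratio.
print(pue(1100.0, 1000.0))  # 1.1
```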
And just as important, it was designed to be user-friendly. "We run more than 150 different applications on our system per year, thus an instruction set allowing easy adaptation of user software was a core requirement on the system architecture," says Prof. Bode.
All three GCS supercomputing systems are part of the German "Tier-0" research system and contribute to the European research infrastructure, the Partnership for Advanced Computing in Europe (PRACE). In total, the Gauss Centre for Supercomputing provides more than 9 petaflops of computing power to a wide array of research projects.
The top 10 European systems on the November 2012 TOP500 list.
1) JUQUEEN - Germany
Overall position: 5
Site: Jülich Supercomputing Centre
System: IBM Blue Gene/Q
Linpack: 4.14 petaflops
Peak: 5.03 petaflops
2) SuperMUC - Germany
Overall position: 6
Site: Leibniz Supercomputing Centre
System: IBM iDataPlex
Linpack: 2.90 petaflops
Peak: 3.19 petaflops
3) Fermi - Italy
Overall position: 9
Site: CINECA
System: IBM BlueGene/Q
Linpack: 1.73 petaflops
Peak: 2.10 petaflops
4) Curie thin nodes - France
Overall position: 11
System: Bull Bullx B510
Linpack: 1.36 petaflops
Peak: 1.67 petaflops
5) Blue Joule - UK
Overall position: 16
Site: Science and Technology Facilities Council - Daresbury Laboratory
System: IBM BlueGene/Q
Linpack: 1.21 petaflops
Peak: 1.47 petaflops
6) Tera-100 - France
Overall position: 20
Site: Commissariat a l'Energie Atomique (CEA)
System: Bull Bullx
Linpack: 1.05 petaflops
Peak: 1.25 petaflops
7) DiRAC - UK
Overall position: 23
Site: University of Edinburgh
System: IBM BlueGene/Q
Linpack: 1.04 petaflops
Peak: 1.26 petaflops
8) Lomonosov - Russia
Overall position: 26
Site: Moscow State University - Research Computing Center
System: T-Platforms T-Blade
Linpack: 0.902 petaflops
Peak: 1.70 petaflops
9) HERMIT - Germany
Overall position: 27
Site: HWW/Universitaet Stuttgart
System: Cray XE6
Linpack: 0.831 petaflops
Peak: 1.044 petaflops
10) Unnamed - France
Overall position: 31
System: IBM BlueGene/Q
Linpack: 0.690 petaflops
Peak: 0.839 petaflops
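The list figures also let readers compute Linpack efficiency (measured Linpack divided by theoretical peak) directly. A quick sketch over the three GCS machines, using only the petaflops values given above, also confirms the "more than 9 petaflops" combined capacity the Gauss Centre cites:

```python
# (Linpack, peak) in petaflops for the three GCS systems, from the list above.
systems = {
    "JUQUEEN":  (4.14, 5.03),
    "SuperMUC": (2.90, 3.19),
    "HERMIT":   (0.831, 1.044),
}

for name, (linpack, peak) in systems.items():
    print(f"{name}: {linpack / peak:.1%} of peak")

# Combined peak capacity across the three GCS machines.
print(f"Total peak: {sum(p for _, p in systems.values()):.2f} petaflops")
```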