July 27, 2012
BERLIN, Germany, July 20 -- The Leibniz Supercomputing Centre in Garching near Munich (LRZ), one of GCS’s three national supercomputing centres, today inaugurated SuperMUC, Europe’s most powerful supercomputer to date. With SuperMUC, which offers a peak performance of about 3 Petaflops (3 quadrillion floating point operations per second), LRZ joins the other two members of the German Gauss Centre for Supercomputing, HLRS Stuttgart and JSC Jülich, in operating supercomputing infrastructure in the petascale performance range. By adding SuperMUC to its HPC system platform, GCS now provides the largest and most powerful supercomputing infrastructure in Europe for research, scientific, and industrial tasks.
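As a rough guide to where such a figure comes from, theoretical peak performance is usually estimated as cores × clock rate × floating-point operations per cycle. The minimal sketch below uses the article’s core count but assumes a 2.7 GHz clock and 8 double-precision flops per cycle for a Sandy Bridge-class core; both figures are illustrative assumptions, not official SuperMUC specifications.

```python
# Back-of-envelope estimate of theoretical peak performance:
#   peak flops = cores * clock rate * flops per core per cycle
# ASSUMPTIONS (illustrative, not official SuperMUC specifications):
# a Sandy Bridge-class core at 2.7 GHz sustaining 8 DP flops/cycle via AVX.

cores = 155_000          # core count quoted in the article
clock_hz = 2.7e9         # assumed nominal clock rate
flops_per_cycle = 8      # assumed double-precision flops per cycle

peak_flops = cores * clock_hz * flops_per_cycle
print(f"Estimated peak: {peak_flops / 1e15:.2f} Petaflops")  # ~3.35 Petaflops
```

Under these assumptions the estimate lands in the same ballpark as the quoted figure of about 3 Petaflops.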
In a festive ceremony that also celebrated the 50th anniversary of LRZ, Prof. Annette Schavan, Federal Minister of Education and Research, and State Minister Wolfgang Heubisch, together with Prof. Karl-Heinz Hoffmann, President of the Bavarian Academy of Sciences and Humanities, and Prof. Arndt Bode, Director of LRZ, officially put LRZ’s new HPC system SuperMUC into operation for use by researchers, scientists, and industrial users. As Europe’s fastest HPC system, SuperMUC ranks 4th on the TOP500 list (released 18 June 2012), which enumerates the most powerful supercomputers in the world.
“Today is a great day for science in all of Europe,” said Minister Schavan, who was proud to see the last of the three member centres of the German Gauss Centre for Supercomputing equipped with a supercomputer delivering petascale performance. “Just like the HPC systems in Stuttgart and Jülich, LRZ’s new supercomputer will be available not only to German scientists but to scientists from all over Europe. With SuperMUC, we underscore our ambition to remain at the forefront as an attractive and strong partner for supercomputing in Europe.”
Prof. Heinz-Gerd Hegering, Chairman of GCS, added: “The commissioning of SuperMUC marks a milestone in GCS’s mission to adopt a leading role in European high performance computing. In a joint effort by the German Federal Ministry of Education and Research (BMBF) and the three states hosting our national HPC centres, the Free State of Bavaria, Baden-Württemberg, and North Rhine-Westphalia, we set out to provide systems of petascale performance to the German science and research community. We delivered! GCS now offers the largest and most powerful supercomputer infrastructure in Europe, and a vast range of industrial and research activities in various disciplines will benefit from it.”
As a former Director of LRZ, GCS Chairman Prof. Hegering was given the honor of delivering the ceremonial address at the day’s festive event in Garching. Having been a member of the Munich/Garching science community since 1968, an impressive span of 44 years, Prof. Hegering was arguably better qualified than anyone to deliver a short yet engaging review of LRZ’s 50 years of history.
LRZ’s new supercomputer SuperMUC, an IBM System x iDataPlex comprising 155,000 cores and offering 330 Terabytes of main memory, is not only fast but also extremely energy efficient. A revolutionary new form of hot-water cooling technology helps achieve a PUE (Power Usage Effectiveness) value of 1.1, a ratio currently unmatched by any x86 system of comparable performance. Beyond these facts, LRZ’s flagship computer excels in one more respect: the system has been designed as a general-purpose HPC system, allowing exceptionally versatile deployment. “SuperMUC is extraordinarily user friendly,” stresses Prof. Dr. Arndt Bode, Director of LRZ. “We run more than 100 different applications on our system per year, so an instruction set allowing easy adaptation of user software was a core requirement for the system architecture.” SuperMUC is being used for a wide spectrum of science and research tasks, ranging from medical and engineering & energy applications to astrophysics.
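For readers unfamiliar with the metric, PUE is simply the ratio of total facility power to the power drawn by the IT equipment alone, so a PUE of 1.1 means roughly 10% overhead for cooling and power distribution. A minimal sketch of the calculation follows; the wattages are placeholder values chosen for illustration, not LRZ measurements.

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# The wattages below are placeholder values, not LRZ measurements.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Ratio of total facility power draw to IT equipment power draw."""
    return total_facility_kw / it_equipment_kw

it_load_kw = 3000.0     # hypothetical IT load
overhead_kw = 300.0     # hypothetical cooling and power-distribution overhead
print(f"PUE = {pue(it_load_kw + overhead_kw, it_load_kw):.2f}")  # PUE = 1.10
```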
As a supercomputer at the highest performance level, SuperMUC qualifies as a “Tier-0” system in the European research infrastructure offered through the Partnership for Advanced Computing in Europe (PRACE), of which the Gauss Centre for Supercomputing, and thus LRZ as one of the three GCS centres, is a hosting member. Through PRACE, these Tier-0 systems are made available for large-scale scientific projects to users in Europe and beyond. SuperMUC was already included in the latest PRACE Regular Call for Proposals (April 2012), and 200 million of SuperMUC’s core hours (out of 1,134 million core hours for the entire Call) were allocated to top-level research projects in three disciplines: Astrophysics, Engineering & Energy, and Chemistry & Materials.
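To give those allocation numbers some scale, the short calculation below, using only the figures quoted above, works out SuperMUC’s share of the Call and what 200 million core hours amount to in full-machine time:

```python
# Scale of the PRACE allocation, using only the figures quoted in the text.

supermuc_core_hours = 200e6   # core hours allocated on SuperMUC
call_core_hours = 1_134e6     # core hours in the entire Call
cores = 155_000               # SuperMUC core count

share = supermuc_core_hours / call_core_hours
machine_hours = supermuc_core_hours / cores   # full-machine equivalent

print(f"Share of the Call: {share:.1%}")                    # ~17.6%
print(f"Full-machine time: {machine_hours / 24:.0f} days")  # ~54 days
```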
Both SuperMUC and the associated extension of the LRZ buildings, which includes a new state-of-the-art visualization centre, were co-funded by the BMBF (German Federal Ministry of Education and Research) and the Free State of Bavaria. The operational costs are covered exclusively by the Free State of Bavaria. Accompanying projects are funded by the European Union, the BMBF, and additional third-party sources.
The Gauss Centre for Supercomputing (GCS) consolidates the three national supercomputing centres HLRS (High Performance Computing Center Stuttgart), JSC (Jülich Supercomputing Centre), and LRZ (Leibniz Supercomputing Centre, Garching) into Germany’s Tier-0 supercomputing institution. Together, the three centres provide the largest and most powerful supercomputer infrastructure in Europe to serve a wide range of industrial and research activities in various disciplines. They also provide top-class training and education for the national as well as the European High Performance Computing (HPC) community. GCS is the German member of PRACE (Partnership for Advanced Computing in Europe), an international non-profit association consisting of 24 member countries whose representative organizations create a pan-European supercomputing infrastructure, providing access to computing and data management resources and services for large-scale scientific and engineering applications at the highest performance level.
GCS has its headquarters in Berlin/Germany.
In quieter times, sounding the bell of funding big science with big systems tends to resonate further than when ears are already burning with sour economic and national security news. For exascale's future, however, the time could be ripe to instill some sense of urgency...
In a recent solicitation, the NSF laid out its needs for furthering scientific and engineering infrastructure with new tools that go beyond top performance. Having already delivered systems like Stampede and Blue Waters, the agency is turning an eye to solving data-intensive challenges. We spoke with the agency's Irene Qualters and Barry Schneider about...
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to absorb peak computational demand that cannot be met by their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a high-energy particle detector at the Large Hadron Collider (LHC).
May 23, 2013
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what it calls ‘Climate in a Box,’ a system it describes as a desktop supercomputer.
May 22, 2013
At some point in the not-too-distant future, building powerful, miniature computing systems may be considered a hobby for high schoolers, just as robotics or even Lego building are today. Recent advances with Raspberry Pi computers could make that possible.
May 16, 2013
When it comes to the cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
May 15, 2013
Supercomputers at the Department of Energy’s National Energy Research Scientific Computing Center (NERSC) have tackled important computational problems such as the collapse of the atomic state and the optimization of chemical catalysts, and are now being used to model popping bubbles.