At ISC this year, in addition to multiple presentations on the forward momentum of European HPC, there was a special event dedicated to the German HPC roadmap. On Tuesday, Dr. Michael Resch, chairman of the Gauss Centre for Supercomputing (GCS) and director of HLRS, provided insight into Germany's position in the European HPC landscape.
Having just celebrated its ten-year anniversary in 2017, GCS works with a unique hand of assets and challenges: the support of one federal and three state governments, and the abundance of decision makers that comes with them.
The organization is charged with advancing German academic and government supercomputing while meeting the needs of industrial end users. It encompasses three centers whose procurements are driven by the requirements and demands of users (with little interest in or affection for peak speed), bridging the gap between the bleeding edge and everyday usage.
To meet the needs of German and European HPC alike, each of the nation's three supercomputing centers has been assigned a purpose spanning the gamut of demand, from groundbreaking research to applied science. And each center's focus brings with it a unique education and hardware strategy.
JSC – Jülich Supercomputing Centre at Forschungszentrum Jülich
Poised on the bleeding edge of HPC, JSC is known for its research and its work with applications that can exploit the most recent technology, often through experimental systems. Most notably, Jülich is part of the 23-country effort to bring together neuroscientists, physicians and computer scientists to simulate the complete human brain within the next ten years using a future supercomputer.
LRZ – Leibniz Supercomputing Centre of the Bavarian Academy of Sciences in Garching near Munich
Known for its emphasis on geophysics and astrophysics, LRZ is positioned as a general-purpose HPC center. With a focus on more ‘stable,’ less bleeding-edge systems, the Munich center is meant to act as a bridge between the HPC extremes while still exploring innovations such as cooling technologies.
HLRS – High-Performance Computing Center Stuttgart
Catering to industry end users with a focus on engineering and stable hardware requirements, HLRS represents the far end of the GCS spectrum. The purpose here, Resch explained, is not bleeding-edge research, but approaching modeling and simulation as a production activity. HLRS operates the Cray XC40 “Hazel Hen” system, currently number 27 on the Top500 with 5.7 Linpack petaflops.
Together, GCS’s three centers are meant to harmonize, winning over decision makers in government by serving SMEs and broader European HPC development alike. And while Resch did not delve into the nuances of funding, joking that it would be too difficult for anyone but accountants to explain, he noted that the overall funding schema is driven by three projects: PetaGCS, SiVeGCS and InHPC.
Funding for the first phase of GCS came via PetaGCS, a project spanning 2008 to 2019 with a €400,000,000 budget. PetaGCS was designed to bring each of the three centers two petascale systems over the lifetime of the project.
Next is SiVeGCS, introduced in 2017 and running through 2025 with a €460,000,000 budget. SiVeGCS steps beyond the purview of PetaGCS to address user support, simplified and unified access to resources, and improved training.
Finally, InHPC was issued with a €15,000,000 budget running from 2017 to 2021, with the goal of improving networking so that any user, no matter their location, can run jobs on any supercomputer in the network over 100-gigabit and 200-gigabit connections. At this stage, InHPC has already connected the centers and brought in new systems, with JSC’s Juwels system now online and LRZ’s next-generation SuperMUC-NG expected by year’s end.
New GCS Systems Through SiVeGCS
Juwels (short for Jülich Wizard for European Leadership Science) debuted on the just-updated Top500 list at number 23. Built by European supplier Atos (with support from Intel) and leveraging a GPFS file system, the Juwels phase-one system delivers 9.9 petaflops of peak performance and 6.2 Linpack petaflops. Forty-eight Nvidia V100 GPU nodes, not included in the Top500 run, bring the system up to 12 petaflops. Juwels represents the first module of a scalable machine, with an additional module slated to deliver an extra 50+ petaflops in 2019.
In Munich at LRZ, the SuperMUC-NG system, supplied by Intel and Lenovo, is expected to deliver 26.7 petaflops peak across 6,400 direct-water-cooled Lenovo ThinkSystem SD650 compute nodes. A final system, earmarked for HLRS in Stuttgart, is currently in procurement and expected at the end of 2019.
Where GCS Goes From Here
Moving forward, GCS’s next major goal centers on the global exascale race, beginning with the installation of a pre-exascale system in 2020 and the deployment of exascale solutions at all three centers by 2023 to close the gap between high-end and everyday use.
And with regard to the race itself, Resch says that the hope is to establish Jülich as the home of Europe’s first exascale system.