ExaHyPE Research Project Developing Software for Exascale-Class Supercomputers

October 28, 2015

MUNICH, Germany, Oct. 28 — A billion billion (10¹⁸) computer operations per second, or 1 exaflop/s, is the level of performance the next generation of supercomputers should be able to deliver. Programming such supercomputers, however, is a challenge. In October 2015, the European Commission began funding "ExaHyPE", an international project coordinated at the Technische Universität München (TUM) that seeks to establish the algorithmic foundations for exascale supercomputers over the next four years. The aim is to develop novel software, initially for simulations in geophysics and astrophysics, which will be published as open-source software for further use. The grant totals EUR 2.8 million.

Computer-based simulations drive progress in science. Alongside theory and experiment, they have long been crucial for acquiring knowledge and insight, and supercomputers allow the computation of increasingly complex and precise models. The EU project ExaHyPE ("An Exascale Hyperbolic PDE Engine") brings together an interdisciplinary team of researchers from seven institutions in Germany, Italy, the United Kingdom, and Russia, and fits well into Europe's strategy of developing an exascale-class supercomputer by 2020. To leverage the immense processing power of exascale systems for correspondingly comprehensive simulation tasks, the entire supercomputing infrastructure, including the software, must be prepared for such systems.
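
The "hyperbolic PDEs" of the project's name are equations describing phenomena that propagate as waves at finite speed, such as seismic waves or the dynamics of merging stars. As a minimal illustration of the class of problems such an engine solves (a Python sketch, not ExaHyPE code; all names are illustrative), the snippet below advances the 1D linear advection equation u_t + a·u_x = 0 with a first-order upwind finite-volume scheme:

import numpy as np

def upwind_advection(u, a, dx, dt, steps):
    """Advance u_t + a*u_x = 0 with a first-order upwind scheme.

    Assumes a > 0 and periodic boundaries; dt must satisfy the
    CFL condition a*dt/dx <= 1 for stability.
    """
    c = a * dt / dx                      # CFL number
    for _ in range(steps):
        u = u - c * (u - np.roll(u, 1))  # upwind difference
    return u

# Example: advect a Gaussian pulse once around a periodic domain.
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.5) ** 2)
u = upwind_advection(u0, a=1.0, dx=x[1] - x[0], dt=0.004, steps=250)

Production engines like ExaHyPE use far higher-order schemes on adaptive meshes, but the basic structure, repeatedly updating cell values from their neighbours, is the same.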

Powerful, flexible and energy-efficient

Supercomputing of the future poses immense challenges for the ExaHyPE researchers. Currently, the biggest obstacle to achieving exascale computing is energy consumption. Today, the world’s fastest supercomputers – Tianhe-2 (China), Titan (US), Sequoia (US) and the K Computer (Japan) – operate in the petaflop/s range (10¹⁵ computer operations per second) and require between 8 and 18 megawatts (source: www.top500.org), with energy costs amounting to about US$1 million per megawatt per year. “Based on current technologies, an exascale computer with a demand of close to 70 megawatts would represent both a financial and an infrastructural challenge,” explains ExaHyPE coordinator Professor Michael Bader of TUM. “That is why simulation software developed as part of the ExaHyPE project will be consistently designed for the requirements of future energy-efficient hardware.”
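
The quoted figures make the economics easy to check. A back-of-the-envelope calculation, using only the numbers from this article:

# Back-of-the-envelope check of the figures quoted above.
cost_per_mw_year = 1.0e6   # about US$1 million per megawatt per year
petascale_power = (8, 18)  # megawatts, today's fastest systems
exascale_power = 70        # megawatts, projected exascale demand

print([p * cost_per_mw_year for p in petascale_power])  # $8M to $18M per year
print(exascale_power * cost_per_mw_year)                # about $70M per year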

On the hardware side, extreme parallelization is to be expected. “By 2020, supercomputers will encompass hundreds of millions of processor cores,” Bader adds. “At the same time, the hardware – pushed to its physical limits to deliver further performance gains while running as energy-efficiently as possible – will increasingly be plagued by interruptions and fluctuating performance. ExaHyPE will therefore examine the dynamic distribution of computational work across processor cores – even when cores fail in the middle of a calculation.”
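
A toy model may help illustrate what failure-aware work distribution means. This is a hedged sketch, not the project's algorithm; Core, run_with_failures and the failure probability are invented for illustration:

import random
from dataclasses import dataclass

@dataclass
class Core:
    name: str
    failed: bool = False

def run_with_failures(tasks, cores, fail_prob=0.05):
    """Toy failure-aware scheduler: each task goes to the least-loaded
    surviving core; a task lost to a core failure is simply requeued."""
    done = {c.name: [] for c in cores}
    pending = list(tasks)
    while pending:
        alive = [c for c in cores if not c.failed]
        if not alive:
            raise RuntimeError("all cores have failed")
        core = min(alive, key=lambda c: len(done[c.name]))
        task = pending.pop(0)
        if random.random() < fail_prob:   # simulated hardware failure:
            core.failed = True            # mark the core as dead and
            pending.append(task)          # put its task back in the queue
        else:
            done[core.name].append(task)
    return done

cores = [Core(f"core{i}") for i in range(4)]
print(run_with_failures(list(range(20)), cores))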

Another objective is to reduce communication within the hardware while increasing parallelization, since every data transfer comes at the cost of energy. In ten years, supercomputers will be able to run calculations 1,000 times faster than today, but memory access times will not improve at the same rate. The algorithms used should therefore be inherently memory-efficient and require as little data transfer as possible to ensure fast, energy-efficient operation.
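
Whether a computation is limited by memory traffic or by arithmetic can be estimated with a simple roofline-style model; the peak numbers below are illustrative placeholders, not measurements of any real machine:

# Roofline-style estimate (illustrative peak numbers, not measurements).
peak_flops = 1.0e15       # floating-point operations per second
peak_bandwidth = 1.0e13   # bytes per second from main memory

def attainable(flops_per_byte):
    """Performance is capped by whichever resource is exhausted first:
    arithmetic units (compute-bound) or memory traffic (memory-bound)."""
    return min(peak_flops, peak_bandwidth * flops_per_byte)

# Low arithmetic intensity (e.g. a simple stencil sweep) is memory-bound;
# high intensity (e.g. dense, high-order element updates) is compute-bound.
for intensity in (0.25, 2.0, 100.0):   # flops per byte moved
    share = attainable(intensity) / peak_flops
    print(f"{intensity:6.2f} flop/byte -> {share:.0%} of peak")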

To make the most of a limited amount of memory, the consortium is developing new scalable algorithms that dynamically increase the resolution of a simulation – that is, the density of its numerical observation points – wherever the simulation requires it, and only there. As a result, scientists will be able to keep the necessary computer operations to a minimum while achieving the greatest possible accuracy.
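
The technique of concentrating grid points only where they are needed is commonly realized as adaptive mesh refinement. A minimal 1D sketch (illustrative only; the error indicator and tolerance are invented):

def refine(cells, error, tol=0.01, max_level=8):
    """One pass of 1D adaptive refinement: split every cell whose
    error indicator exceeds tol, up to max_level halvings."""
    refined = []
    for left, right, level in cells:
        if error(left, right) > tol and level < max_level:
            mid = 0.5 * (left + right)
            refined.append((left, mid, level + 1))
            refined.append((mid, right, level + 1))
        else:
            refined.append((left, right, level))
    return refined

# Example: keep refining around a sharp feature at x = 0.3; cells away
# from the feature stay coarse, so work concentrates where it is needed.
indicator = lambda a, b: (b - a) if a <= 0.3 <= b else 0.0
cells = [(0.0, 1.0, 0)]
for _ in range(6):
    cells = refine(cells, indicator)
print(len(cells), "cells, finest level", max(c[2] for c in cells))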

Two application scenarios: Earthquakes and gamma-ray bursts

The ExaHyPE researchers will develop the new algorithms around two application scenarios taken from geophysics (earthquakes) and astrophysics (gamma-ray bursts). Earthquakes cannot be predicted, but simulations carried out on exascale supercomputers could help us better assess the risk of aftershocks, and regional earthquake simulations promise a better understanding of what takes place during large earthquakes and their aftershocks. In astrophysics, the ExaHyPE software will simulate pairs of orbiting neutron stars as they merge. Such systems are not only suspected of being among the strongest sources of gravitational waves but could also be the origin of gamma-ray bursts. Exascale simulations should allow researchers to study these long-standing mysteries of astrophysics in a new light.

Despite these two precisely defined application areas, the researchers want to keep the new algorithms as general as possible so that, after suitable adaptation, they can also be used in other disciplines. Examples include the simulation of climate and weather phenomena, complex flow and combustion processes in the engineering sciences, and the forecasting of natural catastrophes such as tsunamis and floods. “Our objective is to ensure that medium-size, interdisciplinary research teams are able to adapt the simulation software for their specific purposes within a year of its release,” Bader says. To guarantee rapid dissemination of the new technology, the consortium will release it as open-source software.

Comprehensive expertise through international, interdisciplinary cooperation

The ExaHyPE project objectives call for an intensive cooperation of experts across many disciplines and country borders. On the German side, the consortium includes the Technische Universität München (Prof. Dr. Michael Bader, Informatics Department, High Performance Computing), the Frankfurt Institute for Advanced Studies (Prof. Dr. Luciano Rezzolla, Institute for Theoretical Physics, Goethe Universität Frankfurt), the Ludwig-Maximilians-Universität München (Dr. Alice-Agnes Gabriel and Prof. Dr. Heiner Igel, Department of Earth and Environmental Sciences), and the Bavarian Research Alliance (Dipl.-Ing. Robert Iberl, Unit for Information & Communication Technologies). Italy is represented by Università degli Studi di Trento (Prof. Dr. Michael Dumbser, Dipartimento di Ingegneria Civile Ambientale e Meccanica) and the United Kingdom by Durham University (Dr. Tobias Weinzierl, School of Engineering and Computing Sciences). The consortium is supplemented by the Russian supercomputer vendor ZAO RSC Technologies (Alexander Moskovsky, CEO).

About the Bavarian Research Alliance (BayFOR)

The Bavarian Research Alliance GmbH provided the ExaHyPE consortium with extensive support during the application phase and assisted in drafting the contract with the European Commission. In the current project, BayFOR is responsible for project management and the dissemination of scientific results. BayFOR is an organization whose purpose is to promote Bavaria as a centre for science and innovation within the European Research Area. It supports and advises Bavarian scientists and stakeholders from the private sector on European research, development and innovation funding, with a focus on the Framework Programme for Research and Innovation “Horizon 2020”. As a partner in the Enterprise Europe Network (EEN), BayFOR provides specific advice for SMEs interested in EU research and innovation projects. BayFOR is a partner institution in the Bavarian “Haus der Forschung” (www.hausderforschung.bayern.de/en) and is supported by the Bavarian State Ministry of Education, Science and the Arts. For further information, please visit www.bayfor.org/english.

Source: BayFOR
