European Exascale Project Drives Toward Next Supercomputing Milestone

By Nicole Hemsoth

January 6, 2011

With petascale systems now deployed on three continents, the HPC industry is already looking toward the next milestone in supercomputing: exascale computing. In Europe, this activity is centered on the European Exascale Software Initiative (EESI), a project that brings together industry and government organizations committed to helping usher in the transition from petascale to exascale systems over the next decade.

To learn more about the EESI, including the organization's activities and what took place at the recent workshop in Amsterdam, we spoke with two of the key players: Jean-Yves Berthou, information technologies program director at EDF and EESI program leader, and Peter Michielse, deputy director at NWO/NCF. We also got the US perspective from Jack Dongarra, who runs the Innovative Computing Laboratory at the University of Tennessee, and Pete Beckman, director of the Argonne Leadership Computing Facility.

HPCwire: Can you describe the EESI project for those who don’t know what it is?

Jean-Yves Berthou: The goal of the European Exascale Software Initiative is to build a European vision and roadmap for the challenge of performing scientific computing on the new generation of computers, composed of millions of heterogeneous cores, that will provide multi-petaflop performance in 2010 and exaflop performance in 2020. These hardware capabilities open the way to major technological breakthroughs in computation and simulation, but they will be realized only if an international cooperative work program is set up.

This is done through a set of conferences and working groups involving a very large number of European HPC actors, both scientific software developers and users. They will investigate where Europe stands in the overall international HPC landscape, what its strengths and weaknesses are, what the priority actions are, and what modes of cooperation should be implemented between Europe and the international community. EESI will also identify the sources of competitiveness that the use of peta/exascale software can bring to Europe, and it will investigate and propose programs in education and training for the next generation of computational scientists.

The overall challenge must be faced at a worldwide level to be attainable. EESI coordinates the European contribution to the International Exascale Software Project (IESP), launched by the US Department of Energy Office of Science and led by Jack Dongarra and Pete Beckman.

EESI is an FP7 Support Action funded by the European Commission under the call INFRA-2010-3.3: "Coordination actions, conferences and studies supporting policy development, including international cooperation."

HPCwire: What are the timescales for the project?

Berthou: EESI was launched on June 1, 2010, for a duration of 18 months. A first mapping of the major HPC projects and organizations has been completed, and it has been extended worldwide using IESP inputs and international contacts. This mapping is available on the EESI website.

The EESI workplan is now progressing in two directions. A first set of four working groups is targeting the technological challenges: hardware and associated software; computer science; numerical analysis; and applicative software, that is, scientific and engineering codes. Each working group will produce its own roadmap by June 2011.

A second set of working groups will target the applicative side by looking at major grand challenges in climate and weather forecasting; industrial applications, focusing on transportation and energy; physics and engineering sciences; and life science-health-BPM. Each working group will also produce its own roadmap, integrating the technological inputs identified by the first four working groups.

The economic dimension of these challenges and their impact on European competitiveness will be specifically studied. To ensure close collaboration and sharing, an internal workshop will be held in February 2011, where each working group will be invited to present its results and roadmap.

An overall synthesis will be produced and presented at a large final public conference in Barcelona.

HPCwire: What is the funder, in this case, the European Commission, expecting to see as outputs from EESI?

Berthou: The expected output of the project is an exascale roadmap, shared by the European HPC community, and a set of recommendations to the funding agencies on the software (tools, methods and applications) to be developed for this new generation of supercomputers.

HPCwire: Why does industry feel it is important to be involved in EESI?

Berthou: Exascale systems will engage the HPC community for the next 20 years in defining new generations of applications and simulation platforms. The challenge is particularly severe for multi-physics, multi-scale simulation platforms, which will have to combine massively parallel software components developed independently of each other.

Another difficult issue is dealing with legacy codes, which are constantly evolving and have to stay at the forefront of their disciplines. This will require new numerical methods, code architectures, mesh generation tools, and visualization tools. In addition to the applications, all the software layers between the applications and the hardware need to be revisited for the move from petascale to exascale computers. Considering that 5 to 10 years are necessary to design, develop and validate a new generation of scientific applications, it is time now for industry to think about exaflop computing.

HPCwire: EESI recently held its first international workshop in Amsterdam. Can you tell us a little about that?

Peter Michielse: EESI held its first international workshop on November 9, 2010 in Amsterdam. The workshop brought together approximately 80 experts, mostly European, in the areas of software development, performance analysis, applications knowledge, funding models and governance aspects of high performance computing.

An important part of the EESI project is its eight working groups (WGs): four in the area of application grand challenges and four in enabling technologies for exaflop computing. Each WG is composed of around 15 recognized experts, selected for both expertise and geographical representation. The goal of each WG is to identify and classify the key challenges in its scientific area or technology component. This includes an analysis of European strengths and weaknesses, existing collaborations, existing projects and opportunities for Europe.

During the morning session of the workshop, each WG presented itself, including the topics it views as within its scope. Most WGs have been populated with experts, their first meetings have been planned, and an initial list of topics within each WG has been identified. During the discussion, some aspects were added to certain WGs.

The afternoon session started with an overview of the cartography results on HPC and exascale programs worldwide. It turns out that the DOE in the US is making progress on exascale software centers and co-design centers. Japan is developing its 10-plus petaflop K System, accompanied by a strategic program on High Performance Computing Infrastructure (HPCI). Less is known about strategic programs in China, but it is a fact that petaflop systems are being developed and installed there, and they have put China on top of the TOP500 list. In addition, the European Commission, within FP7, has recently opened two calls with significant funding dedicated to computing systems and exascale computing.

HPCwire: What were the main themes raised at the workshop?

Michielse: Basically, there were two important purposes for the meeting. The first was to make sure that each WG was considering the right challenges within its scientific or technology field. During the WG presentations, additional topics were recognized as belonging to particular WGs, including several aspects that typically apply to more than one WG, or even to all of them. These aspects include resilience, performance, power consumption and programmability of exascale software and systems.

The second purpose of the meeting was to learn about US and Asian exascale software efforts and, as a result, to investigate how the EESI working group activities align with those efforts and with the activities in IESP on the co-design of hardware, software and applications.

It also became clear that various challenges lie ahead for international collaboration, for instance synchronizing activities worldwide and the organizational aspects of realizing this, as well as how to cope with the confidentiality of vendor developments and intellectual property rights.

HPCwire: What is the relationship between EESI and PRACE? And between EESI and other strategic activities in Europe, for example the recent IDC European HPC report? Are they competing or complementary?

Michielse: EESI, PRACE and other strategic initiatives in Europe are not only complementary, but should also strengthen each other. The IDC report gives its view on the opportunities for Europe with respect to future HPC, while the activities of PRACE are directed at building a pan-European infrastructure of Tier-0 HPC systems. The PRACE project not only investigates the actual infrastructure and the regulations for it, but also works heavily on applications that are of high interest to European users and scientists.

An important role of EESI is to make sure that Europe is involved in global discussions on hardware, software and applications design, and in setting agendas and making choices for the benefit of European science, industry and economy. EESI could be viewed as the voice of European HPC activities in a global context. Many people active in EESI are also active in PRACE and DEISA.

HPCwire: Supercomputing is often presented as a race, with nations vying for leadership to preserve industrial, economic and research competitiveness. How does the call for collaboration in exascale balance with this? Does this differ between hardware and software?

Jack Dongarra: Supercomputing capability benefits a broad range of industries, including energy, pharmaceutical, aircraft, automobile, entertainment, and others. More powerful computing capability will allow these diverse industries to more quickly engineer superior new products that could improve a nation’s competitiveness. In addition, there are considerable flow-down benefits that will result from meeting both the hardware and software high performance computing challenges. These would include enhancements to smaller computer systems and many types of consumer electronics, from smartphones to cameras.

With respect to software, it seems clear that the scope of the effort to develop software for exascale must be truly international. In terms of its rationale, scientists in nearly every field now depend upon the software infrastructure of high-end computing to open up new areas of inquiry — for example, the very small, very large, very hazardous, very complex — to dramatically increase their research productivity, and to amplify the social and economic impact of their work.

It serves global scientific communities who need to work together on problems of global significance and leverage distributed resources in transnational configurations. In terms of feasibility, the dimensions of the task — totally redesigning and recreating, in the period of just a few years, the massive software foundation of computational science in order to meet the new realities of extreme-scale computing — are simply too large for any one country, or small consortium of countries, to undertake all on its own.

Standardization is also a minimum requirement for broad international collaboration on the development of software components. In addition, the international nature of the science will demand further development of global data management tools and standards for shared data.

Pete Beckman: One possible comparison to this effort is the International Space Station. With such a complex endeavor that targets scientific results that can benefit everyone, it is important to bring together collaborative teams of the best scientists from around the globe. By working together we can achieve more and deliver results sooner.
