Exascale Computing Project Announces $48 Million to Establish Exascale Co-Design Centers

November 11, 2016

OAK RIDGE, Tenn., Nov. 11 — The Department of Energy’s Exascale Computing Project (ECP) today announced that it has selected four co-design centers as part of a four-year, $48 million funding award. The first year is funded at $12 million, to be allocated evenly among the four award recipients.

The ECP is responsible for the planning, execution, and delivery of the technologies necessary for a capable exascale ecosystem, including software, applications, hardware, and early testbed platforms, to support the nation’s exascale imperative.

According to Doug Kothe, ECP Director of Application Development, “Co-design lies at the heart of the Exascale Computing Project. ECP co-design, an intimate interchange of the best that hardware technologies, software technologies, and applications have to offer each other, will be a catalyst for delivery of exascale-enabling science and engineering solutions for the U.S.” Kothe continued, “By targeting common patterns of computation and communication, known as ‘application motifs,’ we are confident that these ECP co-design centers will knock down key performance barriers and pave the way for applications to exploit all that capable exascale has to offer.”

Exascale refers to computing systems at least 50 times faster than the nation’s most powerful supercomputers in use today.
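
For scale, taking the roughly 20-petaflop Titan system at Oak Ridge, the fastest U.S. machine at the time, as the reference point (an assumption not stated in the announcement), the target works out to

    50 \times 2 \times 10^{16}\ \mathrm{FLOP/s} = 10^{18}\ \mathrm{FLOP/s} = 1\ \text{exaflop}.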

The development of capable exascale systems requires an interdisciplinary engineering approach in which the developers of the software ecosystem, the hardware technology, and a new generation of computational science applications are collaboratively involved in a participatory design process referred to as co-design. The co-design process is paramount to ensuring that future exascale applications adequately reflect the complex interactions and trade-offs associated with the many new and sometimes conflicting design options, enabling these applications to tackle problems they currently can’t address.

According to ECP Director Paul Messina, “The establishment of these and future co‑design centers is foundational to the creation of an integrated, usable, and useful exascale ecosystem. After a lengthy review, we are pleased to announce that we have initially selected four proposals for funding. The establishment of these co-design centers, following on the heels of our recent application development awards, signals the momentum and direction of ECP as we bring together the necessary ecosystem and infrastructure to drive the nation’s exascale imperative.”

The four selected co-design proposals and their principal investigators are as follows:

CODAR: Co-Design Center for Online Data Analysis and Reduction at the Exascale. Principal Investigator: Ian Foster, Argonne National Laboratory Distinguished Fellow.

This co-design center will focus on overcoming the rapidly growing gap between compute speed and storage I/O rates by evaluating, deploying, and integrating novel online data analysis and reduction methods for the exascale. Working closely with ECP applications, CODAR will undertake a focused co-design process that targets both common and domain-specific data analysis and reduction methods, with the goal of allowing application developers to choose and configure methods to output just the data needed by the application. CODAR will engage directly with providers of ECP hardware, system software, programming models, data analysis and reduction algorithms, and applications to better understand and guide trade-offs in the development of exascale systems, applications, and software frameworks, given constraints on application development costs, application fidelity, performance portability, scalability, and power efficiency.
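
To make the idea of online data reduction concrete, below is a minimal, hypothetical Python sketch (not CODAR software): a simulation compares each new snapshot against the last one it wrote and stores only the grid points that changed by more than a tolerance, so far less data ever reaches storage. The function name reduce_snapshot, the tolerance, and the toy field are all invented for illustration.

    import numpy as np

    def reduce_snapshot(field, previous, abs_tol=1e-3):
        """Keep only grid points whose value changed by more than abs_tol
        since the last stored snapshot; everything else is dropped."""
        changed = np.abs(field - previous) > abs_tol
        indices = np.flatnonzero(changed)       # locations worth storing
        values = field.flat[indices]            # corresponding values
        return indices, values                  # compact output written instead of the full field

    # Toy driver: each "time step" writes the reduced data rather than the full 512x512 field.
    rng = np.random.default_rng(0)
    previous = np.zeros((512, 512))
    for step in range(5):
        field = previous + 1e-4 * rng.standard_normal(previous.shape)
        field[100:110, 200:210] += 0.05         # a localized feature worth keeping
        idx, vals = reduce_snapshot(field, previous)
        print(f"step {step}: storing {idx.size} of {field.size} values")
        previous = field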

“Argonne is pleased to be leading CODAR efforts in support of the Exascale Computing Program,” said Argonne Distinguished Fellow Ian Foster. “We aim in CODAR to co-optimize applications, data services, and exascale platforms to deliver the right bits in the right place at the right time.”

Block-Structured AMR Co-Design Center. Principal Investigator: John Bell, Lawrence Berkeley National Laboratory.

The Block-Structured Adaptive Mesh Refinement Co-Design Center will be led by Lawrence Berkeley National Laboratory with support from Argonne National Laboratory and the National Renewable Energy Laboratory. The goal is to develop a new framework, AMReX, to support the development of block-structured adaptive mesh refinement (AMR) algorithms for solving systems of partial differential equations (PDEs) with complex boundary conditions on exascale architectures. Block-structured AMR provides a natural framework in which to focus computing power on the most critical parts of the problem in the most computationally efficient way possible. Block-structured AMR is already widely used to solve many problems relevant to DOE. Specifically, at least six of the 22 exascale application projects announced last month—in the areas of accelerators, astrophysics, combustion, cosmology, multiphase flow, and subsurface flow—will rely on block-structured AMR as part of the ECP.
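
As a rough illustration of the block-structured AMR idea (not the AMReX framework itself), the sketch below flags cells where the solution varies sharply and collects them into fixed-size rectangular blocks that would receive a finer mesh; the test field, block size, and gradient threshold are arbitrary choices made for this example.

    import numpy as np

    def flag_cells(u, spacing, threshold):
        """Flag cells whose gradient magnitude exceeds a threshold; in
        block-structured AMR these are the cells that need refinement."""
        gx, gy = np.gradient(u, spacing)
        return np.hypot(gx, gy) > threshold

    def blocks_to_refine(flags, block=8):
        """Group the domain into block x block tiles and return the tiles
        containing any flagged cell -- the rectangular patches to refine."""
        tiles = []
        for i in range(0, flags.shape[0], block):
            for j in range(0, flags.shape[1], block):
                if flags[i:i + block, j:j + block].any():
                    tiles.append((i, j))
        return tiles

    # Toy field with a sharp front: refinement concentrates around the front.
    n = 64
    x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
    u = np.tanh((x - 0.5) / 0.02)
    patches = blocks_to_refine(flag_cells(u, spacing=1.0 / (n - 1), threshold=5.0))
    print(f"{len(patches)} of {(n // 8) ** 2} blocks refined")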

“This co-design center reflects the important role of adaptive mesh refinement in accurately simulating problems at scales ranging from the edges of flames to global climate to the makeup of the universe and how AMR will be critical to tackling problems at the exascale,” said David Brown, Director of Berkeley Lab’s Computational Research Division. “It’s also important to note that AMR will be a critical component in one third of the 22 exascale application projects announced in September, which will help ensure that researchers can make productive use of exascale systems when they are deployed.”

Center for Efficient Exascale Discretizations (CEED). Principal Investigator: Tzanio Kolev, Lawrence Livermore National Laboratory.

Fully exploiting future exascale architectures will require a rethinking of the algorithms used in the large-scale applications that advance many science areas vital to DOE and NNSA, such as global climate modeling, turbulent combustion in internal combustion engines, nuclear reactor modeling, additive manufacturing, subsurface flow, and national security applications. The newly established Center for Efficient Exascale Discretizations (CEED) in DOE’s Exascale Computing Project (ECP) aims to help these DOE/NNSA applications take full advantage of exascale hardware by using state-of-the-art ‘high-order discretizations’ that provide an order of magnitude performance improvement over traditional methods.

In simple mathematical terms, discretization denotes the process of dividing a geometry into finite elements, or building blocks, in preparation for analysis. This process, which can dramatically improve application performance, involves making simplifying assumptions to reduce demands on the computer, but with minimal loss of accuracy. Recent developments in supercomputing make it increasingly clear that the high-order discretizations on which CEED is focused have the potential to achieve optimal performance and deliver fast, efficient, and accurate simulations on exascale systems.
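
A simple way to see why high-order methods pay off is to compare a low-order and a high-order rule on the same smooth problem: the high-order version reaches a given accuracy with far fewer elements, and therefore far less work. The Python sketch below makes that comparison for numerical integration (midpoint rule versus 4-point Gauss quadrature); it is an analogy for illustration only, not CEED’s finite element software.

    import numpy as np

    def composite_rule(f, a, b, n_cells, nodes, weights):
        """Integrate f over [a, b] by splitting it into n_cells elements and
        applying a reference quadrature rule (nodes/weights on [-1, 1]) in each."""
        edges = np.linspace(a, b, n_cells + 1)
        total = 0.0
        for left, right in zip(edges[:-1], edges[1:]):
            half, mid = 0.5 * (right - left), 0.5 * (right + left)
            total += half * np.sum(weights * f(mid + half * nodes))
        return total

    exact = np.sin(1.0)                                       # integral of cos on [0, 1]
    low = (np.array([0.0]), np.array([2.0]))                  # midpoint rule: 2nd-order accurate
    high = np.polynomial.legendre.leggauss(4)                 # 4-point Gauss: 8th-order accurate

    for n in (2, 4, 8, 16):
        err_low = abs(composite_rule(np.cos, 0.0, 1.0, n, *low) - exact)
        err_high = abs(composite_rule(np.cos, 0.0, 1.0, n, *high) - exact)
        print(f"{n:3d} cells: low-order error {err_low:.1e}, high-order error {err_high:.1e}")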

The CEED Co-Design Center is a research partnership of two DOE labs and five universities. Partners include Lawrence Livermore National Laboratory; Argonne National Laboratory; the University of Illinois Urbana-Champaign; Virginia Tech; the University of Tennessee, Knoxville; the University of Colorado Boulder; and Rensselaer Polytechnic Institute (RPI).

“The CEED team I have the privilege to lead is dedicated to the development of next-generation discretization software and algorithms that will enable a wide range of applications to run efficiently on future hardware,” said CEED director Tzanio Kolev of Lawrence Livermore National Laboratory. “Our co-design center is focused first and foremost on applications. We bring to this enterprise a collaborative team of application scientists, computational mathematicians and computer scientists with a strong track record of delivering performant software on leading-edge platforms. Collectively, we support hundreds of users in national labs, industry and academia and we are committed to pushing simulation capabilities to new levels across an ever-widening range of applications.”

Co-Design Center for Particle Applications (CoPA). Principal Investigator: Tim Germann, Los Alamos National Laboratory.

This co-design center will serve as a centralized clearinghouse for particle-based ECP applications, communicating their requirements and evaluating potential uses and benefits of ECP hardware and software technologies using proxy applications. Particle-based simulation approaches are ubiquitous in computational science and engineering; they involve the interaction of each particle with its environment, through direct particle-particle interactions at shorter ranges and/or through particle-mesh interactions with a local field set up by longer-range effects. Best practices in code portability, data layout and movement, and performance optimization will be developed and disseminated as sustainable, productive, and interoperable co-designed numerical recipes for particle-based methods that meet application requirements within the design space of software technologies and the constraints of exascale hardware. The ultimate goal is the creation of scalable, open exascale software platforms suitable for use by a variety of particle-based simulations.
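
To make the short-range/long-range split concrete, here is a hypothetical Python sketch (not CoPA code) of the short-range piece: a cell list ensures each particle only examines particles in nearby cells, so the cost grows roughly linearly with particle count instead of quadratically. The long-range contribution, which a production code would handle with a particle-mesh solve, is omitted, and the pair potential, box size, and cutoff are placeholders.

    import numpy as np
    from collections import defaultdict
    from itertools import product

    def short_range_energy(positions, box, cutoff):
        """Sum a toy pairwise potential over all pairs closer than the cutoff,
        using a cell list so only neighboring cells are searched."""
        n_cells = int(box // cutoff)
        assert n_cells >= 3, "this sketch assumes at least 3 cells per side"
        size = box / n_cells
        cells = defaultdict(list)
        for idx, p in enumerate(positions):
            cells[tuple((p // size).astype(int) % n_cells)].append(idx)

        energy = 0.0
        for cell, members in cells.items():
            for offset in product((-1, 0, 1), repeat=3):
                neighbor = tuple((c + o) % n_cells for c, o in zip(cell, offset))
                for i in members:
                    for j in cells.get(neighbor, ()):
                        if j <= i:                     # count each pair once
                            continue
                        d = positions[i] - positions[j]
                        d -= box * np.round(d / box)   # periodic minimum image
                        r2 = float(d @ d)
                        if r2 < cutoff * cutoff:
                            energy += 1.0 / r2         # placeholder repulsive pair potential
        return energy

    rng = np.random.default_rng(1)
    points = rng.uniform(0.0, 10.0, size=(200, 3))
    print(f"short-range energy: {short_range_energy(points, box=10.0, cutoff=2.5):.3f}")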

“Los Alamos is delighted to be leading the Co-Design Center for Particle-Based Methods: From Quantum to Classical, Molecular to Cosmological, which builds on the success of ExMatEx, the Exascale Co-Design Center for Materials in Extreme Environments,” said John Sarrao, Associate Director for Theory, Simulation, and Computation at Los Alamos. “Advancing deterministic particle-based methods is essential for simulations at the exascale, and Los Alamos has long believed that co-design is the right approach for advancing these frontiers. We look forward to partnering with our colleague Laboratories in successfully executing this important element of the Exascale Computing Project.”

About ECP

The ECP is a collaborative effort of two DOE organizations—the Office of Science and the National Nuclear Security Administration. As part of President Obama’s National Strategic Computing Initiative, ECP was established to develop a capable exascale ecosystem, encompassing applications, system software, hardware technologies and architectures, and workforce development, to meet the scientific and national security mission needs of DOE in the mid-2020s time frame.

About the Office of Science

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit http://science.energy.gov/.

About NNSA

Established by Congress in 2000, NNSA is a semi-autonomous agency within the U.S. Department of Energy responsible for enhancing national security through the military application of nuclear science. NNSA maintains and enhances the safety, security, and effectiveness of the U.S. nuclear weapons stockpile without nuclear explosive testing; works to reduce the global danger from weapons of mass destruction; provides the U.S. Navy with safe and effective nuclear propulsion; and responds to nuclear and radiological emergencies in the U.S. and abroad. https://nnsa.energy.gov


Source: The Exascale Computing Project
