Scientists to Tap Exascale Computing to Unlock the Mystery of our Accelerating Universe

By Rob Johnson

August 14, 2019

The universe and everything in it roared to life with the Big Bang approximately 13.8 billion years ago, and it has continued expanding ever since. While we have a good understanding of the early universe, its fate billions of years into the future poses an equally puzzling question. Will gravity eventually collapse everything back together in a Big Crunch, or will our cosmological balloon keep expanding forever? The actual answer, it turns out, is stranger than either option. In 1998, scientists determined that the universe’s rate of expansion is accelerating. This Nobel Prize-winning discovery answered a big question but raised an even bigger one: how can this be happening?
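
For readers who want the underlying physics, the puzzle can be stated compactly with the Friedmann acceleration equation from general relativity, shown here as background the article itself does not spell out (a is the cosmic scale factor, ρ the energy density, p the pressure):

```latex
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right)
```

For ordinary matter and radiation, ρ + 3p/c² is positive, so gravity should slow the expansion. Observed acceleration therefore points to a component with strongly negative pressure, which is precisely what “dark energy” names.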

Fascinated by this mystery, Dr. Katrin Heitmann, Physicist and Computational Scientist in the High Energy Physics Division at the U.S. Department of Energy’s (DOE) Argonne National Laboratory, dedicates her career to understanding the mechanisms propelling the unexpected behavior of the cosmos. Her efforts in the field of computational cosmology also extend to her work as Computing Coordinator for the Large Synoptic Survey Telescope Dark Energy Science Collaboration.

Dr. Katrin Heitmann, Physicist and Computational Scientist in the High Energy Physics Division at the U.S. Department of Energy’s Argonne National Laboratory

Like many mysteries facing modern researchers, the questions Dr. Heitmann seeks to answer are perplexing. “The universe we can observe with traditional scientific methods represents about five percent of its total composition. It’s a bit unsettling to think we do not have a clear understanding of the dark matter and dark energy comprising ninety-five percent of the cosmos – what we often refer to as the Dark Universe,” said Dr. Heitmann. “Understanding the nature of these elusive cosmological building blocks means that we must develop comprehensive mathematical models, using available data, to simulate the structures in the universe and how they evolve.”

Cosmological simulations

Dr. Heitmann’s models and simulations run atop the specialized Hardware/Hybrid Accelerated Cosmology Code (HACC), developed in partnership with her colleagues at Argonne. HACC is the only cosmology code suite designed from the ground up to run extreme-scale simulations on any supercomputing architecture. Together with the HACC team, Dr. Heitmann is also responsible for CosmoTools, the tool set that makes up HACC’s analysis library.
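
To give a flavor of what such a code computes, here is a minimal gravity-only N-body sketch in Python: a direct particle-particle force sum with leapfrog time integration. This is an illustration of the general technique only, with hypothetical helper names; HACC’s actual solvers are far more sophisticated, combining particle-mesh and short-range methods to scale across entire supercomputers.

```python
# Minimal gravity-only N-body sketch (illustrative; NOT HACC's algorithm).
import numpy as np

def accelerations(pos, mass, soft=0.01):
    """Pairwise gravitational accelerations with Plummer softening, G = 1."""
    diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]   # r_j - r_i, shape (N, N, 3)
    dist2 = (diff ** 2).sum(axis=-1) + soft ** 2            # softened |r|^2
    inv_r3 = dist2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                           # no self-force
    # acc_i = sum_j m_j * (r_j - r_i) / |r_ij|^3
    weights = mass[np.newaxis, :, np.newaxis] * inv_r3[:, :, np.newaxis]
    return (diff * weights).sum(axis=1)

def leapfrog_step(pos, vel, mass, dt):
    """One kick-drift-kick leapfrog update."""
    vel += 0.5 * dt * accelerations(pos, mass)
    pos += dt * vel
    vel += 0.5 * dt * accelerations(pos, mass)
    return pos, vel

rng = np.random.default_rng(0)
N = 512
pos = rng.uniform(-1.0, 1.0, (N, 3))   # random initial positions
vel = np.zeros((N, 3))
mass = np.full(N, 1.0 / N)             # equal-mass particles
for _ in range(100):
    pos, vel = leapfrog_step(pos, vel, mass, dt=1e-3)
```

The direct sum above costs O(N²) per step, which is exactly why production codes such as HACC replace it with mesh-based and tree-based approximations when N reaches the billions.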

“Naturally, the success of our work is dependent on obtaining the best data from which to create our mathematical models for dark energy’s impact on the expansion of the universe,” noted Dr. Heitmann. “Even some of the most advanced imaging devices, like the Hubble Space Telescope, cannot capture enough sky area for our simulations. We need to cast the net wider. Using existing cosmological survey data obtained from satellites, telescopes, and ground-based antennas, we have access to optical data as well as information extracted from other wavelengths, including gamma rays, microwaves, and radio waves.”

Combined, the mix of data creates a more holistic view of the universe, which in turn helps Dr. Heitmann and her team hone their models and simulations to mirror observations of our universe’s behavior.

This simulation of a massive structure, a so-called cluster of galaxies, was run on Theta as part of the original ESP. The mass of the object is 5.6 × 10^14 solar masses. Color shows the temperature, and white areas show the baryon density field. Image courtesy of JD Emberson and the HACC team.

Additional surveys, beginning in 2022 with the National Science Foundation’s (NSF) and DOE’s Large Synoptic Survey Telescope (LSST) in Chile, will augment existing cosmological data with far larger data sets captured at high resolution. Mapping billions of galaxies, each containing billions of stars, represents a considerable undertaking. Using a mirror over 25 feet wide, LSST will capture 15 terabytes of data each night for over ten years, ultimately creating the most comprehensive survey of our universe to date. The resulting information will make essential contributions to Dr. Heitmann’s work, offering the nuanced details needed to understand the nature of the “Dark Universe” and hone a mathematical model to emulate it.
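
Those figures imply a staggering archive. A back-of-the-envelope estimate using only the numbers quoted above (actual totals will depend on observing cadence and weather):

```python
# Rough survey data volume from the figures in this article.
tb_per_night = 15
nights_per_year = 365
years = 10
total_tb = tb_per_night * nights_per_year * years
print(f"~{total_tb:,} TB (~{total_tb / 1000:.0f} PB)")  # ~54,750 TB (~55 PB)
```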

Embracing HPC

Making use of the massive data sets that describe our ever-expanding universe requires the speed and scale of the world’s most powerful high-performance computing (HPC) systems.

“Many years ago, the desire to understand the nature of the Dark Universe captured my curiosity. In 2000, I joined the team at Los Alamos National Laboratory, and in 2008 I had the opportunity to use the Roadrunner HPC system for cosmological research. At that time, Roadrunner represented cutting-edge performance for demanding computational tasks. As our simulation and data requirements grew, though, that system had trouble keeping up with us,” she said. “More recently, Argonne National Laboratory’s Mira and Theta systems advanced the work in important ways. Today, we use the Summit system at Oak Ridge National Laboratory. Summit offers remarkable performance in the 200-petaflop range, but our future simulations will benefit from even greater speed than the fastest supercomputers that exist today.”

Moving to exascale computing with Aurora

In 2021, Aurora, one of the first exascale[*] computing systems in the United States, will arrive at Argonne National Laboratory. Based on the Cray Shasta architecture, with the underlying support of future Intel Xeon Scalable processors, a new Xe GPU architecture that will serve as an acceleration companion to the Xeon processors, and over 10 petabytes of memory, Aurora will deliver performance exceeding an exaflop, which equates to a billion billion calculations per second.
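
As a quick unit check, using only the definition of an exaflop rather than any Aurora-specific figures:

```python
# One exaflop is 10**18 floating-point operations per second,
# i.e. a billion billion (10**9 * 10**9).
exaflop = 10 ** 18
assert exaflop == 10 ** 9 * 10 ** 9
# At one exaflop, a workload of 10**21 operations would take:
print(10 ** 21 / exaflop, "seconds")  # 1000.0 seconds, roughly 17 minutes
```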

In preparation for Aurora’s deployment, the Argonne Leadership Computing Facility’s (ALCF) Early Science Program (ESP) awarded pre-production time on the system to researchers across a diverse array of scientific disciplines, including health sciences, energy, chemistry, particle physics, and cosmology. The ESP project teams, including Dr. Heitmann’s, will be among the first in the world to use an exascale system. In the process, they will pave the way for other scientific applications to run on Aurora.

“In the past, our simulations were done in smaller volumes. Simulations performed on Mira seemed fast at the time, but more extensive cosmological simulations faced practical limits imposed by computing power and memory. The Summit system can accomplish in one day the type of computations that took Mira several days. However, Aurora will exceed Summit’s speed by a factor of five. That level of performance will give us the ability to use more resolving models to achieve greater insights, at higher resolution, and in a much shorter timeframe,” noted Dr. Heitmann. “We’re very excited that our team will have early access to Aurora’s extreme-scale performance and run our simulations at a truly universal scale.”
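
Taken at face value, the quoted factors imply the following rough runtime scaling. This is illustrative only; real speedups depend on how well a given code exploits each machine:

```python
# Illustrative runtime scaling from the figures quoted above:
# Summit does in one day what took Mira several days, and Aurora is
# projected to be ~5x faster than Summit.
summit_days = 1.0          # a representative Summit-sized run
aurora_speedup = 5.0       # projected Aurora-vs-Summit factor
aurora_hours = summit_days / aurora_speedup * 24
print(f"~{aurora_hours:.1f} hours on Aurora")  # ~4.8 hours
```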

Discoveries ahead

Over the next decade, especially with the forthcoming Aurora supercomputer, Dr. Heitmann anticipates critical discoveries in the field of computational cosmology she helps pioneer. “Our big hope for our research is a deeper understanding of the Dark Universe – something we know little about today,” she said. “In ten years, we hope to have a deeper knowledge of the ninety-five percent of our universe we cannot observe directly. With new data, optimized models, and detailed simulations that reflect our direct observations of the universe’s growth, we will have a much better comprehension of how all the components of our cosmos fit together. I deeply enjoy what I do, and it’s very fulfilling to contribute to an understanding of – quite literally – the big picture.”

[*]  Editor’s note: Neither Intel nor the DOE has indicated publicly whether Aurora is expected to reach 1 exaflops Linpack performance, which we consider the minimum threshold for “exascale computing.”

About the Author

Rob Johnson spent much of his professional career consulting for a Fortune 25 technology company. Currently, Rob owns Fine Tuning, LLC, a strategic marketing and communications consulting company based in Portland, Oregon. As a technology, audio, and gadget enthusiast his entire life, Rob also writes for TONEAudio Magazine, reviewing high-end home audio equipment.

Feature image caption: The Helix Nebula is a large planetary nebula located in the constellation Aquarius. Source: NASA photo with artistic rendering (via Shutterstock)
