Scientists to Tap Exascale Computing to Unlock the Mystery of our Accelerating Universe

By Rob Johnson

August 14, 2019

The universe and everything in it roared to life with the Big Bang approximately 13.8 billion years ago, and it has continued expanding ever since. While we have a good understanding of the early universe, its fate billions of years into the future poses an equally puzzling question. Will gravity eventually collapse everything back together in a Big Crunch, or will our cosmological balloon continue expanding forever? The actual situation, as it turns out, is quite unexpected. In 1998, scientists determined that the universe’s rate of expansion is accelerating. This Nobel Prize-winning discovery answered a big question but catalyzed an even bigger one – how can this be happening?
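
One way to see why accelerated expansion demands an explanation is the Friedmann acceleration equation from general relativity. The worked formula below is standard textbook cosmology, added here for context; it is not drawn from the article or from Dr. Heitmann’s models:

% Acceleration of the cosmic scale factor a(t), for energy density rho
% and pressure p (standard Friedmann acceleration equation):
\[
  \frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right)
\]
% Ordinary matter has p >= 0, so it can only decelerate the expansion.
% A component with p < -rho c^2 / 3 (dark energy; p = -rho c^2 for a
% cosmological constant) flips the sign and drives acceleration.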

Fascinated by this mystery, Dr. Katrin Heitmann, Physicist and Computational Scientist in the High Energy Physics Division at the U.S. Department of Energy’s (DOE) Argonne National Laboratory, dedicates her career to understanding the mechanisms propelling the unexpected behavior of the cosmos. Her efforts in the field of computational cosmology also extend to her work as Computing Coordinator for the Large Synoptic Survey Telescope Dark Energy Science Collaboration.

Dr. Katrin Heitmann, Physicist and Computational Scientist in the High Energy Physics Division at the U.S. Department of Energy’s Argonne National Laboratory

Like many mysteries facing modern researchers, the questions Dr. Heitmann seeks to answer are perplexing. “The universe we can observe with traditional scientific methods represents about five percent of its total composition. It’s a bit unsettling to think we do not have a clear understanding of the dark matter and dark energy comprising ninety-five percent of the cosmos – what we often refer to as the Dark Universe,” said Dr. Heitmann. “Understanding the nature of these elusive cosmological building blocks means that we must develop comprehensive mathematical models, using available data, to simulate the structures in the universe and how they evolve.”

Cosmological simulations

Dr. Heitmann’s models and simulations run atop the specialized Hardware/Hybrid Accelerated Cosmology Code (HACC) developed in partnership with her colleagues at Argonne. HACC is the only cosmology code suite designed from the ground up for enormous-scale simulations regardless of a supercomputing system’s architecture. Dr. Heitmann and the HACC team also maintain CosmoTools, the analysis toolkit that accompanies HACC.
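
HACC itself is a large production code, but the core computational pattern of a particle-based cosmology simulation can be sketched in a few lines. The snippet below is an illustrative toy in Python (direct-summation gravity with a leapfrog integrator), not HACC’s actual particle-mesh algorithms or API:

import numpy as np

def gravitational_accelerations(positions, masses, softening=0.01, G=1.0):
    """Direct-summation gravity: an O(N^2) toy stand-in for the
    particle-mesh and short-range solvers used by production codes."""
    # Pairwise displacement vectors r_ij = x_j - x_i.
    diff = positions[None, :, :] - positions[:, None, :]
    dist2 = (diff ** 2).sum(axis=-1) + softening ** 2  # Plummer softening
    inv_r3 = dist2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)  # no self-force
    return G * (diff * inv_r3[:, :, None] * masses[None, :, None]).sum(axis=1)

def leapfrog_step(positions, velocities, masses, dt):
    """Kick-drift-kick leapfrog, a standard N-body integrator."""
    acc = gravitational_accelerations(positions, masses)
    velocities = velocities + 0.5 * dt * acc   # half kick
    positions = positions + dt * velocities    # drift
    acc = gravitational_accelerations(positions, masses)
    velocities = velocities + 0.5 * dt * acc   # half kick
    return positions, velocities

# Toy usage: 1,000 random particles; production runs evolve trillions.
rng = np.random.default_rng(42)
pos = rng.uniform(0.0, 1.0, size=(1000, 3))
vel = np.zeros_like(pos)
mass = np.ones(1000)
for _ in range(10):
    pos, vel = leapfrog_step(pos, vel, mass, dt=1e-3)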

“Naturally, the success of our work is dependent on obtaining the best data from which to create our mathematical models for dark energy’s impact on the expansion of the universe,” noted Dr. Heitmann. “Even some of the most advanced imaging devices, like the Hubble Space Telescope, cannot obtain enough sky area for our simulations. We need to cast the net wider. Using existing cosmological survey data obtained from satellites, telescopes, and ground-based antennae, we have access to optical data and also information extracted from other wavelengths, including gamma rays, microwaves, and radio waves.”

Combined, the mix of data creates a more holistic view of the universe, which in turn helps Dr. Heitmann and her team hone their models and simulations to mirror observations of our universe’s behavior.

This simulation of a massive structure, a so-called cluster of galaxies, was run on Theta as part of the original Early Science Program (ESP). The mass of the object is 5.6 × 10^14 solar masses. The color shows the temperature, and white areas show the baryon density field. Image courtesy of JD Emberson and the HACC team.

Additional surveys, beginning in 2022 with the National Science Foundation (NSF) and DOE’s Large Synoptic Survey Telescope (LSST) in Chile, will augment existing cosmological data with far larger data sets captured at high resolution. Mapping billions of galaxies, each with billions of stars, represents a considerable undertaking. Using a mirror over 25 feet wide, LSST will capture 15 terabytes of data each night for over ten years, ultimately creating the most comprehensive survey of our universe. The resulting information will make essential contributions to Dr. Heitmann’s work, offering the nuanced details needed to understand the nature of the “Dark Universe” and hone a mathematical model to emulate it.
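
To put those numbers in perspective, a quick back-of-envelope estimate illustrates the scale. The arithmetic below is ours, with an assumed observing cadence; these are not official survey figures:

# Back-of-envelope LSST data volume (illustrative assumptions only):
# 15 TB per night over a ten-year survey.
tb_per_night = 15
nights_per_year = 330   # assumed observing cadence, not an official figure
years = 10
total_pb = tb_per_night * nights_per_year * years / 1000.0
print(f"~{total_pb:.0f} PB of raw images over the survey")  # ~50 PB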

Embracing HPC

Utilizing the massive data sets that describe our ever-expanding universe necessitates the speed and scale of the world’s most powerful high-performance computing (HPC) systems.

“Many years ago, the desire to understand the nature of the Dark Universe captured my curiosity. In 2000, I joined the team at Los Alamos National Laboratory and had the opportunity to use the Roadrunner HPC system for cosmological research in 2008. At that time, Roadrunner represented cutting-edge performance for demanding computational tasks. As our simulation and data requirements increased in size, though, that system had trouble keeping up with us,” she said. “More recently, Argonne National Laboratory’s Mira and Theta systems advanced our work in important ways. Today, we use the Summit system at Oak Ridge National Laboratory. Summit offers remarkable performance in the 200 petaflop range, but our future simulations will benefit from even greater speed than the fastest supercomputers existing today.”

Moving to exascale computing with Aurora

In 2021, Aurora, one of the first exascale[*] computing systems in the United States, will arrive at Argonne National Laboratory. Based on the Cray Shasta architecture, Aurora will combine future Intel Xeon Scalable processors, a new Xe GPU architecture that will serve as an acceleration companion to the Xeon processors, and over 10 petabytes of memory. Its performance will exceed an exaflop, which equates to a billion billion calculations per second.

In preparation for Aurora’s deployment, the Argonne Leadership Computing Facility’s (ALCF) Early Science Program (ESP) awarded pre-production time on the system to researchers across a diverse array of scientific disciplines, including health sciences, energy, chemistry, particle physics, and cosmology. The ESP project teams, including Dr. Heitmann’s, will be among the first research groups in the world to use an exascale system. In the process, they will pave the way for other scientific applications to run on Aurora.

“In the past, our simulations were done in smaller volumes. Simulations performed on Mira seemed fast at the time, but more extensive cosmological simulations faced practical limits due to computing power and memory. The Summit system can accomplish in one day the type of computations that took Mira several days. However, Aurora will exceed Summit’s speed by a factor of five. That level of performance will give us the ability to use more finely resolved models to achieve greater insights, at higher resolution, and in a much shorter timeframe,” noted Dr. Heitmann. “We’re very excited that our team will have early access to Aurora’s extreme-scale performance and run our simulations at a truly universal scale.”
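
Taking the figures quoted here at face value, the arithmetic behind that factor of five is straightforward. The short sketch below uses illustrative numbers drawn from the article, not official benchmarks:

# Rough scale comparison from the figures quoted above (illustrative only).
summit_flops = 200e15            # Summit: ~200 petaflops
aurora_flops = 5 * summit_flops  # "factor of five" -> ~1e18 FLOPS, i.e. 1 exaflop
print(f"Aurora estimate: {aurora_flops:.1e} FLOPS")  # ~1.0e+18, exascale

# A job that took Mira several days and Summit one day would, at this
# ratio, take Aurora only a few hours.
summit_days = 1.0
aurora_hours = summit_days * 24 / 5
print(f"~{aurora_hours:.1f} hours on Aurora for a one-day Summit job")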

Discoveries ahead

Over the next decade, especially with the forthcoming Aurora supercomputer, Dr. Heitmann anticipates critical discoveries in the field of computational cosmology she helps pioneer. “Our big hope for our research is obtaining a deeper understanding of the Dark Universe – something we know little about today,” she said. “In ten years, we hope to have a deeper knowledge of the ninety-five percent of our universe we cannot observe directly. With new data, optimized models, and detailed simulations which reflect our direct observations of the universe’s growth, we will have a much better comprehension of how all the components of our cosmos fit together. I deeply enjoy what I do, and it’s very fulfilling to contribute to an understanding of – quite literally – the big picture.”

[*]  Editor’s note: Neither Intel nor the DOE has indicated publicly whether Aurora is expected to reach 1 exaflops Linpack performance, which we consider the minimum threshold for “exascale computing.”

About the Author

Rob Johnson spent much of his professional career consulting for a Fortune 25 technology company. Currently, Rob owns Fine Tuning, LLC, a strategic marketing and communications consulting company based in Portland, Oregon. As a technology, audio, and gadget enthusiast his entire life, Rob also writes for TONEAudio Magazine, reviewing high-end home audio equipment.

Feature image caption: The Helix Nebula is a large planetary nebula located in the constellation Aquarius. Source: NASA photo with artistic rendering (via Shutterstock)
