Open-Source Code Nyx, Born at Berkeley Lab, Continues to Advance Cosmology Research

March 22, 2021

March 22, 2021 — Over the past decade, a coding project born out of Lawrence Berkeley National Laboratory’s Computing Sciences Area has helped advance the field of cosmology and ready it for the age of exascale computing.

Nyx – an adaptive mesh, massively parallel cosmological simulation code designed to help study the universe at its grandest levels – has become an essential tool for research into some of its smallest, most detailed features as well, allowing for critical breakthroughs in the understanding of dark matter, dark energy, and the intergalactic medium.

Nyx traces its roots back to 2010, when Peter Nugent, department head for Computational Science in Berkeley Lab’s Computational Research Division (CRD), approached CRD senior scientist Ann Almgren with the prospect of adapting Castro, an adaptive mesh astrophysics simulation code built on the AMReX software framework, for cosmology. The pair hatched a plan to create an adaptive mesh code that could represent dark matter as particles interacting with hydrogen gas while also capturing the expansion factor of the universe.

“It literally started with a conversation,” recalled Almgren, now the group lead of CRD’s Center for Computational Sciences and Engineering. “I still remember when Peter first raised the idea. The collaboration started with ‘hey, can you do this?’”

By 2011, funding from the Laboratory Directed Research and Development (LDRD) program enabled the team to start work on creating Nyx. Berkeley Lab’s LDRD program is designed to incubate emerging lab projects in their early stages, providing a bridge from concept to full-scale Department of Energy (DOE) funded projects.

Among the initial members of the Nyx team was Computational Cosmology Center research scientist Zarija Lukic, who took charge of creating the physics simulation elements of Nyx. Among other things, Lukic would help to author the 2013 paper that introduced Nyx to the scientific community and lead the code in the direction of intergalactic medium and Lyman-alpha forest studies. Shortly after, Nyx transitioned from the LDRD program to the DOE’s Scientific Discovery through Advanced Computing (SciDAC) program, which links scientific application research efforts with high-performance computing (HPC) technology.

Nyx began to produce immediate results, and one of the code’s biggest advantages became clear: scalability. From its earliest days, Nyx was designed to take advantage of all types and scales of hardware on its host machine, and Nyx simulations have proved crucial in allowing cosmologists to produce models of the universe at unprecedented scale. Over time, this has allowed researchers to make the most of the supercomputers hosting it – from CPU-only systems to heterogeneous systems containing both CPUs and GPUs.

“The biggest thing is our ability to scale,” said Nugent. “Because we can take advantage of the entire machine, CPU or GPU, we can occupy a very large memory footprint and do the largest of these types of simulations in terms of size of the universe at the highest resolutions.”

Exploring the Lyman-alpha Forest

This Nyx simulation, which is part of the artwork that will be displayed on Berkeley Lab’s newest supercomputer, reveals the cosmic web of dark matter and gas underpinning our visible universe.

One of the earliest large-scale applications for Nyx involved studies of the Lyman-alpha forest, which remains the main application area for the code. The forest is a series of absorption lines created as light from distant quasars far outside the Milky Way travels billions of light years toward us, passing through the gas between galaxies. By examining the forest’s spectrum and the distortions imprinted as that light crosses those vast distances to Earth, cosmologists can map the structure of the intergalactic gas, gaining a better understanding of what the universe is made of and what it looked like after the Big Bang. Perhaps most interestingly, those spectral distortions, as they will be observed with the Dark Energy Spectroscopic Instrument (DESI) and high-resolution spectrographs like the one mounted on the Keck telescope, can provide insight into the nature of dark matter and neutrinos.
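To illustrate the underlying idea (this is a toy sketch, not Nyx’s actual method), absorption by intergalactic gas can be mapped onto a transmitted-flux spectrum with a few lines of Python; the density field, coefficients, and exponent below are all hypothetical:

```python
import numpy as np

# Toy sketch (not Nyx itself): transmitted quasar flux along one sightline.
# Each segment of intergalactic gas absorbs light in proportion to its
# optical depth tau, and the observed (transmitted) flux is F = exp(-tau).

rng = np.random.default_rng(0)

# Hypothetical gas overdensity field along the line of sight (arbitrary units).
overdensity = np.abs(1.0 + 0.5 * rng.standard_normal(1000))

# Assume a simple power-law mapping from density to optical depth
# (a common approximation; the coefficients here are illustrative only).
tau = 0.3 * overdensity**1.6

flux = np.exp(-tau)  # transmitted flux: 1 = no absorption, 0 = fully absorbed

# Denser gas produces deeper absorption features in the spectrum.
print(f"mean transmitted flux: {flux.mean():.3f}")
```

The troughs in such a flux field are the “trees” of the forest: each dip traces a patch of denser intergalactic gas along the sightline.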

But simulations of the forest pose an immense computational challenge: they require recreating massive sectors of space, in some cases up to 500 million light years across, while also calculating the behavior of small density fluctuations as light moves through the intergalactic medium.

Enter Nyx. Adaptive mesh refinement (AMR) lets the code determine where in the simulated universe detailed calculations are needed and where more general, coarse results are accurate enough. This reduces the number of calculations, the memory required, and the compute time for large, complex simulations. By building on components of AMReX, the code is able to scale up to model the vast volumes probed by the Lyman-alpha forest.
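A minimal sketch of the AMR idea in Python, assuming a hypothetical one-dimensional density field (illustrative only, not Nyx or AMReX code):

```python
import numpy as np

# Toy sketch of adaptive mesh refinement (AMR): start from a coarse 1-D grid
# and flag for refinement only the cells where the field varies sharply, so
# detailed computation is spent where it is actually needed.

def flag_for_refinement(values, threshold):
    """Flag coarse cells whose jump to a neighbor exceeds the threshold."""
    jumps = np.abs(np.diff(values))
    flags = np.zeros(values.size, dtype=bool)
    flags[:-1] |= jumps > threshold  # left cell of each large jump
    flags[1:] |= jumps > threshold   # right cell of each large jump
    return flags

# Hypothetical density field: smooth background plus one sharp feature.
x = np.linspace(0.0, 1.0, 64)
density = 1.0 + 5.0 * np.exp(-((x - 0.5) ** 2) / 0.001)

flags = flag_for_refinement(density, threshold=0.5)
print(f"refining {flags.sum()} of {flags.size} coarse cells")
```

Only the handful of cells around the sharp feature get flagged; a real AMR code then overlays finer grids on those regions (recursively, across many levels) while the smooth background stays coarse.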

“In 2014 and 2015 we were running simulations that are still today’s state of the art,” Lukic said.

Another driver of Nyx’s popularity is that it is open source, which has been key to creating a larger community for the code outside of Berkeley Lab. Today, research teams from all over the world are finding new applications for Nyx, employing the code for smaller-scale simulations and experiments. In some cases Nyx is used as is; in others, researchers modify the source code to fit their own needs.

“People have used it to do simulations of single galaxies,” Nugent said. “People have used it to do simulations of much earlier in the universe and later in the universe.”

Ready for the Next Generation

As the scientific community prepares to move into the era of exascale computing, Nyx shows no signs of letting up. Ongoing development of the code is supported by the DOE’s Exascale Computing Project, and Nyx is slated to play a key supporting role in the highly anticipated DESI experiment, performing simulations to back up DESI’s observations of the role dark energy plays in driving the expansion of the universe.

Even with the next-generation supercomputers that will be used for DESI, the Nyx code’s ability to make the most of the hardware will be crucial for performing accurate simulations to verify results. Postdoctoral researcher Jean Sexton has spent much of the past year making sure Nyx will remain on the cutting edge and ready to tackle the next round of problems.

“If you do not have good efficiency, scalability and physical accuracy you will not be able to produce simulations needed to get an accurate representation of the data,” said Lukic. “You are not going to be able to extract the scientific conclusions from future sky surveys.”

Nyx is also slated to, quite literally, feature front and center on Berkeley Lab’s newest supercomputer, Perlmutter, which will be located at the National Energy Research Scientific Computing Center (NERSC). When it is unveiled this year, Perlmutter will feature artwork generated by a Nyx simulation diagramming the filaments that connect large clusters of galaxies. The Nyx code will also likely be prominent inside Perlmutter and other next-generation supercomputers, including those at the exascale.

When all is said and done, Nyx will go down as a shining example of how Berkeley Lab is able to develop a project from its infancy, through the LDRD program, into DOE funding, and finally release it to the larger scientific community. Over the span of 10 years, the Nyx code evolved from a conversation between Lab staffers to a mainstay in the field of cosmology and a key component of the next generation of high-performance computing systems and research into how the universe functions. For Almgren, who was there from the beginning, Nyx underlines one of the Lab’s greatest strengths.

“I think that is one of the things the lab does well: it allows people to make collaborations that advance science much more efficiently,” she said.

NERSC is a U.S. Department of Energy Office of Science user facility.


Source: Shaun Nichols, Lawrence Berkeley National Laboratory
