3D Simulations Raise Bar for Astrophysics

By Tiffany Trader

July 3, 2014

For anyone outside the HPC and science communities who questions the need for ever-more-powerful supercomputers, the breakthroughs of the petascale age speak for themselves. Astrophysics research out of Caltech is the latest example. Thanks to leadership-class systems like Stampede and Blue Waters, and their experienced support staffs, Caltech researchers were able to perform fully 3D simulations of supernova explosions.

The scientists are studying a somewhat rare phenomenon called extreme core-collapse supernovae. While these events make up only one percent of all observed supernovae, they are “extreme” in the amount of energy they emit into space.

Until recently, simulations in this field were mostly confined to two dimensions (2D), and, owing to computational limitations, codes could not incorporate all of the relevant physics; general relativistic effects, for example, were intentionally excluded. This study marks the first time that scientists have run fully general relativistic three-dimensional (3D) simulations.

Because of the added realism, the research team, led by Philipp Mösta, a postdoctoral scholar at Caltech, and Christian D. Ott, a professor of astrophysics at Caltech, is discovering that previously held theories about how these explosions work may not be accurate.

The heart of the new finding is that the explosion is a highly dynamic process.

“What we’ve shown is that the jets that appear stable in 2D are actually unstable in 3D,” explained Mösta in an article by Liz Murray at the XSEDE website. “They twist, rotate and become unstable due to a phenomenon that is called the magneto-hydrodynamic kink instability. This instability of the magnetic field itself is the same that is also seen in fusion reactors that are using magnetic fields to confine the plasma.”
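
For readers unfamiliar with the phenomenon, the classical Kruskal-Shafranov criterion from plasma physics gives a feel for when the kink (m = 1) mode sets in; this is a textbook screw-pinch result, offered here only as an illustration and not a quantity from the Caltech study:

```latex
% Illustrative only: the textbook Kruskal-Shafranov kink criterion for
% a magnetized cylindrical column of radius a and length L, not a
% formula quoted in the study. The column is kink-unstable when the
% field lines twist around the axis more than once over its length:
\frac{L \, B_\phi(a)}{a \, B_z} > 2\pi
% B_z: axial (poloidal) field; B_phi(a): toroidal field at the surface.
```

Crucially, the kink is a nonaxisymmetric (m = 1) mode, so a 2D simulation that imposes axisymmetry cannot develop it by construction, which is why jets that look stable in 2D can twist apart in full 3D.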

Supercomputing has been instrumental to the project since it started in 2013 with an Extreme Science and Engineering Discovery Environment (XSEDE) allocation on the Stampede supercomputer, installed at the Texas Advanced Computing Center (TACC) at The University of Texas at Austin.

The early work centered on code optimization: tweaking the code to take advantage of modern computing architectures. This crucial step enabled the team to run larger simulations without burning through their allotted CPU hours too quickly.
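
The article does not detail the team's specific optimizations, but a minimal sketch in C of the kind of loop-level tuning involved, threading plus vectorization of a hypothetical finite-difference stencil kernel, might look like this:

```c
/* Illustrative sketch only -- a toy 1D stencil update, not the team's
 * actual simulation code. It shows what "taking advantage of modern
 * architectures" often means in practice: spreading work across cores
 * with OpenMP and exposing the inner loop to the compiler's SIMD
 * vectorizer. */
#include <stddef.h>

void stencil_update(const double *restrict in, double *restrict out,
                    size_t n)
{
    if (n < 3)
        return;
    /* Threads split the grid; 'simd' asks the compiler to vectorize
     * each thread's chunk. 'restrict' promises no aliasing, which is
     * often what unlocks vectorization in production codes. */
    #pragma omp parallel for simd
    for (size_t i = 1; i < n - 1; i++)
        out[i] = (in[i - 1] + in[i] + in[i + 1]) / 3.0;
}
```

Compiled with, say, gcc -O3 -fopenmp, a kernel like this scales across the cores of a Stampede-class node; in a real code the same pattern must be applied across many such loops to stretch an allocation of CPU hours.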

“We were able to perform the first fully general relativistic 3D simulations without any symmetries and the difference in comparison to 2D was drastic,” stated Ott. “We now know if we want to predict what the signature of these extreme supernova explosions might look like, we need to do it in full 3D.”

After performing the initial general-relativistic magnetohydrodynamics (GRMHD) simulations on Stampede, the team hit a wall when trying to computationally reproduce the shockwave that extends out from the core of a massive star as it collapses to a proto-neutron star. To simulate this part of the process in 3D, they moved over to the Blue Waters supercomputer, a larger resource managed by the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign.
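
Schematically, a GRMHD code evolves the equations of magnetized fluid dynamics coupled to Einstein's field equations; the flat-space induction equation below is shown only to indicate the flavor of the system being solved, not the study's actual formulation:

```latex
% Schematic only (textbook notation, not the study's formulation):
% ideal-MHD induction equation coupled to Einstein's equations.
\partial_t \mathbf{B} = \nabla \times \left( \mathbf{v} \times \mathbf{B} \right),
\qquad
G_{\mu\nu} = 8\pi \, T_{\mu\nu}
% B: magnetic field; v: fluid velocity; G_{mu nu}: Einstein tensor;
% T_{mu nu}: stress-energy of the magnetized fluid (G = c = 1 units).
```

Evolving that coupled system in three dimensions with no symmetry assumptions is part of what pushed the problem beyond Stampede and onto the larger machine.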

The short video below shows the time evolution of the shock wave. The 2D simulation is depicted on the left, while the corresponding meridional slice from a 3D simulation is shown on the right.

The scientists credit both supercomputers with enabling their foray into 3D simulation. After further refining their code and running additional simulations, their goal is to create full 3D kinetic models of these extreme supernova explosions. The project involves connecting the simulations to actual observations collected from one of NASA's satellite telescopes.

Aside from being a valuable breakthrough for human understanding of supernovae, the emergence of 3D simulation has broader implications, according to Mösta. “It will probably indicate to other groups who, so far, have focused on performing simulations with symmetries imposed, that they will have to move to full 3D simulations as well, which will ultimately strengthen our community,” he stated.
