Star Maker Machinery

By Tim Palucka

September 1, 2006

How are stars made? No, not pop stars and movie stars, but the other kind: the ones that bring light to cosmic darkness. Like people, these brilliant points of light have a birth-to-death cycle, and their birth is a tempestuous, uncertain process.

A star comes into being when a region of cold gas in a galaxy collapses — like a basketball contracting to the size of a dot — until the core gets so dense that the atoms begin to fuse. Astrophysicists mark the onset of nuclear fusion — when the thermonuclear furnace at a star's core starts to heat up — as the moment of star birth.

Not all galactic regions of collapsing gas, however, result in newborn stars. One scenario sees the birth process as a kind of competition between energies. As swirling, turbulent gas collapses, it builds up tremendous pressure, and once the gas becomes dense enough, that pressure sends a sound wave outward. If this outward pressure wave travels fast enough, it can stop the collapse. If, on the other hand, the collapse outruns the pressure wave's ability to slow it, a condition known as Jeans instability, nuclear fusion starts and voilà: a new star.

Underlying this simplified scenario, however, is a complex stew of physical processes involving many factors: the temperature of the gas, its chemical composition, its magnetization, and the rate at which the collapsing gas cools. Scientists, reading galaxy behavior in different ways, have proposed numerous theories of star birth and have long sought a clear explanation.

Why do we care how stars form? Simple, says Mordecai-Mark Mac Low, an astrophysicist at the American Museum of Natural History in New York. Without star birth, there wouldn't be life. “Ultimately our research tells us why we are here,” he says. “In our models of star formation, we're basically trying to figure out how the galaxy we live in and all the galaxies we see around us behave. And how does that behavior contribute to our own presence?”

In recent work, Mac Low and colleagues Yuexing Li of Columbia University (now at the Harvard-Smithsonian Center for Astrophysics) and Ralf S. Klessen of the Astrophysical Institute of Potsdam (now at the University of Heidelberg) used LeMieux, PSC's terascale system, to simulate billions of years of galactic evolution. Their results cut through the galactic fog and reduce a complex story to one key element.

“Gravitational instability,” says Mac Low, “appears to be the dominant mechanism controlling the formation of stars.”

Sink Particles

To get to this conclusion, Mac Low's team modeled the matter within galaxies as particles, using a method called “smoothed particle hydrodynamics.” They implemented this approach with simulation software, called GADGET, developed by Volker Springel at the Max Planck Institute for Astrophysics in Garching, Germany. Instead of overlaying a fixed grid and monitoring changes within each cube of the grid — a common approach to modeling movement of objects in space — GADGET scatters particles across the galaxy, with each particle assigned an initial density, pressure, and velocity. The simulations track these particles — their changes in position, density, pressure and velocity — through billions of simulation years.
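To see what the method amounts to, consider the density estimate at the heart of smoothed particle hydrodynamics: each particle's density is a kernel-weighted sum over the masses of nearby particles. The sketch below is a minimal Python illustration with a simple Gaussian kernel, chosen here for brevity; GADGET itself uses a different kernel, adaptive smoothing lengths, and fast neighbor searches.

import numpy as np

def gaussian_kernel(r, h):
    # A simple 3-D Gaussian smoothing kernel; production SPH codes such
    # as GADGET use compact-support spline kernels instead.
    return np.exp(-(r / h) ** 2) / (h ** 3 * np.pi ** 1.5)

def sph_density(positions, masses, h):
    # SPH density estimate: rho_i = sum_j m_j * W(|x_i - x_j|, h).
    rho = np.zeros(len(positions))
    for i in range(len(positions)):
        r = np.linalg.norm(positions - positions[i], axis=1)
        rho[i] = np.sum(masses * gaussian_kernel(r, h))
    return rho

# Toy usage: 1,000 equal-mass particles in a clump; the estimated
# density peaks where the particles crowd together.
rng = np.random.default_rng(0)
pos = rng.normal(size=(1000, 3))
rho = sph_density(pos, np.full(1000, 1.0 / 1000), h=0.3)
print(rho.max(), rho.min())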

“Instead of a regular grid where the resolution is fixed everywhere,” says Mac Low, “you have an unstructured grid, where the resolution follows the gas flow. This is good if you're looking at problems of collapse, because you put the most resolution in the densest regions.”

Along with this advantage, however, the particle method imposes a challenging computational problem. When collapse starts, the particles crowd closer and closer, in tighter and tighter orbits about each other. Maintaining high resolution in such a region requires more and more computation to advance the same amount of physical time, which eventually leads to an impasse: advancing a year of galactic time can require a year of computation.
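Rough numbers show why. In SPH the smoothing length shrinks with density roughly as h ∝ ρ^(-1/3), and a Courant-type timestep, proportional to h divided by the sound speed, shrinks along with it. A toy illustration with invented numbers:

import numpy as np

# Toy illustration of the timestep squeeze during collapse. The SPH
# smoothing length scales roughly as h ~ rho**(-1/3), so a Courant-type
# timestep dt ~ h / c_s shrinks as the gas gets denser. Numbers are
# invented purely to show the trend.
c_s = 1.0  # sound speed, arbitrary units
for rho in 10.0 ** np.arange(0, 12, 3):
    h = rho ** (-1.0 / 3.0)  # smoothing length at this density
    print(f"density {rho:9.0e} -> timestep {h / c_s:.0e}")
# Every thousandfold rise in density cuts the timestep tenfold, so the
# same stretch of galactic time needs ten times as many steps.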

The solution is to let collapse proceed until it's certain the collapsing region will achieve critical stellar density, then replace the thousands of gas particles in this region with a single absorbing particle of the same mass and velocity — called a “sink” particle because it acts as a sink, as opposed to a source, of mass. In that region, LeMieux now has to track only one particle, instead of thousands. By measuring the mass of a sink particle, scientists can quantify how much gas has collapsed to form a star cluster. “Effectively it becomes,” says Mac Low, “a star particle.”
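In code, the replacement is mostly conservation bookkeeping: the sink inherits the total mass and the center-of-mass position and velocity of the gas particles it absorbs. Here is a minimal sketch of that bookkeeping, with a hypothetical helper name; it is not GADGET's actual sink-particle routine, which also handles later accretion onto the sink.

import numpy as np

def form_sink(positions, velocities, masses, collapsing):
    # Replace the gas particles flagged as irreversibly collapsing
    # (index array `collapsing`) with one sink particle carrying their
    # total mass and center-of-mass position and velocity.
    m = masses[collapsing]
    total = m.sum()
    x_cm = (m[:, None] * positions[collapsing]).sum(axis=0) / total
    v_cm = (m[:, None] * velocities[collapsing]).sum(axis=0) / total
    keep = np.setdiff1d(np.arange(len(masses)), collapsing)
    return (np.vstack([positions[keep], x_cm]),
            np.vstack([velocities[keep], v_cm]),
            np.append(masses[keep], total))

By construction the sink's mass equals the mass of the gas that collapsed, which is exactly the number the scientists read off to quantify how much material went into a star cluster.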

Mac Low first worked with sink particles on a cluster computer at the American Museum of Natural History before he approached PSC for time on LeMieux. “You don't want to get on a high-performance machine,” he says, “until you know where you're going and what you want to accomplish. Once we started our million particle runs, to do it right we needed something like LeMieux.”

Their model galaxies comprise a disk of stars and uniform-temperature (isothermal) gas surrounded by a spherical “halo” of dark matter; picture a globe filled with dark matter and a swirling disk of gas and stars at the equator. Through a series of about 20 single-galaxy simulations over nearly two years, first on the AMNH cluster and then on LeMieux, they varied the number of gas particles from one million to six million, along with other parameters: the gas fraction and the size and rotation rate of the galaxies. Varying these parameters effectively varied the strength of gravitational instability, and the team observed the effect on star birth.
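The campaign amounts to a sweep over a small grid of galaxy parameters. Only the particle-count range comes from the article; the other values below are placeholders, but they convey the shape of the experiment:

from itertools import product

# Hypothetical parameter grid for the single-galaxy runs. The particle
# counts come from the article; gas fractions and rotation rates are
# placeholder values for illustration only.
n_particles = [1_000_000, 6_000_000]
gas_fraction = [0.2, 0.5, 0.9]      # placeholder fractions
rotation_rate = [0.5, 1.0, 2.0]     # placeholder, relative units

runs = list(product(n_particles, gas_fraction, rotation_rate))
print(len(runs), "runs")  # 18 here, on the order of the ~20 reported
for n, f_gas, omega in runs[:3]:
    print(f"simulate(n_gas={n}, gas_fraction={f_gas}, rotation={omega})")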

Gravity Rules

The results show that star particles form more readily in regions that are more gravitationally unstable. In disk galaxies, gravitational instability is known as Toomre (pronounced Toom-ray) instability, for Alar Toomre, who first described it in 1964.

The Toomre gravitational instability parameter quantifies how sensitive a region of gas is to changes in local conditions. If additional gas is added to the region, or the strength of rotational shear changes, how likely is it that this will initiate collapse? Regions with an instability parameter above 1.0 are relatively stable; regions below 1.0 are prone to collapse.
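For a gas disk the parameter has a standard textbook form, Q = c_s κ / (π G Σ): sound speed times epicyclic frequency, divided by π times the gravitational constant times the gas surface density. A small sketch of that formula (the standard definition; the exact form the team used may differ in detail):

import numpy as np

G = 6.674e-11  # Newton's gravitational constant, SI units

def toomre_q(c_s, kappa, sigma):
    # Toomre stability parameter for a gas disk:
    #   Q = c_s * kappa / (pi * G * sigma)
    # c_s   : sound speed in the gas [m/s]
    # kappa : epicyclic frequency of local orbits [1/s]
    # sigma : gas surface density [kg/m^2]
    # Q above ~1: pressure plus shear can hold the region up.
    # Q below ~1: gravity wins and collapse can proceed.
    return c_s * kappa / (np.pi * G * sigma)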

Two factors bear on Toomre instability: pressure support and shear support. Pressure support involves a sound wave traveling outward through a collapsing region, as described earlier. If the gas collapses faster than the sound wave can stabilize it, the region becomes pressure unstable, meeting the Jeans instability criterion.
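This race between gravity and sound has a classic quantitative form, the Jeans length: the size above which a region collapses before a sound wave can cross it and re-pressurize it. A sketch using the textbook expression:

import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def jeans_length(c_s, rho):
    # Textbook Jeans length: lambda_J = c_s * sqrt(pi / (G * rho)).
    # Regions larger than lambda_J collapse faster than an outward
    # sound wave of speed c_s can stabilize them.
    return c_s * np.sqrt(np.pi / (G * rho))

# Example: cold molecular gas with c_s ~ 200 m/s and rho ~ 3e-18 kg/m^3
# yields a Jeans length of roughly 2.5e16 m, a few light-years
# (1 light-year ~ 9.5e15 m).
print(f"{jeans_length(200.0, 3e-18):.1e} m")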

The other factor is shear support, which takes into account the “differential rotation” of material orbiting in a disk. Particles close to the center of the disk revolve faster than particles at the outer edges, just as in our Solar System where planets distant from the Sun travel more slowly along their orbit than planets close to the Sun. Because of differential rotation, gas on one side of a collapsing region can shear away from gas on the other side before collapse occurs, preventing star formation. For Toomre instability, the gas region must collapse fast enough so that (1) sound waves can't provide pressure support, and (2) shear doesn't tear the region apart before it collapses.
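Put together, the two conditions are a race of timescales: the region's gravitational free-fall time must beat both the sound-crossing time (pressure support) and the epicyclic time, roughly 1/κ (shear support). Below is a cartoon version using textbook timescales, not the precise criterion from the simulations:

import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def free_fall_time(rho):
    # Textbook gravitational free-fall time:
    #   t_ff = sqrt(3 * pi / (32 * G * rho))
    return np.sqrt(3.0 * np.pi / (32.0 * G * rho))

def collapse_wins(size, c_s, kappa, rho):
    # Cartoon of the two-front race described in the text: collapse must
    # outrun both the sound-crossing time (pressure support) and the
    # epicyclic time ~1/kappa (shear support). Illustrative only.
    t_ff = free_fall_time(rho)
    return t_ff < size / c_s and t_ff < 1.0 / kappa

# Example: the cloud from the Jeans sketch above, in a region with an
# epicyclic frequency of ~1e-15 per second (an orbital period of order
# a hundred million years), collapses on both counts.
print(collapse_wins(2.5e16, 200.0, 1e-15, 3e-18))  # True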

Mac Low's simulations with LeMieux show that sink particles form more readily in regions where the Toomre gravitational instability parameter is smaller. This is true regardless of changes in other variables — galaxy size, quantity of gas particles, rotation rate and gas fraction of the galactic disk. The simulations identify an exponential relationship between the rate of star birth and the Toomre instability parameter. Therefore, Mac Low concludes, Toomre instability alone is sufficient to explain star formation.

This conclusion departs from a number of previous theories. Some theorists believe that cooling is key — if you cool the gas in a galactic disk to a low enough temperature, star formation will inevitably occur. Others argue that magnetic support is crucial, with star formation occurring only in regions sufficiently neutral to decouple from the magnetic field.

Mac Low's results, however, strongly suggest that these ideas are due for rethinking. “I'm arguing,” he says, “that cooling is incidental. The first thing you do is start the collapse, and if you raise the densities high enough, the cooling will happen very quickly, more or less regardless of details. Similarly, if a massive enough region collapses, magnetic support can simply be overwhelmed.”

To follow up on these findings, Mac Low and his colleagues plan to simulate galaxies at finer and finer resolution until they reach the resolution of an individual star. Currently, he and M. K. Ryan Joung of Columbia University are using 1,000 LeMieux processors to simulate a small fraction of a galaxy, testing whether supernovas act as galactic “stirrers” that stir and heat gas, impeding collapse.

Better knowledge of how a star is born, he says, helps us to comprehend “the grand history that ends up producing a kind of average star two-thirds of the way out in a larger-than-ordinary galaxy, a star that happens to have planets around it — one of which we live on.”

For more information, including graphics: http://www.psc.edu/science/2006/starmaker/starmaker.php
