A Dark Matter for Astrophysics Research

By Nicole Hemsoth

May 31, 2011

Back in 2008, the Sloan Digital Sky Survey (SDSS) came to an end, leaving behind hundreds of terabytes of publicly available data that have since been used in a range of research projects. Based on these data, researchers have discovered distant quasars powered by supermassive black holes in the early universe, uncovered collections of sub-stellar objects, and mapped extended mass distributions around galaxies via weak gravitational lensing.

Among the diverse groups of scientists tackling problems that can now be understood using the SDSS data is a team led by Dr. Risa Wechsler from Stanford University’s Department of Physics and the SLAC National Accelerator Laboratory.

Wechsler is interested in the process of galaxy formation, the development of the universe’s large-scale structure, and what these can tell us about the fundamental physics of the universe. Naturally, dark energy and dark matter enter the equation when one is considering galaxy formation, and there are few better keys to probing these concepts than the data generated by the SDSS.

Just as the Sloan Digital Sky Survey presented new data storage and computational challenges, so too do the efforts to extract meaningful discoveries from it. Teasing apart the information needed for simulations and analysis generates its own string of terabytes on top of the initial SDSS data. This creates a dark matter of its own for computer scientists, who must keep pace with ever-expanding data volumes that outstrip the capability of the systems designed to handle them.

Wechsler’s team used the survey’s astronomical data to compare the luminosity of millions of galaxies to that of our own Milky Way. All told, the survey imaged nearly one-quarter of the sky, creating its own data challenges. The findings revealed that galaxies hosting two nearby satellites like the Large and Small Magellanic Clouds are rare; only about four percent of galaxies are similar to the Milky Way in this respect.

To arrive at their conclusions, the group downloaded all of the publicly available Sloan data and began looking for galaxies like the Milky Way, combing through about a million galaxies with spectroscopy to select a mere 20,000 whose luminosity is similar to that of our own galaxy. With these galaxies identified, they undertook the task of mining those images for evidence of fainter galaxies nearby via a random review method. As Wechsler noted, running on the Pleiades supercomputer at NASA Ames, it took roughly 6.5 million CPU hours to simulate a region of the universe with 8 billion particles, making it one of the largest simulations ever performed in terms of particle count. She said that moving to smaller box sizes takes far more CPU time per particle because the universe is more clustered on smaller scales.
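As a rough illustration of that selection step, consider the following sketch, which applies a simple luminosity cut to a mock catalog. The Milky Way magnitude, tolerance, and catalog layout here are illustrative assumptions, not the team's actual pipeline.

```python
# A minimal sketch of a luminosity cut for Milky Way analogs, assuming a
# hypothetical catalog of absolute magnitudes; all values are illustrative.
import numpy as np

MILKY_WAY_ABS_MAG = -21.2   # assumed absolute magnitude for the Milky Way
TOLERANCE = 0.1             # allowed deviation in magnitudes (illustrative)

def select_milky_way_analogs(abs_mag):
    """Return indices of galaxies whose luminosity is close to the Milky Way's."""
    mask = np.abs(abs_mag - MILKY_WAY_ABS_MAG) < TOLERANCE
    return np.flatnonzero(mask)

# Mock spectroscopic sample: one million galaxies with scattered magnitudes.
rng = np.random.default_rng(0)
abs_mag = rng.normal(-20.5, 1.0, size=1_000_000)
analogs = select_milky_way_analogs(abs_mag)
print(f"{analogs.size} candidate Milky Way analogs")
```

In practice such a cut would run against the real SDSS spectroscopic catalog and be combined with additional quality and redshift cuts.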

Wechsler described the two distinct pipelines required for this type of research. First, there’s the simulation, in which researchers spend time looking for galaxies in a model universe. Wechsler told us that this simulation was done on the Pleiades machine at Ames across 10,000 CPUs. From there, the team performed an analysis of this simulation, which shows the evolution of structure formation in that piece of the universe across its entire history of almost 14 billion years, a process that involves examining dark matter halo histories across cosmic time. As she noted, the team was “looking for gravitationally bound clumps in that dark matter distribution; you have a distribution of matter at a given time and you want to find the peaks in that density distribution since that is where we expect galaxies to form. We were looking for those types of peaks across the 200 snapshots we took to summarize that entire 14 billion year period.”
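A toy version of the peak finding Wechsler describes might deposit particles onto a grid, normalize by the mean density, and flag cells that are local maxima above an overdensity threshold. Production halo finders are far more sophisticated (friends-of-friends linking, removal of unbound particles, and so on), so the grid size, threshold, and mock data below are all illustrative assumptions.

```python
# A toy density-peak finder: nearest-grid-point density estimate plus a
# local-maximum test. All parameters are illustrative, not a real halo finder.
import numpy as np
from scipy.ndimage import maximum_filter

def density_peaks(positions, box_size, n_cells=64, overdensity=20.0):
    """Return grid indices of local density maxima above the threshold."""
    # Nearest-grid-point assignment of particles to a periodic 3D grid.
    idx = np.floor(positions / box_size * n_cells).astype(int) % n_cells
    grid = np.zeros((n_cells,) * 3)
    np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    grid /= grid.mean()  # express density in units of the mean
    # A cell is a peak if it equals the maximum of its 3x3x3 neighborhood
    # (with periodic wrapping) and exceeds the overdensity threshold.
    local_max = grid == maximum_filter(grid, size=3, mode="wrap")
    return np.argwhere(local_max & (grid > overdensity))

# Mock data: a uniform background plus ten injected dense clumps ("halos").
rng = np.random.default_rng(1)
background = rng.random((90_000, 3)) * 100.0
centers = rng.random((10, 3)) * 100.0
clumps = (centers[:, None, :] + rng.normal(0.0, 0.5, (10, 1_000, 3))).reshape(-1, 3) % 100.0
positions = np.vstack([background, clumps])
print(f"found {len(density_peaks(positions, box_size=100.0))} density peaks")
```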

The team needed to understand the evolutionary processes that occurred across the billions of years captured in those 200 distinct moments. This meant tracing the particles from one snapshot to the next in their clumps, which are called dark matter halos. Once the team found the halos, which, again, are associated with galaxy formation, they performed a statistical analysis that sought out anything that looked like our own Milky Way. Wechsler told us that “the volume of the simulation was comparable to the volume of the data that we were looking at. Out of the 8 million or so total clumps in our simulation we found our set of 20,000 that looked like possibilities to compare to the Milky Way. By looking for fainter things around them — and remember there are a lot more faint things than bright ones — we were looking for many, many possibilities at one time.”
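That snapshot-to-snapshot tracing can be sketched as a particle-ID matching problem: a halo's descendant is the halo in the next snapshot that inherits the largest share of its particles. The data structures and the 50 percent threshold below are illustrative assumptions, not the team's actual merger-tree code.

```python
# A hedged sketch of halo linking between snapshots via shared particle IDs.
from collections import Counter

def link_halos(halos_now, halos_next, min_shared_fraction=0.5):
    """Map each halo ID at snapshot t to its likeliest descendant at t+1.

    halos_now / halos_next: dicts mapping halo ID -> set of particle IDs.
    """
    # Invert the later snapshot: which halo does each particle end up in?
    particle_to_next = {}
    for next_id, particles in halos_next.items():
        for p in particles:
            particle_to_next[p] = next_id

    descendants = {}
    for halo_id, particles in halos_now.items():
        # Vote by the destination halo of each surviving particle.
        votes = Counter(particle_to_next[p] for p in particles if p in particle_to_next)
        if votes:
            best, count = votes.most_common(1)[0]
            if count / len(particles) >= min_shared_fraction:
                descendants[halo_id] = best
    return descendants

# Example: each halo passes most of its particles to one descendant.
now = {1: {1, 2, 3, 4}, 2: {5, 6, 7}}
nxt = {10: {1, 2, 3, 8}, 11: {5, 6, 9}}
print(link_halos(now, nxt))  # {1: 10, 2: 11}
```

Repeating this linking across all 200 snapshots yields the halo histories, or merger trees, from which Milky Way-like systems can then be selected statistically.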

The computational challenges are abundant in a project like this, Wechsler said. Of all the bottlenecks, storage has been the most persistent, although she noted that there are, as of now, no real solutions to these problems.

Aside from the bottlenecks caused by massive storage requirements, Wechsler said the other computational challenge is that even though this project represents one of the highest-resolution simulations at such a volume, the team still needs more computing power. Larger simulations can be run at lower resolution, she said, but capturing the full dynamic range of the calculation is critical. This simulation breaks new ground in its ability to resolve Magellanic Cloud-sized objects over a large volume, yet that volume is still smaller than the one the observations can probe. Scaling this kind of calculation up to the next level is therefore a major challenge, especially as Wechsler embarks on new projects.

“Our data challenges are the same as those in many other fields that are tackling multiscale problems. We have a wide dynamic range of statistics to deal with, but what did enable us to do this simulation is being able to resolve many small objects in a large volume. For this and other research projects, having a wide dynamic range of scales is crucial, so some of our lessons can certainly be carried over to other fields.”

As Alex Szalay from the Johns Hopkins University Department of Physics and Astronomy noted, this is a prime example of the kinds of big data problems that researchers in astrophysics and other fields are facing. They are, as he told us, “forced to make tradeoffs when they enter the extreme scale” and need to find ways to manage both storage and CPU resources so that these tradeoffs have the least possible impact on the overall time to solution. Dr. Szalay addressed some of the specific challenges involved in Wechsler’s project in a recent presentation called “Extreme Database-Centric Scientific Computing,” in which he examines the new scalable architectures required for data-intensive scientific applications, treating the database as the starting point for exploring new solutions.

For the Dark Energy Survey, the team will take images of about one-eighth of the sky, reaching back seven billion years. The Large Synoptic Survey Telescope, which is currently being built, will image half the sky every three days and will push to even fainter objects, detecting the brightest objects back to a few billion years after the Big Bang. One goal of this work is to map where everything is in order to figure out what the universe is made of. Galaxy surveys help with this research because simulations can connect the observed distribution of galaxies to the underlying physics of galactic evolution.
