A Dark Matter for Astrophysics Research

By Nicole Hemsoth

May 31, 2011

Back in 2008, the Sloan Digital Sky Survey (SDSS) came to an end, leaving behind hundreds of terabytes of publicly available data that has since been used in a range of research projects. Based on this data, researchers have been able to discover distant quasars powered by supermassive black holes in the early universe, uncover collections of sub-stellar objects, and map extended mass distributions around galaxies via weak gravitational lensing.

Among the diverse groups of scientists tackling problems that can now be understood using the SDSS data is a team led by Dr. Risa Wechsler from Stanford University’s Department of Physics and the SLAC National Accelerator Laboratory.

Wechsler is interested in the process of galaxy formation, the development of structure in the universe, and what these can tell us about the fundamental physics of the universe. Naturally, dark energy and dark matter enter the equation when one considers galaxy formation, and there are few better keys to probing these concepts than the data generated by the SDSS.

Just as the Sloan Digital Sky Survey presented several new data storage and computational challenges, so too do the efforts to extract meaningful discoveries. Teasing apart important information for simulations and analysis generates its own string of terabytes on top of the initial SDSS data. This creates a dark matter of its own for computer scientists as they struggle to keep pace with ever-expanding volumes that are outpacing the capability of the systems designed to handle them.

Wechsler’s team used the project’s astronomical data to compare the luminosity of millions of galaxies to that of our own Milky Way. All told, the survey imaged nearly one-quarter of the sky, creating its own data challenges. The findings revealed that galaxies with two close, bright satellites like the Large and Small Magellanic Clouds are rare: only about four percent of galaxies resemble the Milky Way in this respect.

To arrive at their conclusions, the group downloaded all of the publicly available Sloan data and began looking for satellite galaxies around Milky Way-like hosts, combing through about a million galaxies with spectroscopy to select a mere 20,000 with luminosity similar to that of our own galaxy. With these galaxies identified, they undertook the task of mining the images for evidence of fainter nearby galaxies via a random review method. As Wechsler noted, it took roughly 6.5 million CPU hours on the Pleiades supercomputer at NASA Ames to run a simulation of a region of the universe with 8 billion particles, making it one of the largest simulations ever done in terms of particle count. She said that moving to smaller box sizes takes much more CPU time per particle because the universe is more clustered on smaller scales.
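
To make the catalog-selection step concrete, here is a minimal Python sketch of how a luminosity cut like the one described above might be applied to a spectroscopic catalog. This is not the team's actual pipeline; the column names, the Milky Way magnitude, and the tolerance window are all illustrative assumptions.

    # Hypothetical sketch: select Milky Way-luminosity analogs from a
    # spectroscopic galaxy catalog. Column names and cut values are
    # illustrative assumptions, not the values used by Wechsler's team.
    import numpy as np

    MW_ABS_MAG_R = -21.0   # assumed r-band absolute magnitude of the Milky Way
    MAG_WINDOW = 0.2       # assumed tolerance around the Milky Way's luminosity

    def select_mw_analogs(catalog):
        # catalog is assumed to be a structured NumPy array with fields
        # 'abs_mag_r' (absolute r-band magnitude) and 'has_spectrum' (bool).
        mw_like = np.abs(catalog['abs_mag_r'] - MW_ABS_MAG_R) < MAG_WINDOW
        return catalog[catalog['has_spectrum'] & mw_like]

    # Toy usage with a synthetic one-million-row catalog:
    rng = np.random.default_rng(0)
    catalog = np.zeros(1_000_000, dtype=[('abs_mag_r', 'f8'), ('has_spectrum', '?')])
    catalog['abs_mag_r'] = rng.uniform(-23.0, -16.0, size=catalog.size)
    catalog['has_spectrum'] = rng.random(catalog.size) < 0.9
    analogs = select_mw_analogs(catalog)
    print(analogs.size, "Milky Way-luminosity analogs selected")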

Wechsler described the two distinct pipelines required for this type of research. First, there’s the simulation, in which researchers spend time looking for galaxies in a model universe. Wechsler told us that this simulation was done on the Pleiades machine at Ames across 10,000 CPUs. From there, the team performed an analysis of this simulation, which shows the evolution of structure in that piece of the universe across its entire history of almost 14 billion years, a process that involves examining dark matter halo histories over time. As she noted, the team was “looking for gravitationally bound clumps in that dark matter distribution; you have a distribution of matter at a given time and you want to find the peaks in that density distribution, since that is where we expect galaxies to form. We were looking for those types of peaks across the 200 snapshots we took to summarize that entire 14 billion year period.”
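
The halo-finding idea Wechsler describes, locating peaks in the density field, can be illustrated with a toy Python sketch. Production halo finders also test whether clumps are gravitationally bound and run across hundreds of snapshots in parallel; the box size, grid resolution, and overdensity threshold below are assumptions chosen only to show the concept.

    # Toy illustration of finding density peaks ("halo" candidates) in a
    # dark matter particle distribution. Real halo finders also check that
    # clumps are gravitationally bound; this sketch only locates local
    # maxima of a gridded density field.
    import numpy as np
    from scipy import ndimage

    def find_density_peaks(positions, box_size=100.0, n_grid=64, overdensity=20.0):
        # Deposit particles onto a uniform grid (nearest-grid-point).
        density, _ = np.histogramdd(positions, bins=n_grid,
                                    range=[(0.0, box_size)] * 3)
        # A cell is a peak candidate if it is the maximum of its 3x3x3
        # neighborhood and exceeds the chosen overdensity threshold.
        local_max = density == ndimage.maximum_filter(density, size=3, mode='wrap')
        dense = density > overdensity * density.mean()
        return np.argwhere(local_max & dense)

    # Toy usage: a uniform background plus one injected clump.
    rng = np.random.default_rng(1)
    pos = rng.uniform(0.0, 100.0, size=(2_000_000, 3))
    pos[:50_000] = 50.0 + rng.normal(scale=0.5, size=(50_000, 3))
    print(len(find_density_peaks(pos)), "density peak(s) found")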

The team needed to understand the evolutionary processes that unfolded over the billions of years captured in those 200 distinct snapshots. This meant they had to trace the particles from one snapshot to the next within their clumps, which are called dark matter halos. Once the team found the halos, which, again, are associated with galaxy formation, they performed a statistical analysis that sought out anything that looked like our own Milky Way. Wechsler told us that “the volume of the simulation was comparable to the volume of the data that we were looking at. Out of the 8 million or so total clumps in our simulation we found our set of 20,000 that looked like possibilities to compare to the Milky Way. By looking for fainter things around them — and remember there are a lot more faint things than bright ones — we were looking for many, many possibilities at one time.”
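
Tracing halos from one snapshot to the next can be sketched, in a highly simplified way, by matching particle IDs: a halo's descendant is taken to be the halo in the following snapshot that inherits most of its particles. The sketch below assumes this simple majority-vote scheme, which real merger-tree codes refine considerably.

    # Simplified sketch of tracing dark matter halos between two snapshots
    # by matching particle IDs. Production merger-tree codes handle many
    # subtleties (halo splitting, unbound particles, mass weighting) that
    # this toy version ignores.
    from collections import Counter

    def link_halos(halos_now, halos_next):
        # halos_now, halos_next: dict mapping halo ID -> set of particle IDs.
        # Returns a dict mapping each halo at this snapshot to the halo at
        # the next snapshot that contains most of its particles (or None).
        particle_to_next = {}
        for halo_id, particles in halos_next.items():
            for pid in particles:
                particle_to_next[pid] = halo_id

        links = {}
        for halo_id, particles in halos_now.items():
            votes = Counter(particle_to_next.get(pid) for pid in particles)
            votes.pop(None, None)   # particles that end up in no later halo
            links[halo_id] = votes.most_common(1)[0][0] if votes else None
        return links

    # Toy usage with two snapshots of made-up halos:
    snap_a = {1: {10, 11, 12, 13}, 2: {20, 21}}
    snap_b = {5: {10, 11, 12, 99}, 6: {20, 21, 22}}
    print(link_halos(snap_a, snap_b))   # {1: 5, 2: 6}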

The computational challenges are abundant in a project like this, Wechsler said. Of all the bottlenecks, storage has been the most persistent, although she noted that, as of now, there are no real solutions to these problems.

Aside from the bottlenecks due to massive storage requirements, Wechsler said that the other computational challenge is that even though this project represented one of the highest-resolution simulations at such a volume, they still need more computing power. She said that although they can do larger simulations at lower resolution, getting the full dynamic range of the calculation is critical. This simulation breaks new ground in being able to resolve Magellanic Cloud-size objects over a large volume, but it is still smaller than the volume that the observations are able to probe. This means that scaling this kind of calculation up to the next level is a major challenge, especially as Wechsler embarks on new projects.

“Our data challenges are the same as those in many other fields that are tackling multiscale problems. We have a wide dynamic range of statistics to deal with but what did enable us to do this simulation is being able to resolve many small objects in a large volume. For this and other research projects, having a wide dynamic range of scales is crucial so some of our lessons can certainly be carried over to other fields.”

As Alex Szalay from the Johns Hopkins University Department of Physics and Astronomy noted, this is a prime example of the kinds of big data problems that researchers in astrophysics and other fields are facing. They are, as he told us, “forced to make tradeoffs when they enter the extreme scale” and need to find ways to manage both storage and CPU resources so that these tradeoffs have the least possible impact on the overall time to solution. Dr. Szalay addressed some of the specific challenges involved in Wechsler’s project in a recent presentation called “Extreme Database-Centric Scientific Computing.” In the presentation he addresses the new scalable architectures required for data-intensive scientific applications, looking at databases as the starting point for exploring new solutions.

For the Dark Energy Survey, the team will take images of about one-eighth of the sky, looking back seven billion years. The Large Synoptic Survey Telescope, which is currently being built, will image half the sky every three days and will detect even fainter objects, picking up the brightest ones from just a few billion years after the big bang. One goal of this work is to map where everything is in order to figure out what the universe is made of. Galaxy surveys help with this research because, via simulations, they allow researchers to connect the underlying physics to large-scale structure and so understand galactic evolution.
