A Big Data Journey While Seeking to Catalog our Universe

By James Reinders

January 16, 2019

It turns out, astronomers have lots of photos of the sky but seek knowledge about what the photos mean. Sound familiar? Big data problems are often characterized as transforming data into insights – which is exactly what some ambitious scientists are working to do with “Sky Survey” data. A Sky Survey is essentially astronomer-speak for “lots and lots of images taken by telescopes, along with information about when and where they were taken.”

The Celeste collaboration is a group of scientists who have worked to catalog the visible universe in a way never before accomplished. They seek to create and refine a catalog that details the placement and characteristics (such as brightness and rotation) of every visible object in the sky.

Along the way, the Celeste collaboration has already proven that one highly productive language (Julia) can offer high performance “at scale” (using hundreds of thousands of processor cores for compute), and their success certainly indicates that we will see more “at scale” big data work.

Journey of the Photons

No amount of effort to design an amazing telescope can overcome the effects that a very long journey has had upon the photons. Putting a telescope into orbit might cut out the last few hundred miles through our atmosphere, but that is just the tip of the iceberg when it comes to figuring out what each photo means. The techniques being developed by the Celeste collaboration are applicable to data regardless of whether it is earth-based or space-based. So far, the earth-based data has supplied plenty of work to do.

Aside from inherent limitations of any sensing device in a telescope, the final image we get from a telescope is imperfect on account of point spread from the atmosphere, diffraction spikes from the telescope, and gravitational lensing that has occurred along the journey, among other causes. The Celeste collaboration has plugged away at addressing such challenges in their quest to build their meaningful catalog. As I have learned more about all they have done, I have been both amazed by the magnitude of their accomplishments and in awe of the enormous scope of future work that is possible. A truly big data project, Celeste has an insatiable appetite for more data, and for more sophisticated analysis work.

Lots of Compute, and Lots of (High Productivity) Programming

Collecting all known data about the visible universe into a meaningful model certainly is a big data problem. Celeste collaborators’ computational work has landed in the petascale world, meaning they have performed computations at a rate exceeding a thousand million million (10^15) floating-point operations per second. They did this with over nine thousand CPUs, a high productivity language called Julia, and a 178 terabyte dataset representing 188 million stars and galaxies. Processing also involved intensive I/O, since multiple passes were made over the dataset during the 14.6-minute run on the Cori supercomputer.

They did not use Fortran or C++ as the language for this task. Instead, they chose a high productivity language out of MIT known as Julia, and used it to very efficiently utilize Intel processors at petascale. Specifically, they used 1.3 million threads on 9,300 Intel Xeon Phi processors (650,000 cores) to achieve 1.54 petaflops peak performance. This was the first showing of Julia at petascale, and it certainly will not be the last.

The Julia programming language developers explain Julia by saying: “Julia excels at numerical computing. Julia was designed from the beginning for high performance. Its syntax is great for math, many numeric datatypes are supported, and parallelism is available out of the box. Julia’s multiple dispatch is a natural fit for defining number and array-like datatypes.”
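
As a tiny illustration of that last point (a hypothetical sketch, not code from Celeste), the same generic function below gets a separate compiled method for each combination of argument types, which is what makes multiple dispatch a natural fit for numeric code:

    # Multiple dispatch: Julia selects a method based on the runtime types
    # of all arguments, compiling a specialized version of each.
    flux(a::Float64, b::Float64) = a * b                      # scalar path
    flux(a::AbstractVector, b::AbstractVector) = sum(a .* b)  # vector path

    flux(2.0, 3.0)                  # dispatches to the scalar method
    flux([1.0, 2.0], [3.0, 4.0])    # dispatches to the vector method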

Keys to High-Performance Julia

The developers of the Celeste code have a few Julia-specific tips for making sure Julia is competitive with other compiled languages for high performance. Their tips, illustrated in the sketch after the list, were:

  1. Follow the performance tips given with Julia (no global state/eval/etc. in hotspots).
  2. Maintain type stability (dynamic re-typing might seem cool, but it kills performance).
  3. Minimize dynamic memory allocations; use memory profiling to find allocations to reduce (double benefit: less time allocating also means less time doing garbage collection).
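
To make tips 2 and 3 concrete, here is a minimal sketch (the functions are hypothetical, not from the Celeste code) of type-unstable versus type-stable Julia, plus allocation-free buffer reuse:

    # Tip 2 violated: `total` starts as an Int and may silently become a
    # Float64 mid-loop, forcing boxing and dynamic dispatch in the hot loop.
    function sum_unstable(xs)
        total = 0
        for x in xs
            total += x
        end
        return total
    end

    # Tip 2 followed: `total` keeps one concrete type throughout, so the
    # compiler can emit tight machine code comparable to C or Fortran.
    function sum_stable(xs::AbstractVector{T}) where {T<:Number}
        total = zero(T)
        @inbounds for x in xs
            total += x
        end
        return total
    end

    # Tip 3: reuse a caller-supplied buffer instead of allocating a fresh
    # array on every call (less allocating, less garbage collecting).
    function step!(out, xs)
        @. out = 2 * xs + 1    # in-place broadcast, no new allocation
        return out
    end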

The final tip may be especially important in languages with garbage collection, but it is a great suggestion for programmers in all languages. Similarly, avoiding global state (the first tip) has enormous merit outside Julia as well.

Finally, the developers stress the need to profile to find and optimize hotspots. Hardly a Julia-specific tip! All in all, the experience of the developers with Julia mostly resembled the experience of any HPC programmer using C, C++, or Fortran. They would say that Julia offers a more productive programming environment, but also offers performance you would not find with other highly productive languages such as Python. Despite some solid accelerated Python capabilities that are out there, no Python application has shown anything close to petaflops performance.
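
In Julia, that profiling workflow might look like the following sketch (reusing the hypothetical sum_stable above): @time reports allocations, which ties back to tip 3, and the Profile standard library samples call stacks to reveal hotspots.

    using Profile

    xs = rand(10^7)

    # `@time` reports elapsed time plus allocation counts; a large
    # allocation count points back at tip 3 above.
    @time sum_stable(xs)

    # Sample call stacks while the code runs, then print the hotspots.
    @profile for _ in 1:100
        sum_stable(xs)
    end
    Profile.print(format=:flat, sortedby=:count)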

It seems that making Julia scale to petaflops performance involves the same thinking as effective parallel programming in any high-performance language.

The Data: SDSS

Irénée du Pont Telescope at Las Campanas Observatory. (credit: Krzysztof Ulaczyk, CC BY-SA 4.0)

In 1998, the Apache Point Observatory in New Mexico began imaging every visible object from over 35 percent of the sky in a project known as the Sloan Digital Sky Survey (SDSS). Today, data is also collected from the Irénée du Pont Telescope at Las Campanas Observatory in Chile (APOGEE-2S).

The SDSS has been one of the most successful surveys in the history of astronomy. After a decade of design and construction, the SDSS began regular survey operations in 2000. It has progressed through several phases: SDSS-I (2000-2005), SDSS-II (2005-2008), SDSS-III (2008-2014), and SDSS-IV (2014+). Each phase has involved multiple surveys with interlocking science goals. The project proudly shares that it has already created the most detailed three-dimensional maps of the Universe ever made, with deep multi-color images of one third of the sky and spectra for more than three million astronomical objects.

The project has released fourteen versions of its dataset thus far, and it continues to release new data sets annually. The dataset scheduled for the end of this year will include spectral data across the face of the nearest ten thousand galaxies, unlike previous surveys, which obtained spectra only at the centers of target galaxies. The SDSS team calls this work “Mapping Nearby Galaxies at APO” (MaNGA). The dataset in 2019 will include information from the Apache Point Observatory Galaxy Evolution Experiment (APOGEE-2), which observes the “archaeological” record embedded in hundreds of thousands of stars to explore the assembly history and evolution of the Milky Way. You could say that the details of how the Galaxy evolved are preserved today in the motions and chemical compositions of its stars.

It’s not hard to imagine that these ever-expanding datasets will offer even more opportunities for the Celeste collaboration in their analysis work.

Version 1.0

Prior work had focused on non-statistical models. The Celeste collaboration focused on a statistical model, a fully generative model to be precise. Over the course of their first three years, the Celeste collaboration developed a new parallel computing method that was used to process the dataset (about 178 terabytes) and produce the most accurate catalog yet of 188 million astronomical objects, with state-of-the-art point and uncertainty estimates, in just 14.6 minutes.

In addition to creating a catalog, an important objective of this work was to identify promising galaxies for spectrograph targeting with the hope of better understanding dark energy and the geometry of the universe.

A key design objective of Celeste is to provide an extensible model and inference procedure for use by the astronomical community. This will allow more computation to be applied selectively where deeper understanding of any particular object is desired (e.g., brightness, rotation). Other applications might include finding supernovas or detecting near-Earth asteroids. The team sees enormous potential in the framework they have built. An hour-long presentation offers many more details of the work on Celeste 1.0 and is available for viewing online.

To help grasp the processing being done, here is a sample (using a synthetic image) of the processing performed by an early prototype of Celeste 2.0. The synthetic image (the “input” to the autoencoder) comes first; recon_mean is the mean of the approximation found for the autoencoder’s “output.” The fact that it appears the same as the input is exactly what is desired! In Celeste 2.0, recon_mean is formed by summing the four images to the right, which are the “deblended” images. It is these four images that will hopefully be useful to astronomers.
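
In code, the check described above amounts to a pixelwise sum (a hypothetical sketch; the array names deblended and input_image are illustrative, not from the prototype):

    # Each element of `deblended` is one source's image; their pixelwise
    # sum forms recon_mean, which should closely match the blended input.
    recon_mean = reduce(+, deblended)
    isapprox(recon_mean, input_image; rtol=0.05)   # "appears the same" check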

Envisioning Version 2.0

They first reported their petascale results last year, and they’ve been busy since then envisioning and developing “Celeste 2.0.” The collaboration is focused on moving to a more sophisticated inference model to replace the purely graphical model approach of Celeste 1.0, which was quite successful in its own right using only conventional variational inference. A key objective of this work is not only more accurate placement and features, but also more accurate uncertainties (“error bars”) for both.

Celeste 2.0 utilizes a variational autoencoder with a recurrent neural network (RNN), employs Bayesian inference, and adds a gravitational lensing capability. The Bayesian inference technique is commonly associated with big data and machine learning projects, and typically gets sharper predictions from data than other techniques. Bayesian inference effectively aims to inject some common sense (bias based on additional knowledge) into an otherwise sterile statistical analysis. In the case of Celeste 2.0, the newer techniques capture meaning from the vast dataset more accurately.
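
As a toy example of the idea (not the Celeste model, which is far richer), a Normal prior on a star’s true brightness combines with noisy measurements in closed form, and the prior pulls the estimate toward plausible values when the data are scarce or noisy:

    # Conjugate Normal-Normal update with known measurement variance:
    # the posterior blends prior belief and observed data by precision.
    function posterior(prior_mean, prior_var, obs, obs_var)
        n = length(obs)
        post_var  = 1 / (1 / prior_var + n / obs_var)
        post_mean = post_var * (prior_mean / prior_var + sum(obs) / obs_var)
        return post_mean, post_var
    end

    # Three noisy brightness readings; the prior "common sense" keeps the
    # estimate from chasing the noise, and post_var gives the error bar.
    posterior(10.0, 4.0, [12.1, 11.7, 12.4], 1.0)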

Bayesian models are composable, meaning that they work well as add-ons. This enables work on using Bayesian models to create a new gravitational lensing capability to undo the distortions that light undergoes on its journey to a telescope. This is an area of active development, which promises to further refine the catalog of visible objects.

Endless Possibilities

Of course, I’m guessing work will not end with Celeste 2.0. They’ve opened up the challenge of building a catalog of the universe, and like all big data problems it has an insatiable appetite for more data. The continually growing sources of data in the SDSS offer many opportunities for the analysis work of the Celeste collaboration[1]. One day, perhaps gravitational wave data, astronomy’s newest kind of data, can be incorporated? By then, we might also be able to offer them a data feed from a telescope sitting on Mars. It will happen.

In the meantime, the Celeste collaboration continues to make excellent use of the Intel processors in the Cori supercomputer with the Julia language. And this provides a wealth of encouragement for all big data projects looking to scale.

[1] The key contributors to the Celeste collaboration have been: Jeffrey Regier, Bryan Liu, and Jon McAuliffe of UC Berkeley; Andy Miller and Ryan Adams of Harvard; David Schlegel of LBL Physics; and Prabhat of NERSC.

James Reinders is an HPC enthusiast and author of eight books with more than 30 years of industry experience, including 27 years at Intel Corporation (retired June 2016).
