Building the Universe Pixel by Pixel

By Kelen Tuttle

August 20, 2014

Recently, the Harvard-Smithsonian Center for Astrophysics unveiled an unprecedented simulation of the universe’s development. Called the Illustris project, the simulation depicts more than 13 billion years of cosmic evolution across a cube of the universe that’s 350 million light-years on each side. The goal was to view the formation of galaxies and the other large-scale structure we see around us today, and to test our understanding of what makes up the universe – including dark matter and dark energy – as well as how those components interact. It was a massive undertaking, one that took more than five years to complete. But why was it important to conduct such a simulation?

To better understand the science and art of astrophysics visualizations, three experts came together in late July to discuss the process and the ways in which their work benefits both science and the public’s perception of science. The participants:

RALF KAEHLER – is a physicist and computer scientist by training who now runs the visualization facilities at the Kavli Institute for Particle Astrophysics and Cosmology, located at SLAC National Accelerator Laboratory and Stanford University.

STUART LEVY – is a research programmer and member of the National Center for Supercomputing Applications’ Advanced Visualization Lab team, which creates high-resolution data-driven scientific visualizations for public outreach.

DYLAN NELSON – is a graduate student at the Harvard-Smithsonian Center for Astrophysics and a member of the Illustris collaboration, which recently completed a large cosmological simulation of galaxy formation.

The following is an edited transcript of a roundtable discussion. The participants have been provided the opportunity to amend or edit their remarks.

THE KAVLI FOUNDATION: Dylan, you’re a member of the Illustris project team, so let’s start with you. Illustris was a massive undertaking, one that took more than five years to complete. Why was it important to conduct this simulation?

DYLAN NELSON: This simulation tested our big-picture understanding of the universe’s evolution. We can’t just compare the observation of one galaxy; we need to compare whole populations of thousands or tens of thousands of galaxies, and the simulation lets us do this by creating a big volume of the universe. In visualizing this simulation, we found some unexpected features that in retrospect really shouldn’t have been unexpected at all. For example, when we made a movie showing the temperature of gas in the universe evolving over time, we saw that galaxies had a tendency to flicker rapidly. We traced this back to one of the three ways in which we let supermassive black holes input energy into the galaxies within which they reside. Although we expected that the energy would affect the temperature of the gas, we didn’t know how intermittent it would be, how it would create these flickers and bursts. That’s really something that we were surprised by, and something that made us rework our models a bit. 

TKF: Ralf, what types of insights are gained through the visualizations you create with scientists at the Kavli Institute for Particle Astrophysics and Cosmology? Do you also tend to find unexpected features?

RALF KAEHLER: What I often hear from scientists is that they gain intuition from watching the animations we create, intuition that’s hard to get from just looking at the raw numbers. They see how gas moves, how dark matter clumps on smaller scales and then merges and forms larger and larger clumps of dark matter. And it seems like this intuition is very important for a thorough understanding of the processes.

Another very important advantage visualizations offer is the ability to catch errors in the simulations. By just looking at the numbers, it can be easy to miss these errors, but when watching an animation they can become totally obvious. We can easily see if there’s some discontinuity in the data that shouldn’t be there, and then we can investigate further to determine whether it’s a feature, an artifact or a bug. The software used to produce these simulations usually consists of hundreds of thousands or millions of lines of code. Codes of that size often contain bugs, and visualizations can help to determine if there’s an error hidden within the code.

TKF: It sounds like visualizations are especially good for identifying issues with your assumptions or the underlying models. Stuart, would you agree with that?

STUART LEVY: I think that’s a really good point, and it’s something that people talk about when they’re thinking of doing a visualization. If you reduce things to a graph with some statistics on it, in choosing what the statistics should measure, you’re saying what the interesting things are. And the hope is that if you can present something visually, you might end up bringing in things that you didn’t expect to bring in.

To me, it also seems like visualizations are becoming more and more useful for looking at very large-scale phenomena. As in observational astrophysics, instead of spending a lot of time looking at modest numbers of individual objects, people are looking at huge numbers of objects.

DYLAN NELSON: I agree with that. Large simulations like Illustris are similar to big observational surveys. When you’re not looking at individual objects, you need sophisticated visualization techniques to pull out the interesting information. For instance, back when the kind of cosmological simulations we do today first started, people plotted a point for each dark matter particle. They learned lots of science from doing that. But these days, when the biggest dark matter simulations include a trillion particles, that’s not going to get you as far. You’re going to need more sophisticated visualization approaches – as well as machine learning techniques or other automated ways of finding interesting trends in the simulation. We’re working on that.
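
To make that contrast concrete, here is a minimal sketch in Python (assuming NumPy and Matplotlib) of the two approaches Nelson describes: plotting one point per particle versus binning particles into a projected density map. The particle positions are synthetic placeholders, not Illustris data, and the file names are illustrative only.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for dark matter particle positions in a 100x100 slab;
# a real simulation snapshot would supply these arrays instead.
rng = np.random.default_rng(0)
n = 1_000_000
centers = rng.uniform(0, 100, size=(20, 2))                        # a few "halo" centers
halo = centers[rng.integers(0, 20, n)] + rng.normal(scale=2.0, size=(n, 2))
background = rng.uniform(0, 100, size=(n // 10, 2))                 # diffuse background
positions = np.vstack([halo, background])

# Early approach: one plotted point per particle.
# Readable for modest particle counts, but it saturates into a featureless blob as N grows.
plt.figure(figsize=(5, 5))
plt.plot(positions[:, 0], positions[:, 1], ",", alpha=0.05)
plt.xlim(0, 100)
plt.ylim(0, 100)
plt.title("One point per particle")
plt.savefig("points_per_particle.png", dpi=150)

# More scalable approach: bin the particles into a 2D surface-density map
# and render it on a logarithmic color scale.
counts, _, _ = np.histogram2d(positions[:, 0], positions[:, 1],
                              bins=512, range=[[0, 100], [0, 100]])
plt.figure(figsize=(5, 5))
plt.imshow(np.log10(counts.T + 1), origin="lower", cmap="magma",
           extent=[0, 100, 0, 100])
plt.title("Binned log surface density")
plt.savefig("binned_density.png", dpi=150)
```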

TKF: Even though all three of you create visualizations, your roles and your connections to the scientific questions driving the research are different. How does the process work for each of you? Who comes up with the scientific questions you seek to answer?

RALF KAEHLER: For us, it’s often an interactive process. We sit together in front of the screen and analyze the data in real time. I try to design a lot of the algorithms in a way that they produce visualizations pretty quickly, so that we can change parameters like the camera position or the color maps in real time and get an updated image in a fraction of a second. That way, we can explore the data together, focusing on regions of interest, zooming in and out, things like that. Other times, it’s more offline, where I’ll render something overnight and send the result to the scientists and let them have a look at that.

I would also say that while half of my work is for scientists, the other half is for outreach. Sometimes we create visualizations purely for outreach purposes, and sometimes we can use the same visualization for both science and outreach. In the latter case, the scientists first analyze the dataset and then we tweak it a little bit, spending more time with the camera path and the color scheme to make it look a little bit prettier before we use it for planetarium shows.

DYLAN NELSON: The process for me is a little different because my primary responsibility is science; creating visualizations is a secondary responsibility. I always say that when I create visualizations, it’s both for scientific exploration and for dissemination to the public. But I think in reality, in my research group we do those two things in completely different ways.

We need visualizations to understand what is going on in a simulation, to better understand our models and the physical processes we’re simulating. But those visualizations are not pretty; we do them as quickly as possible, and as soon as we have a useful science result, the effort on the visualization stops. On the other hand, when we’re doing a visualization for outreach, that’s really intended to make people say “Oh, wow, that’s really cool!” So there’s a lot more time spent past the point of scientific realization, polishing and making the visualization look visually impressive. 

STUART LEVY: My group really focuses on outreach. So we usually have an idea for a show first, then we’ll go and look for scientists who work in that area and can provide the simulation. They’ll also tell us what we should believe from their simulations and what we shouldn’t believe. Often they’ll be making simulations they know are representing some aspects of reality well and others less well. And so they’ll say something like, don’t pay attention to the temperature here, since we’re not including everything that could be heating things up. We’ll go back and forth both with the scientists and the people producing the show to create something that’s both interesting and scientifically correct.

That said, we do occasionally work with scientists on unanswered questions – though it’s not always in the realm of astrophysics. A few years ago, we were working with a simulation of a tornado. One of the things that the scientists were interested in learning was the origin of tornadoes. Most severe storms don’t create tornadoes, so what’s special about the subset of storms that do? They had an idea that we should be looking in the simulation for a feature that’s called a rear flank downdraft – air that’s flowing in a certain way. We were looking for this signature in the visualizations and just not finding it. But then one of the graduate students picked out this sort of rolling feature – a horizontal bunch of rolling air – and succeeded in convincing his senior professors that, in this simulation at least, it was that feature and not a rear flank downdraft that triggered the tornado. That was a surprising result, one made possible by the visualization.

TKF: What have been the big breakthroughs in visualization in the past five years? Are there new technologies or revelations that make possible all you’ve just described?

STUART LEVY: Bigger disks! It seems a little mundane, but the ability to store huge amounts of data is really important. A couple of years ago, we got about four terabytes of data from a scientist. A few years earlier, that would have been an overwhelming amount, but today we could easily take on several of those. That makes a really big difference. The billion-dollar gaming industry has also been an incredible boon to us. It’s on the back of that industry that high performance graphics cards have been built. Fifteen years ago, the fastest graphics hardware cost the price of a house. In just a few years, that was superseded by hardware that you could get for a couple of thousand dollars. Now it’s come down to a few hundred dollars, and we’re able to use it routinely. If not for the gaming industry, we wouldn’t have all of the graphics processor power that we need.

RALF KAEHLER: I completely agree with Stuart here. The ever-evolving capabilities of graphics hardware are very important for this work. You can now realize interactive visualizations of datasets that were far out of the reach of standard desktop workstations five or ten years ago.

TKF: With all of that computing power, how much of the process is science, and how much of it is art? If the three of you were to visualize the same event, would you end up with similar results?

RALF KAEHLER: I would say that there’s a lot of creativity involved in the process. It might be comparable to taking a photograph of some object. You have all of this freedom of how to choose your camera position, lighting conditions, color filters and so on. Similarly, with the same numerical simulation, you can end up with millions of different images by changing around these variables. So it really depends on the audience you’re targeting, what features in the original dataset you want to highlight, and what story you want to get across. If different people work on visualizations for the same dataset, the results can be totally different.

One of our most recent visualizations was a collaboration with the Hayden Planetarium at the American Museum of Natural History in New York. It shows the role of dark matter in forming the larger structures in the universe. For this audience, we took the term dark matter a bit more literally than usual. If we had done this rendering for scientists, we would have represented higher dark matter densities in brighter colors. But studies have shown that often confuses the general public. So in this visualization, we actually turned it around and rendered the dark matter in darker colors and added some background light. That helped guide the audience and clarified what was dark matter and what was not.
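
A minimal sketch of that color-mapping choice, assuming Python with NumPy and Matplotlib: the same projected density field rendered once with the conventional bright-for-dense palette and once with dark matter drawn literally dark on a light background. The density field here is a synthetic placeholder, not the Hayden Planetarium dataset.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder field: a smoothed random map standing in for projected
# dark matter density from a simulation snapshot.
rng = np.random.default_rng(42)
log_density = rng.normal(size=(512, 512))
# Cheap smoothing by averaging shifted copies, just to give the field some structure.
for shift in (1, 2, 4, 8):
    log_density = 0.5 * (log_density + np.roll(log_density, shift, axis=0))
    log_density = 0.5 * (log_density + np.roll(log_density, shift, axis=1))

fig, axes = plt.subplots(1, 2, figsize=(10, 5))

# Conventional rendering for a scientific audience: bright colors for dense regions.
axes[0].imshow(log_density, cmap="inferno", origin="lower")
axes[0].set_title("Bright = dense (scientific convention)")

# The outreach choice described above: dense dark matter drawn dark
# on a light background, so it literally looks dark.
axes[1].imshow(log_density, cmap="Greys", origin="lower")
axes[1].set_title("Dark = dense (literal dark matter)")

for ax in axes:
    ax.set_xticks([])
    ax.set_yticks([])

plt.tight_layout()
plt.savefig("dark_matter_colormap_comparison.png", dpi=150)
```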

STUART LEVY: I agree. I think we should look at visualization like mapmakers look at map making. A good mapmaker will be deliberate in what gets included in the map, but also in what gets left out. Visualizers think about their audience, as Ralf says, and the specific story they want to tell. And so even with the same audience in mind, you might set up the visualization very differently to tell different stories. For example, for one story you might want to show only what it’s possible for the human eye to see, and in others you might want to show the presence of something that wouldn’t be visible in any sort of radiation at all. That can help to get a point across.

TKF: It sounds like there’s quite a bit of room for artistic choice. For outreach purposes, then, why is it important for the visualizations to be based on scientifically accurate data? Why are you creating them, rather than a movie house?

RALF KAEHLER: Using sophisticated numerical simulations ensures that the science is depicted correctly. Besides this, it’s hard to model a lot of the phenomena in astrophysics using the artistic tools that Hollywood movies employ. The phenomena are just too complex to draw by hand. I think more and more of these artistic tools are now starting to incorporate some sort of simplified simulation codes in order to model things like explosions, to make it look more realistic.

TKF: When you’ve created visualizations for a public outlet, have you ever sat in the audience and watched the public’s reaction? What’s that like?

DYLAN NELSON: It’s kind of amazing, to be honest, the amount of press and public interest that’s come out of the Illustris project. Actually, just yesterday I got a call from my father, who had been browsing the news on his phone and saw an image from Illustris on the front page of The New York Times website. This was a still image that we made just for the purposes of putting it on the website, and it’s probably appeared in a dozen newspapers so far. It’s great that there’s so much interest, and that the images are becoming almost iconic.

STUART LEVY: For me, it’s great to watch visualizations in planetarium domes. It’s the most wonderful thing to lie down in the middle of a planetarium – or even in an IMAX theater – and just look up. Having the audience completely surrounded by what they’re seeing can be really breathtaking.

RALF KAEHLER: I love it when visualizations are shown in planetariums, too. It just looks so impressive – much more impressive than looking at the visualizations on a flat monitor in my office. I’ve worked on visualizations that were shown in places like the American Museum of Natural History and the Morrison Planetarium at the California Academy of Sciences. These are great places to reach a lot of people in a nice, inspiring environment. Even though when I’m sitting in the audience it’s too dark to gauge other people’s reactions, sometimes we get emails from people who saw the planetarium shows and write how much they liked it. It’s really motivating, and shows that our time is being well invested.
