November 03, 2010
Nov. 3 -- As scientists and engineers work to make NASA's James Webb Space Telescope a reality, they find themselves wondering what new sights the largest space-based observatory ever constructed will reveal. With Webb, astronomers aim to catch planets in the making and identify the universe's first stars and galaxies, yet these are things no telescope -- not even Hubble -- has ever shown them before.
"It's an interesting problem," said Jonathan Gardner, the project's deputy senior project scientist at NASA's Goddard Space Flight Center in Greenbelt, Md. "How do we communicate the great scientific promise of the James Webb Space Telescope when we've never seen what it can show us?"
So the project turned to Donna Cox, who directs the Advanced Visualization Laboratory (AVL) at the National Center for Supercomputing Applications (NCSA). Located at the University of Illinois at Urbana-Champaign, NCSA provides enormous computing resources to researchers trying to simulate natural processes at the largest and smallest scales, from the evolution of the entire universe to the movement of protein molecules through cell walls.
Cox and her AVL team developed custom tools that can transform a model's vast collection of ones and zeroes into an incredible journey of exploration. "We take the actual data scientists have computed for their research and translate them into state-of-the-art cinematic experiences," she said.
Armed with an ultra-high-resolution 3D display and custom software, the AVL team choreographs complex real-time flights through hundreds of gigabytes of data. The results of this work have been featured in planetariums, IMAX theaters and TV documentaries. "Theorists are the only scientists who have ventured where Webb plans to go, and they did it through complex computer models that use the best understanding of the underlying physics we have today," Cox said. "Our challenge is to make these data visually understandable -- and reveal their inherent beauty."
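The article does not describe AVL's custom software, but the basic idea behind choreographing such a flight can be sketched in a few lines of Python: place a handful of keyframe camera positions by hand, fit a smooth curve through them, and sample that curve once per rendered frame. The keyframe positions and timing below are invented for illustration.

```python
# A generic sketch; AVL's real tools are custom software and these
# keyframe positions are invented for illustration.
import numpy as np
from scipy.interpolate import CubicSpline

# Hand-placed keyframe camera positions (x, y, z) in data coordinates.
keyframes = np.array([
    [0.0, 0.0, 10.0],   # wide establishing shot
    [2.0, 1.5,  4.0],   # approach a cluster
    [3.0, 2.0,  1.0],   # dive toward a single galaxy
    [5.0, 2.5,  0.2],   # close fly-by
])
key_times = np.linspace(0.0, 1.0, len(keyframes))  # normalized shot time

# Fit a smooth (C2-continuous) path through the keyframes.
path = CubicSpline(key_times, keyframes, axis=0)

# Sample the path once per rendered frame, e.g. 10 seconds at 24 fps.
frames = 240
positions = path(np.linspace(0.0, 1.0, frames))    # shape (240, 3)

# A real pipeline would hand each position to the renderer; here we just
# confirm the flight starts and ends where the keyframes say it should.
print(positions[0], positions[-1])
```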
The new visualizations reflect the broad science themes astronomers will address with Webb. Among them: How did the earliest galaxies interact and evolve to create the present-day universe? How do stars and planets form?
"When we look at the largest scales, we see galaxies packed into clusters and clusters of galaxies packed into superclusters, but we know the universe didn't start out this way," Gardner said. Studies of the cosmic microwave background -- the remnants of light emitted when the universe was just 380,000 years old -- show that the clumpy cosmic structure we see developed much later on. Yet the farthest galaxies studied are already more than 500 million years old.
"Webb will show us what happened in between," Gardner added.
Cox and her AVL team visualized this epoch of cosmic construction from a simulation developed by Renyue Cen and Jeremiah Ostriker at Princeton University in New Jersey. It opens when the universe was 20 million years old and continues to the present day, when the universe is 13.7 billion years old.
AVL team members Robert Patterson, Stuart Levy, Matthew Hall, Alex Betts and A. J. Christensen visualized how stars, gas, dark matter and colliding galaxies created clusters and superclusters of galaxies. Driven by the gravitational effect of dark matter, these structures connect into enormous crisscrossing filaments that extend over vast distances, forming what astronomers call the "cosmic web."
"We worked with nine scientists at five universities to visualize terabytes of computed data in order to take the viewer on a visual tour from the cosmic web, to smaller scales of colliding galaxies, to deep inside a turbulent nebula where stars and disks form solar systems like our own," Cox said. "These visuals represent current theories that scientists will soon re-examine through the eyes of Webb."
Closer to home, Webb will peer more deeply than ever before into the dense, cold, dusty clouds where stars and planets are born. Using data from models created by Aaron Boley at the University of Florida in Gainesville and Alexei Kritsuk and Michael Norman at the University of California, San Diego, the AVL team visualized the evolution of protoplanetary disks over tens of thousands of years.
Dense clumps develop far out in a disk's fringes, and if these clumps survive they may become gas giant planets or substellar objects called brown dwarfs. The precise outcome depends on the detailed makeup of the disk. "Dr. Boley was interested in what happened in the disk and did not include the central star," Cox said, "so to produce a realistic view we worked with him to add a young star."
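The article does not spell out the criterion that decides where such clumps can form, but a standard yardstick theorists use is the Toomre Q parameter: a disk region can fragment roughly where Q = c_s*kappa / (pi*G*Sigma) drops below about 1. The Python sketch below evaluates Q for an invented, deliberately massive disk profile (not Dr. Boley's model) to show how the outer fringes become the unstable region.

```python
# Illustrative disk profile (not Dr. Boley's model); CGS units throughout.
import numpy as np

G      = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
M_sun  = 1.989e33        # solar mass, g
AU     = 1.496e13        # astronomical unit, cm

M_star = 1.0 * M_sun                                # assumed central star
r      = np.array([10, 25, 50, 100, 200]) * AU      # sample radii
sigma  = 1.0e4 * (r / AU) ** -1.0                   # surface density, g cm^-2 (assumed)
c_s    = 6.0e4 * (r / (100 * AU)) ** -0.25          # sound speed, cm s^-1 (assumed)
kappa  = np.sqrt(G * M_star / r**3)                 # epicyclic ~ Keplerian frequency

# Toomre Q: the disk can fragment into clumps roughly where Q < 1.
Q = c_s * kappa / (np.pi * G * sigma)

for ri, qi in zip(r / AU, Q):
    state = "unstable (may clump)" if qi < 1.0 else "stable"
    print(f"r = {ri:6.1f} AU   Q = {qi:4.2f}   {state}")
```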
This is astrophysics with a pinch of Hollywood sensibility, work at the crossroads of science and art. "The theoretical digital studies that form the basis of our work are so advanced that cinematic visualization is the most effective way to share them with the public," Cox said. "It's the art of visualizing science."
"What AVL has done for the Webb project is truly amazing and inspiring," Gardner noted. "It really whets our appetites for the science we'll be doing when the telescope begins work a few years from now."
Source: NASA's Goddard Space Flight Center