February 09, 2011
As the machines in high-performance computing (HPC) centers increase in processing power, they are able to create more realistic simulations to help study and solve problems that affect the lives of many people. Predicting the spread of viruses and the landfall of hurricanes, for instance, are real-world problems that scientists can analyze with statistical algorithms running on these systems.
When dealing with large datasets, visualization technology becomes an important factor in enabling scientists to analyze their results. As Kelly Gaither, director, data and information analysis at the Texas Advanced Computing Center (TACC) at The University of Texas at Austin, puts it, “Without visualization, we would need to analyze numerically. We would have to make sense of stacks and stacks of zeroes and ones. We have in our brains the most powerful supercomputer that you can get access to, and visualization technology lets us take advantage of that and leverage our automatic real-time pattern matching, feature recognition and visual acuity to make sense of very large datasets.”
Breeding new discoveries
The Texas Advanced Computing Center provides ample opportunities for scientists to crunch numbers and visualize the results in great detail using graphics programs. TACC’s visualization laboratory serves researchers at The University of Texas at Austin and across the nation, and it is increasing its impact on science thanks in part to its partnership with Dell. Two of TACC’s systems, Stallion and Longhorn, are the largest of their kind in the world.
Stallion, a 307-megapixel display of 75 Dell UltraSharp™ 30-inch monitors, provides users with the ability to perform visualizations on a 15’x5’ tiled display.
Powered by Dell XPS 720 tower desktops with Intel Core 2 processors and dual NVIDIA 8800 GTX (G80) graphics cards, the visualization cluster gives users access to over 36 gigabytes of graphics memory, 108 gigabytes of system memory and 100 processing cores. This configuration enables the processing of massive datasets and the interactive visualization of substantial geometries. A large, shared file system is available for storing terascale datasets.
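The quoted 307-megapixel figure follows directly from the panel count. A quick sketch, assuming each Dell UltraSharp 30-inch monitor runs at its native 2560 x 1600 resolution (the per-panel resolution is an assumption; the article states only the total):

```python
# Back-of-the-envelope check of Stallion's total display resolution.
monitors = 75
panel_pixels = 2560 * 1600          # assumed native resolution of a 30-inch UltraSharp panel

total_megapixels = monitors * panel_pixels / 1e6
print(total_megapixels)             # 307.2 -- matches the quoted "307-megapixel display"
```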
“The beauty of Stallion is that we’re able to see the imagery at its native resolution,” says Gaither. “And Stallion can be used for a variety of things. Certainly we’re using it to display very large imagery from simulations when we want to see the resolution of the native data, but we also use it for non-scientific purposes. Stallion is the only place in town where an Austin-based photographer is able to see his digital pictures at native resolution.”
For researchers who don’t require such a large display, TACC offers a workstation area dedicated to visualization with Dell Precision 690 workstations connected to large LCD displays, and a collaboration room for video-conferencing and small group meetings.
World-class data analysis
TACC’s most recent venture with Dell is Longhorn, a cluster designed for remote interactive visualization and data analysis. The system consists of 16 Dell PowerEdge R710 nodes with 144 gigabytes of RAM each, and 240 Dell PowerEdge R610 compute nodes, each with two Intel Xeon 5500 series processors and 48 gigabytes of RAM. Storage nodes are Dell PowerEdge R710 servers with Intel Xeon 5500 series processors. Fourteen Dell PowerVault MD1000 direct-attached storage arrays provide a 210 terabyte global file system, managed by the Lustre parallel file system. A Mellanox InfiniBand quad-data rate (QDR) fabric provides the interconnect.
To accelerate data analysis and make interactive visualization possible, Longhorn uses 128 NVIDIA Quadro Plex S4 units sourced through Dell, each with four NVIDIA FX 5800 graphics processing units (GPUs), 16 gigabytes of graphics memory and two independent graphics buses (one per GPU pair). Compute nodes are each connected to two dedicated NVIDIA FX 5800 GPUs via the Quadro Plex graphics bus.
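Taken together, the figures above imply a sizable aggregate. A quick sketch (all counts are from the two paragraphs above; the 4 GB-per-GPU split is an inference from 16 GB per four-GPU unit):

```python
# Aggregate Longhorn figures derived from the quoted specifications.
quadro_plex_units = 128
gpus_per_unit = 4
gpu_mem_per_unit_gb = 16            # i.e., 4 GB per FX 5800 GPU (inferred)

total_gpus = quadro_plex_units * gpus_per_unit              # 512 GPUs
total_gpu_mem_gb = quadro_plex_units * gpu_mem_per_unit_gb  # 2048 GB = 2 TB of graphics memory

compute_nodes, ram_per_compute_gb = 240, 48
fat_nodes, ram_per_fat_gb = 16, 144
total_ram_gb = compute_nodes * ram_per_compute_gb + fat_nodes * ram_per_fat_gb

print(total_gpus, total_gpu_mem_gb, total_ram_gb)  # 512 2048 13824
```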
“Longhorn is the largest graphics accelerated remote interactive visualization cluster in the world,” says Gaither. “We built it with a grant from the National Science Foundation and in cooperation with Dell. Dell helped us design the cluster architecture using a hybrid approach, combining CPU cycles with NVIDIA graphics processing units.”
Traditionally, GPU acceleration in visualization clusters is handled by desktop computers that leverage internal graphics cards. “Dell has a very compact, rack-mountable server footprint in collaboration with NVIDIA that allows us to have rack-mountable nodes while still gaining the benefit of real-time performance for graphics applications,” says Gaither. “You can imagine what the footprint would look like for a visualization cluster this powerful using hundreds of desktop machines—the Dell and NVIDIA solution reduced our space requirements for Longhorn.”
The Intel Xeon 5500 series processors in the Dell PowerEdge servers are helping TACC solve problems faster. “Although visualization and data analysis are often more memory-bound, we’ve run a number of CPU-intensive jobs that have been quite successful,” says Gaither.
Staying on the bleeding edge
Receiving a grant from the National Science Foundation for a visualization resource of this magnitude was a major step for TACC. It was also critically important for visualization and data analysis as scientific fields—a recognition that visualization and data analysis services are a very important part of the scientific process.
TACC was an early adopter of distributed memory HPC clusters, mostly for reasons of scale. Because ever-larger problems drive the cost of shared memory machines prohibitively high, distributed memory systems built from cluster configurations of “commodity” machines are now the preferred alternative for many HPC centers.
Dell plays an important part in the visualization facilities at TACC by promptly responding to the center’s needs as it provides greater capabilities to scientists at The University of Texas and across the nation. By purchasing servers, storage, visual computing hardware, monitors and networking components through Dell, TACC is able to realize economies of scale, streamline procurement and benefit from Dell’s technology expertise.
“We keep an open collaboration with Dell, as we do with all of our technology partners,” says Gaither. “We communicate with Dell about the kinds of scientific problems that we are interested in, which enables them to respond to our needs.”
Solving real-world problems
Although Longhorn itself does not have a visualization display attached, data can be streamed to Stallion for local viewing or to remote locations to accommodate researchers all over the world. TACC has worked with the National Oceanic and Atmospheric Administration (NOAA) to run simulations to more accurately predict landfall for hurricanes using new techniques with statistical models, and hopes to use Longhorn during hurricane season to perform the visualization in real time.
“Longhorn will allow us to do three-dimensional visualizations as the storm data is being analyzed,” says Gaither.
TACC also met with researchers to help predict how certain diseases might spread geographically. “I think we’ve had an impact on understanding how to prevent the spread of viruses,” says Gaither. Researchers are also using TACC systems to produce 2D and 3D simulations of the impact of BP’s massive Gulf of Mexico oil spill on coastal areas, and plan for the possibility that a hurricane moving through the Gulf might bring some of the oil inland.
The robust information analysis and visualization capabilities the Dell systems provide are drawing more researchers to TACC. “Stallion and Longhorn are the crown jewels of our visualization resources,” Gaither concludes. “Researchers come here to get capabilities that they wouldn’t otherwise have access to, and that in turn breeds new science and new discoveries.”