Here is a collection of highlights from this week’s news stream as reported by HPCwire.
Virtual supercomputing moves one step closer
A team of researchers from Northwestern University, Sandia National Labs and the University of New Mexico recently completed the largest study of the virtualization of a parallel supercomputing system. The team successfully virtualized Sandia’s Red Storm supercomputer using a virtual machine monitor called Palacios. They tested up to 4,096 nodes, double the number cited in previous studies.
Peter A. Dinda, associate professor of electrical engineering and computer science at Northwestern’s McCormick School of Engineering and Applied Science, sums up the challenges involved in virtualizing supercomputing resources:
“Virtualizing a parallel supercomputer is particularly challenging because of the need to support extremely low latency, high-bandwidth communication among thousands of virtual machines,” Dinda says. “Supercomputing users and the owners of supercomputers will not tolerate any performance compromises because the machines are so expensive to acquire and maintain, but, on the other hand, they also want access to the benefits of virtualization.”
Because virtualization decouples the operating system from the underlying hardware, it allows researchers to run their programs without first tuning them to a supercomputer’s specific software/hardware configuration. Virtualization brings other benefits as well, such as memory sharing and the ability to run multiple operating systems. But none of that means much to researchers if running their application on a virtualized system results in a big performance hit. That’s why it’s important to note that overhead on Palacios measured at under 5 percent — not bad, considering the number of nodes involved.
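To make the 5-percent figure concrete, here is a minimal sketch of how virtualization overhead is typically expressed: the extra runtime of a virtualized job relative to a native (bare-metal) run. The function name and sample timings below are illustrative assumptions, not values from the Palacios study.

```python
# Hypothetical illustration: virtualization overhead as a percentage
# of native runtime. The sample timings are invented for the example;
# only the under-5-percent threshold comes from the reported results.

def overhead_percent(native_seconds: float, virtualized_seconds: float) -> float:
    """Return the slowdown of a virtualized run relative to native, in percent."""
    return (virtualized_seconds - native_seconds) / native_seconds * 100.0

# A job taking 100 s natively and 104 s under the virtual machine
# monitor shows 4 percent overhead -- within the reported range.
print(overhead_percent(100.0, 104.0))  # 4.0
```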
Overall, the cost, space and power benefits conveyed by virtualization are so attractive that researchers will continue to look for ways to virtualize HPC resources, despite the latency and bandwidth challenges. This week, they just got a little closer to the holy grail of a virtual supercomputer.
Princeton University plans new research computing center
Princeton University announced plans to build a new high-performance computing facility on the Forrestal Campus, about three miles north of the main campus. The new High-Performance Research Computing Center will be located near the Geophysical Fluid Dynamics Lab, where it will serve as the home of TIGRESS, the Terascale Infrastructure for Groundbreaking Research in Engineering and Science Center. The center would also support part of the school’s administrative computing capacity.
From the release:
The new facility would have approximately 40,000 square feet and would comprise three functional components: a computing area; an electrical and mechanical support area; and a small office/support area. The two-story building would be about 50 feet high.
TIGRESS is intended to create a well-balanced set of high-performance computing resources to meet the broad computational requirements of the University research community.
According to Curt Hillegas, director of TIGRESS, the university needed to build a new center and relocate TIGRESS because computational demands are exceeding the capacity of current resources. The Forrestal site will also strengthen university partnerships with two nearby facilities: the Geophysical Fluid Dynamics Lab and the Princeton Plasma Physics Laboratory.
If all goes as planned, the facility will be operational in 2011 with a three-person support staff. While the facility is expected to serve the university’s needs through at least 2017, the site proposal allows for future expansion, and a second phase of construction could double square footage.