Spider Up and Spinning Connections to All Computing Platforms at ORNL
Spider, the world’s biggest Lustre-based, centerwide file system, has been fully tested to support Oak Ridge National Laboratory’s (ORNL’s) new petascale Cray XT4/XT5 Jaguar supercomputer and is now offering early access to scientists.
An extremely high-performance file system, Spider has 10.7 petabytes of disk space and can move data at more than 240 gigabytes a second. “It is the largest-scale Lustre file system in existence,” said Galen Shipman, Technology Integration Group leader at ORNL’s National Center for Computational Sciences (NCCS). “What makes Spider different [from large file systems at other centers] is that it is the only file system for all our major simulation platforms, both capable of providing peak performance and globally accessible.”
Ultimately, it will connect to all of ORNL’s existing and future supercomputing platforms as well as off-site platforms across the country via GridFTP (a protocol that transports large data files), making data files accessible from any site in the system.
Shipman said Spider has demonstrated stability on the XT5 and XT4 partitions of Jaguar, on Smoky (the center’s development cluster), and on Lens (the center’s visualization and data analysis cluster). “We’ve had all these systems running on the file system concurrently, with over 26,000 compute nodes (clients) mounting the file system and performing I/O [input and output]. It’s the largest demonstration of Lustre scalability in terms of client count ever achieved.”
Shipman said the file system is designed to support the latest incarnation of Jaguar, which is capable of 1.64 quadrillion calculations a second (1.64 petaflops). “When they told us they needed a file system to support it, we could not just pick up the phone and order one,” he said. “No vendor could deliver such a system, so we essentially trail-blazed.”
It was a phased approach. ORNL computer scientists and technicians (David Dillow, Jason Hill, Ross Miller, Sarp Oral, Feiyi Wang, and James Simmons) worked in close collaboration with partners Cray Inc., DataDirect Networks (DDN), Sun Microsystems, and Dell to bring Spider online. Cray provided the expertise to make the file system available on both Jaguar XT4 and Jaguar XT5. DDN provided 48 DDN 9900 storage arrays, Sun provided the Lustre parallel file system software, and Dell provided 192 I/O servers. The vendors' collaboration has produced a system that manages 13,000 disks and provides over 240 GB/s of throughput, a file system cluster that rivals the computational capability of many high-performance compute clusters.
The Spider parallel file system is similar to the disk in a conventional laptop — multiplied 13,000 times. A file system cluster sits in front of the storage arrays to manage the system and project a parallel file system to the computing platforms. A large-scale InfiniBand-based system area network connects Spider to each NCCS system, making data on Spider instantly available to them all.
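The idea behind such a parallel file system is that a single file is broken into stripes that are spread round-robin across many storage targets, so reads and writes hit many disks at once. The toy Python sketch below illustrates that striping pattern conceptually; the stripe size and target count are made-up illustrative values, not Spider's actual Lustre configuration.

```python
# Toy illustration of round-robin file striping, the technique a
# parallel file system such as Lustre uses to spread one file across
# many storage targets. Values here are illustrative only.

STRIPE_SIZE = 4  # bytes per stripe in this toy example (real systems use ~1 MB)

def stripe(data: bytes, num_targets: int, stripe_size: int = STRIPE_SIZE):
    """Split `data` into stripes and assign them round-robin to targets."""
    targets = [bytearray() for _ in range(num_targets)]
    for i in range(0, len(data), stripe_size):
        targets[(i // stripe_size) % num_targets] += data[i:i + stripe_size]
    return targets

def unstripe(targets, stripe_size: int = STRIPE_SIZE) -> bytes:
    """Reassemble the original file by reading the stripes back in order."""
    out = bytearray()
    offsets = [0] * len(targets)
    t = 0
    while offsets[t] < len(targets[t]):
        out += targets[t][offsets[t]:offsets[t] + stripe_size]
        offsets[t] += stripe_size
        t = (t + 1) % len(targets)
    return bytes(out)

data = b"abcdefghijklmnopqrstuvwx"
parts = stripe(data, num_targets=3)
assert unstripe(parts) == data  # round-trip recovers the original file
```

Because each target holds only every Nth stripe, N targets can serve the file in parallel, which is how aggregate throughput scales with the number of disks and I/O servers.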
“As new systems are deployed at the NCCS, we just plug them into our system area network; it is really about a backplane of services,” Shipman said. “Once they are plugged into the backplane, they have access to Spider and to HPSS [the center's high-performance storage system] for data archival. Users can access this file system from anywhere in the center. It really decouples data access and storage from individual systems.”
Before Spider, each computing platform had its own file system. Once a project ran an application on Jaguar, it then had to move the data to the Lens visualization platform for analysis. Any problem encountered along the way would necessitate that the cumbersome process be repeated. With Spider connected to both Jaguar and Lens, however, this headache is avoided. "You can think of it as eliminating islands of data. Instead of having multiple file systems all within the NCCS, one for each of our simulation platforms, we have a single file system that is available anywhere. If you are using extremely large data sets on the order of 200 terabytes, it could save you hours and hours," Shipman said.
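The "hours and hours" figure is easy to sanity-check with back-of-the-envelope arithmetic. The 200 TB data set size is from the article; the sustained copy rate below is an assumed value chosen only for illustration.

```python
# Back-of-the-envelope estimate of the cost of copying a data set
# between per-platform file systems. The 200 TB figure comes from the
# article; the 10 GB/s sustained rate is an assumption for illustration.

dataset_tb = 200   # data set size, terabytes (from the article)
rate_gb_s = 10     # assumed sustained copy rate, gigabytes per second

seconds = dataset_tb * 1000 / rate_gb_s   # 1 TB = 1000 GB
hours = seconds / 3600
print(f"Copying {dataset_tb} TB at {rate_gb_s} GB/s takes about {hours:.1f} hours")
```

At this assumed rate a single copy costs roughly five and a half hours, and any failure partway through means starting over, which is the repetition Spider's shared namespace eliminates.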
“Spider is one of the most important steps the NCCS has taken toward increasing the scientific productivity of our users,” said Bronson Messer, of the Scientific Computing Group and a participant in the “Three-Dimensional Model of SN1987A Frontier” early science project. “Sophisticated users have been asking for this, while new users I have spoken with immediately see the advantages and become very excited.”
Spider will have both scratch space (short-term storage for files involved in simulations, data analysis, etc.) and long-term storage for each user. Shipman said the technology integration team is now working with Sun to prepare for future NCCS platforms with even more daunting requirements.