July 27, 2009
New optimizations, enhanced I/O increase speed of GTC by more than 100 percent
When it comes to scientific computing, the amount of science reaped from a simulation is largely determined by the speed and scalability of the software. A code's speed, in turn, is often at the mercy of its I/O performance: the more efficient the I/O, the faster the code runs and the more simulations can be completed in a given period of time.
Few codes require faster I/O or scale better than today's fusion particle codes. The Gyrokinetic Toroidal Code (GTC) and XGC-1, for instance, are running on more than 120,000 cores on the National Center for Computational Sciences' (NCCS's) Jaguar Cray XT5 supercomputer, the fastest system in the world for open science with a peak performance of 1.6 petaflops.
"These are the largest runs with the largest datasets," said Scott Klasky of the NCCS and the SciDAC scientific data management center. "And they are at the extreme bleeding edge of scalability and I/O."
Thanks to Klasky and a diverse team of collaborators, GTC recently doubled in speed. That factor of two, said Klasky, was reached not for an idealized benchmark case but for an actual production simulation. This impressive performance is the result of cross-discipline collaborations that have led to significant software and middleware improvements.
The advances combine software enhancements from Cray Inc. with the efforts of physicists (Y. Xiao and Z. Lin of the University of California–Irvine and S. Ethier of Princeton Plasma Physics Laboratory), vendors (N. Wichmann of Cray and M. Booth of Sun Microsystems), and computational scientists (S. Hodson, S. Klasky, Q. Liu, and N. Podhorszki of Oak Ridge National Laboratory [ORNL]; H. Abbasi, J. Lofstead, K. Schwan, M. Wolf, and F. Zheng of Georgia Tech; and C. Docan and M. Parashar of Rutgers).
"In order to advance the science, collaboration is essential," said Lin. "High-performance computing is more than benchmark numbers; it is about advancing scientific breakthroughs and that is accomplished by achieving high performance from both the code and the computing system [Jaguar]."
The various technical improvements include a new Cray compiler, optimizations to the code itself, and further I/O enhancements to ADIOS, the Adaptable I/O System, an I/O middleware package created by Klasky and collaborators at Georgia Tech and Rutgers. From core physicists to programmers to hardware vendors, this group effort cut across organizational and disciplinary lines. "Working with some of the top computational scientists in the world, such as Parashar and Schwan, allows us to bring in new ideas that help enable more science in these codes," said Klasky.
While other members of the collaboration worked on enhancements in their respective areas, the ADIOS team was busy improving the I/O of some of the most scalable codes run at ORNL. In the past, said Klasky, I/O wasn't a major issue simply because simulations had not reached the enormous scales seen on today's most powerful high-performance computing systems. Now, however, fusion simulations generate up to 100 terabytes of data per day.
"Researchers want easy-to-use, fast, scalable, and portable I/O," said Klasky, adding that the team is currently making additional updates to the ADIOS package for improved analysis capabilities. Today's supercomputers can make I/O performance difficult, thus the need for ADIOS, an I/O componentization layer that requires the users to add only a few lines of code to their applications to gain substantial I/O performance.
In part, it works by allowing users to switch among several best-practice I/O methods (ADIOS-BP with MPI-IO, POSIX, parallel HDF5, parallel NetCDF4, or even in situ visualization) without fundamentally changing their codes or recompiling. The new version will greatly ease the I/O burden on users, said Klasky.
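Because the method selection lives in that external XML descriptor rather than in the source, switching backends amounts to a one-line edit. A hypothetical descriptor matching the sketch above might look as follows (the element and attribute names follow the ADIOS 1.x XML format; the specific values are illustrative):

```xml
<?xml version="1.0"?>
<adios-config host-language="C">
  <adios-group name="restart" coordination-communicator="comm">
    <var name="zion" type="double" dimensions="100000"/>
  </adios-group>

  <!-- Change method="MPI" to "POSIX", "PHDF5", "NC4", etc. to switch
       I/O backends without touching or recompiling the application. -->
  <method group="restart" method="MPI"/>

  <buffer size-MB="40" allocate-time="now"/>
</adios-config>
```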
"We are getting close to the peak performance of the I/O system when it comes to reading data in GTC," he said, adding that the GTC code read 100 terabytes of analysis data per hour on Jaguar when reading from 512 cores in a recent simulation. Previously, fusion codes used multiple different file formats, none of which worked well for both small- and large-scale data streams. This was the motivating factor behind ADIOS, which defines a novel, metadata rich, binary-packed "BP" file format capable of writing out GTC data at 80 gigabytes per second on Jaguar's XT5 component.
"ADIOS implements a new file format that was developed specifically to work well with parallel file systems from the ground up," said Klasky. Rutgers' Parashar agreed. "File formats for today's parallel file systems need to be redesigned to get bleeding-edge performance for both reading and writing," he said. "The old-school view that contiguous file formats are best for I/O is being revisited in the context of parallel file systems and is being challenged by new file formats."
"It's all about the science, and the best way to help the scientists is to work with them as a team to develop new and innovative software," said Klasky. These advancements in GTC will eventually find their way into other codes as well, he said, further allowing researchers to probe the complex properties of Mother Nature and tackle today's greatest scientific challenges.
ADIOS is open source and can be obtained from Scott Klasky at the NCCS.