ORNL Closes In On Petascale Computing
As a Department of Energy leadership computing facility, Oak Ridge National Laboratory (ORNL) employs some of the most powerful supercomputers on the planet. Buddy Bland, project director of ORNL’s Leadership Computing Facility, discusses the challenges of computing at very large scale — with “peta” around the corner and “exa” on the distant horizon.
HPCwire: Oak Ridge’s “Jaguar” system is now number two on the Top500 list, compared to number ten last November. That’s a big performance leap. How is this helping your users?
Bland: The huge leap in “Jaguar’s” effective computing power is giving scientists the tools they need to solve really big, important problems — scientifically important problems and, through the industrial portion of the DOE INCITE program, economically important problems as well. That’s the whole reason for the leadership computing initiative that Dr. Orbach put forward and that ORNL won in 2004.
Climate scientists are using the system to develop the next generation of the Community Climate System Model (CCSM). Peter Gent of NCAR [National Center for Atmospheric Research], who is chair of the CCSM Scientific Steering Committee, said that the performance of CCSM on Jaguar was “out of our dreams” at a blistering 40 simulated years per day. He said recent improvement to the simulation of the El Niño/Southern Oscillation in CCSM is the most impressive new result in ten years.
Fusion researchers are using “Jaguar” to simulate the multinational ITER fusion reactor, a device that will bring the world closer to a clean, abundant energy source by heating an ionized gas ten times hotter than the core of the sun. The AORSA fusion application has achieved 87.5 teraflops on “Jaguar” for the dominant computational kernel. This is 74 percent of the system’s theoretical peak.
On the industrial side, a team led by Jihui Yang of General Motors is using the system to perform first-principles calculations of thermoelectric materials capable of turning waste heat into electricity. The team’s goal is to help automakers capture the roughly 60 percent of an engine’s energy that is currently lost as waste heat and use it to boost fuel economy. These calculations would not have been possible if the scientists had not had access to the leadership computing resources of the Energy Department. This is another great example of how computational simulation can contribute to scientific advances and energy security. There are many more examples.
HPCwire: You serve a relatively small number of users who have really big problems, meaning codes that exploit a large fraction of your systems. What special things do you do to serve these high-end users?
Bland: It takes a lot of personal attention. Computers at the scale of the top five of the Top500 list are so much larger than what most people have ever had access to. To increase ease of use and productivity, we established our Scientific Computing Group, led by Dr. Ricky Kendall. Members of this group act as liaisons between the computer center and the computational projects. They have Ph.D.’s in relevant scientific disciplines and many years of experience working with high-performance computers. They help users port, tune and optimize their codes, which often requires modifying, augmenting or replacing algorithms and their implementations. Only a modest number of existing codes have parallelized their I/O, so getting data into and out of the computers can be a serious issue. Providing this kind of expert assistance to each code team and working closely with them is one of the real keys to making these leadership-class machines productive.
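The parallel-I/O issue Bland mentions comes down to letting every process write its own piece of a shared dataset without funneling everything through one node. A minimal sketch of that pattern — each "rank" writing a fixed-size record at a disjoint offset in one shared file — is shown below in plain Python; a real HPC code would do the equivalent with MPI-IO calls such as MPI_File_write_at, and the file name and record size here are illustrative assumptions.

```python
import os

def write_slice(path, rank, data, record_size):
    """Each 'rank' writes its fixed-size record at a disjoint offset,
    so every writer touches a non-overlapping region of one shared
    file -- the core decomposition behind parallel I/O libraries.
    (Plain-Python simulation, not actual MPI-IO.)"""
    payload = data.ljust(record_size, b"\0")[:record_size]
    with open(path, "r+b") as f:
        f.seek(rank * record_size)   # offset determined by rank alone
        f.write(payload)

def parallel_write(path, chunks, record_size=16):
    """Pre-size the file, then let every rank fill in its own region.
    Because the regions are disjoint, no coordination is needed."""
    with open(path, "wb") as f:
        f.truncate(len(chunks) * record_size)
    for rank, data in enumerate(chunks):
        write_slice(path, rank, data, record_size)
```

In a real code each rank would execute `write_slice` concurrently on its own node; the disjoint offsets are what make that safe.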
Equally important is our User Assistance and Outreach Group, led by Dr. Julia White. This group is intimately familiar with the day-to-day functioning of the machines. Group members help our users fix broken code and ensure the codes are behaving as intended. These two groups and their dedication to delivering successful science for LCF users are especially important because state-of-the-art supercomputers, like all high-performance machines, can be very unforgiving.
HPCwire: There’s a Cray Center of Excellence at ORNL. What role does that play?
Bland: Cray’s John Levesque heads this center, which Cray established in collaboration with ORNL to accomplish several things. The center’s most important function is working closely with the users and with Ricky Kendall’s group to port, tune and optimize the codes, and to understand the algorithms. John Levesque and his colleagues take what they learn from this process back to Cray’s computer designers, who use it to design future-generation computers that can run these problems even faster, along with solving new and different problems. The goal is to create a better mapping between the algorithms and the machines and reduce overall time to solution. Another important contributor is Luis DeRose, Cray’s head of tools, who is also on the staff of the Center of Excellence. He studies how well the Cray tools work and what other tools are needed by our user community. Cray acts as a true partner, not just a manufacturer of the computers.
HPCwire: What are your current plans for getting to a peak petaflop?
Bland: Cray’s code name for the follow-on to the XT series is “Baker.” We have a contract for a “Baker” system, which is expected in late 2008 or early 2009. It will be first in a series of Cray machines based on technology that will go into the “Cascade” system Cray is developing under the DARPA HPCS [High Productivity Computing Systems] program. “Baker” will be a peak petaflops machine.
HPCwire: Rumor has it that ORNL surveyed its user community to identify which codes would be the best candidates for 250 teraflops and 1 petaflops performance. Can you say more about this?
Bland: An important part of bringing these machines to readiness is that whenever a very large machine comes online, some time passes between delivery and acceptance. During this time, we run a suite of applications to understand how well the machine is working. But we don’t only want to run problems we already know the answers to. We also need a suite of applications we can use to accomplish new science. We are working with users in the DOE and other agencies through the INCITE process to identify applications that are early candidates for these machines and that have the potential to accomplish groundbreaking science. We are seeing which applications have technical readiness and need access to these large machines. Technical readiness means that the algorithms are likely to work at tens or hundreds of thousands of cores. We have a number of applications today that are exploiting all 23,000 “Jaguar” cores with good scaling. There’s a reasonable chance these will run well on even larger machines.
HPCwire: Will some of these candidate codes come from industry?
Bland: DOE’s INCITE program provides access to leadership-class machines for users from government, academia and industry. Codes from all of these areas will be eligible.
HPCwire: In your opinion, what are the biggest challenges to achieving sustained petaflops performance on real-world applications?
Bland: An incredible amount of parallelism needs to be found and exploited to effectively use sustained petaflops machines with tens or hundreds of thousands of cores. Today’s approach of combining MPI with OpenMP on an SMP machine is a relatively low-level way to program. The real challenge is finding appropriate programming models, such as the HPCS languages or others. Another major problem is going to be fault tolerance at this scale. Some applications will need weeks or more to run. We need a way to generate correct answers even when some components are not fully functional. This is a major research topic today. The DOE is investing a lot in research on fault-tolerant computing.
HPCwire: Your “Jaguar” system is an Opteron-based Cray XT3/XT4, and your “Phoenix” system is a Cray X1E vector machine. How is this hybrid approach working?
Bland: The combination works very well. “Phoenix” is relatively small compared to machines at the top of the Top500, although it’s in the top 100. Its vector processors run certain applications exceptionally well, including some fusion calculations that exploit the memory bandwidth of this machine, and some climate modeling codes. There’s a need to maintain vector machines. Cray’s strategy of integrating many different processor types in a common infrastructure — multithreaded, vector, maybe also special purpose as well as scalar processors — is interesting. We’ll be working closely with Cray on how best to apply hybrid computers like this to scientific applications.
HPCwire: Have you encountered any surprises in running codes or benchmarks at very large scale?
Bland: Running Linpack to get to number two on the Top500 list took approximately 18 hours. The first time we ran it, we got a residual number that was very large. We had seen problems before with running Linpack and suspected a hardware problem. We spent a couple of days trying to diagnose this and found we had broken the Linpack benchmark code. We had exceeded the periodicity of the 32-bit random number generator in Linpack. It wasn’t a big deal for Linpack, in the sense that Jack Dongarra is modifying the sample code to correct this. What happened is more important as a reminder that when you’re dealing with very large calculations, it’s critically important to pay close attention to the mathematical techniques you use, to make sure you end up with mathematical results that are reasonable.
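The failure Bland describes is easy to reproduce at toy scale: a pseudo-random generator with a 32-bit state can emit at most 2^32 distinct values before the stream repeats exactly, and a test matrix with more entries than the period contains duplicated values, which can make it nearly singular and blow up the residual. The sketch below (not the actual Linpack generator; the constants are standard linear-congruential parameters) shrinks the state to 8 bits so the cycle is visible.

```python
def lcg(seed, a=1664525, c=1013904223, bits=8):
    """A linear congruential generator with a `bits`-bit state.
    The state space holds only 2**bits values, so the output
    sequence must repeat with period at most 2**bits."""
    mask = (1 << bits) - 1
    state = seed & mask
    while True:
        state = (a * state + c) & mask
        yield state

# With an 8-bit state, the stream starts over after 256 draws --
# the same wall a 32-bit generator hits after 2**32 draws, which a
# petascale-sized Linpack matrix easily exceeds.
gen = lcg(seed=42, bits=8)
draws = [next(gen) for _ in range(512)]
assert draws[:256] == draws[256:512]  # the sequence repeats exactly
```

Scaling the same arithmetic up, a matrix of order a few hundred thousand has on the order of 10^11 entries, comfortably past the 2^32 ≈ 4.3 × 10^9 period of a 32-bit generator.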
HPCwire: In sum, how is ORNL’s evolution into a leadership-class computing facility working out in practice?
Bland: Very well. The work with the Scientific Computing Group has turned out to be a critical aspect of being able to use these very large machines and allocations. Our partnership with DOE for INCITE has been very effective. Ray Orbach started this at NERSC and it worked well, but the demand for time has always exceeded the supply, so creating the DOE leadership centers and making these machines available to both DOE and non-DOE users has been a very good thing. The quality of the machines is high, and the applications are running well and getting great results.
HPCwire: What’s next for ORNL?
Bland: The DOE Office of Science recently held a series of workshops on exascale computing at ORNL, Argonne and Berkeley. We’re all trying to understand the challenges and the issues. We’re very interested in how to continue this exploration.