February 22, 2008
With its planned upgrade to a petaflop computer not far off, Oak Ridge National Laboratory (ORNL) surveyed a broad user base to analyze and understand application requirements for these leadership systems. ORNL's Doug Kothe debriefed HPCwire on the findings.
HPCwire: What is your role at ORNL?
Doug Kothe: I'm director of science for the National Center for Computational Sciences (NCCS) at ORNL. My job is to facilitate the applications: porting, optimizing, improving existing algorithms, adding new algorithms, and frankly anything else needed to help our users achieve the best science output possible. It's a great job that keeps me close to the breakthrough research, although in this role I do not have as much time as I used to for writing scientific code myself.
HPCwire: Why did the NCCS undertake this study? What were the goals?
Kothe: The survey's goal was twofold: first, to elicit and analyze scientific application requirements for current and planned leadership systems out to the petascale; and second, to identify applications that would qualify for early access to ORNL's 250-teraflop and 1-petaflop systems. Identifying user requirements for future-generation HPC systems is part of ORNL's original charter as a DOE Leadership Computing Facility. My job is to implement this process so the NCCS can select the appropriate HPC resources on behalf of the DOE Office of Science and our users.
I chair our Applications Requirements Council, which works with the scientific projects we host to identify their specific requirements. The council incorporates these requirements into a document we hand off to the NCCS Technology Council. The Applications Requirements Council's role is to provide tactical, year-to-year input that helps the Technology Council take the longer view on technology acquisition and deployment and think strategically about next-generation architectures.
HPCwire: What do you mean by "next-generation architectures"?
Kothe: That generally refers to architectures that will be available in the next 1 to 3 years, so they're reasonably well defined. We have an opportunity to influence the generation after the next generation by working with the HPC vendors.
HPCwire: Are you already looking as far ahead as exascale systems?
Kothe: DOE and many of the agencies are already looking at exascale system requirements at a high level. Researchers at the leading edge of scientific discovery are demanding systems with greater and greater capability. What disruptive technologies will we need in order to provide the most effective resources? The next-generation systems after petaflop machines will probably be in the 10- to 30-petaflop peak range. The science, engineering, and national security drivers for these systems, on up to exascale systems, are very compelling.
HPCwire: Who was asked to participate in the application requirements study? Was it limited to ORNL's on-site user base, or did the surveyed group go beyond that?
Kothe: We surveyed our nationwide project base first and foremost. This group represents researchers from DOE's Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. We also reached out to other projects we were aware of that were developing and using large scientific codes, such as those with NSF center allocations. In total, we sent the survey out to more than 30 teams: 22 INCITE projects and 8 or 9 other projects. A single project could involve a few people or dozens of people.
HPCwire: How did the survey process work?
Kothe: It began in our Applications Requirements Council, which includes a representative from every current INCITE project hosted by the NCCS, along with technical staff members from the NCCS's Scientific Computing Group. There are about 40 of us in the council, and we hold telecons periodically to discuss the process of gathering and understanding requirements. Our requirements-gathering process started with more general queries, but we soon found that the best approach was to ask specific, compelling, and fairly direct questions. For example, in the early-access or "pioneering applications" portion of the questionnaire, we asked the scientists to quantify their most compelling and difficult scientific challenges and the results they might expect if they had exclusive short-term access, a few weeks or less, to 250-teraflop and 1-petaflop systems.
HPCwire: How detailed did the survey get?
Kothe: The questions ranged from general items like the scientific impact and science drivers to things as specific as the nature of the applications, the algorithms, their scalability, and other attributes and requirements.
We asked them, for example, "What science problem would you simulate? What are the problem's attributes? What do your algorithms look like now? Are there any issues or challenges you'd need to address to do this simulation?"
HPCwire: Was this an email survey?
Kothe: Yes. We passed around the questionnaire via email. The next step will be to put the requirements survey form online to allow us to continue to gather feedback from an even broader user base.
HPCwire: How many of the applications qualified for early access consideration?
Kothe: We accepted everything for consideration. We didn't eliminate any codes the users submitted, because our role here was to collect and analyze the data, as quantitatively as possible, and submit it to our sponsor, the Office of Advanced Scientific Computing Research in the DOE Office of Science, for a final decision. Qualification was based on the scientists' responses. Everyone who responded was very optimistic about their ability to exploit the platform, but we approached some of the top scientists in the country, so that was not surprising.
HPCwire: Was anything surprising?
Kothe: Yes. There was a lot more commonality in the application algorithms and software implementation than we imagined. Many of these codes use the same math libraries, the same languages and compilers, and so on.
HPCwire: Were many of these applications ones that are being run today on your big systems?
Kothe: Absolutely. A large fraction of the submitted applications, at least 20 of them, are ones we already have experience with. This is of special interest because with this survey we were looking at how to plan just a year or so ahead, when we expect to have systems with these higher performance levels.
HPCwire: Are there many applications that scale well today on your supercomputers?
Kothe: There are probably at least a dozen I'm aware of where a single job can use a large fraction, 50 to 75 percent, of our processors today. We're currently upgrading our Cray XT4 Jaguar system from 119 teraflops to more than 250 teraflops, about 32,000 AMD Opteron cores. A number of codes can use all the cores we can provide, and use them fairly well: jobs from any of these codes can scale to run on most, if not all, of the system. I'm pleased that so many applications can use our big resources.
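The figures above lend themselves to a quick back-of-envelope check. The short Python sketch below is a minimal illustration using only the peak and core counts quoted in the interview; it computes the implied per-core peak and the core counts a job occupying 50 to 75 percent of the machine would use:

```python
# Back-of-envelope check on the Jaguar figures quoted in the interview.
PEAK_TFLOPS = 250      # quoted peak of the upgraded Cray XT4 Jaguar
TOTAL_CORES = 32_000   # quoted AMD Opteron core count

# Implied peak performance per core, in gigaflops.
per_core_gflops = PEAK_TFLOPS * 1e3 / TOTAL_CORES
print(f"Implied peak per core: {per_core_gflops:.1f} gigaflops")

# Core counts for a single job using 50 to 75 percent of the machine,
# the fraction Kothe cites for the most scalable codes today.
for frac in (0.50, 0.75):
    print(f"{frac:.0%} of the system = {int(TOTAL_CORES * frac):,} cores")
```

In other words, a single highly scalable job at that time would occupy roughly 16,000 to 24,000 cores.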
HPCwire: Did you also ask the scientists to talk about their computational and technology requirements? Did you encounter a language barrier when you did this?
Kothe: Not exactly, but scientists' understanding of the same terms could differ. We didn't ask detailed questions about, say, the interconnect technology. Even terms like bandwidth in gigabits per second are not always the best way to connect with the scientists. We act as middlemen between the applications and the hardware specs. That's our job.
HPCwire: Were there other important findings?
Kothe: An important conclusion is that we cannot expect application code developers to rewrite codes from scratch to achieve better scaling or parallel performance. Large-scale codes can easily have useful lifetimes of 20 to 50 years, with the first 5 to 10 years, and even more person-years of effort, often needed just to reach code maturity. We must work with code developers to help them refactor their existing code base to boost performance. While refactoring is likely the preferred approach on petascale systems, application developers may have to undertake substantial rewrites of their codes given what we see coming at the exascale.
HPCwire: You asked what additional fidelity, in terms of the physical models and numerical algorithms, people expected for their codes on a 1-petaflop system compared with a 25-teraflop system today. Can you cite some examples?
Kothe: Sure. For the CHIMERA astrophysics code, the expectation is to increase the number of variables from 63 today to more than 1,000. With the LAMMPS biology code, today the users are modeling the dynamics of 700,000-atom systems for 5 to 10 nanoseconds of model time per day of simulation time. With a petaflop system, users hope to increase to modeling multimillion-atom systems for 0.1 to 1.0 microsecond per day of simulation time. Using the CCSM climate model, users could add 100 species of tropospheric chemistry, dynamic vegetation, terrestrial carbon pools, the full sulphur cycle, and many other elements. And with the fusion codes like GYRO and GTC, quantitative ITER performance predictions could start to become a reality. Users will be able to do more truly predictive simulation. Those are major leaps forward that have important scientific and societal implications.
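The LAMMPS numbers above also permit a rough consistency check against raw hardware growth. The sketch below is illustrative only: the peak ratio and today's figures come from the interview, while the 2-million-atom target size is a hypothetical stand-in for "multimillion-atom systems," with midpoints taken for the quoted ranges:

```python
# Rough consistency check on the quoted petascale expectations.
# Peak ratio and today's LAMMPS figures are from the interview; the
# 2-million-atom target is an assumed stand-in for "multimillion atoms".

peak_ratio = 1000 / 25       # 1 petaflop vs. 25 teraflops: 40x peak

# Simulated work per day, measured crudely as atom-nanoseconds.
today  = 700_000 * 7.5       # 700,000 atoms at 5-10 ns/day (midpoint)
target = 2_000_000 * 550     # assumed 2M atoms at 0.1-1.0 us/day (550 ns midpoint)

print(f"Raw peak increase:       {peak_ratio:.0f}x")
print(f"Implied throughput gain: {target / today:.0f}x")
```

Even under these illustrative assumptions, the hoped-for gain outstrips the 40x raw peak increase, consistent with Kothe's earlier point that refactoring and algorithmic work, not hardware alone, will be needed.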
HPCwire: What did you learn that might help other sites interested in identifying petascale applications? Do you also expect to learn from how they conduct their studies?
Kothe: Well, I'm no longer surprised that, to my knowledge, this hasn't been done in detail before. A survey like this is hard. It requires creating a two-way street between the systems and the applications. As I said earlier, we also learned there is more commonality than we expected across all the sciences: in the way most of the codes are implemented, in the mathematical middleware that everyone needs, and in other respects.
HPCwire: Do you view this study as an end product or a beginning? Will you do another study on this topic later on?
Kothe: We've committed to doing some variation of this survey and its associated analysis every year. Some years it might be just an update. We view the survey as an evolving requirements form that will be refined over time. It's a far-from-perfect process, and we welcome ideas. We want to make it more quantitative to get more actionable answers. The report generated from the survey is fairly detailed. It's over 100 pages long. The document can be seen on the NCCS's Web site at http://www.nccs.gov/media-center/nccs-reports/.
HPCwire: Was there anything important that we missed?
Kothe: Just that the HPC community has an opportunity to come together to maximize the science output of tomorrow's hardware systems. We all have good ideas, and we can share them and come together on this. This has to do with the interface between the hardware, the system software and applications, and the scientists using the applications. HPC centers will collaborate to optimally map the applications to the platform and continue to work with the researchers and vendors to ensure that the science demands of today and the future are met with leadership computing resources.