Software Tools Will Need Refresh for ORNL’s Titan Supercomputer

By Eric Gedenk

November 21, 2011

Application tools critical as programs move toward exascale computing

Oak Ridge National Laboratory’s (ORNL’s) National Center for Computational Sciences is a Department of Energy (DOE) supercomputing center that houses the Oak Ridge Leadership Computing Facility (OLCF) and the Jaguar supercomputer, a Cray XT5 capable of more than 2.3 petaflops, or 2.3 quadrillion calculations per second. In 2012 the OLCF will begin to upgrade Jaguar, boosting its computational ability by up to tenfold. The upgrade will transform Jaguar into Titan, a computer capable of 10 to 20 petaflops and the center’s next premier resource for scientific computing. Titan’s arrival will bring fundamental changes to the OLCF’s supercomputing operation, primarily due to the incorporation of hybrid computing architectures that feature both central and graphics processing units (CPUs and GPUs).

Richard Graham heads the OLCF’s Application Performance Tools Group, which identifies software tools and missing tool capabilities to help science and engineering researchers improve the performance of applications that run on leadership-class computers. The group focuses on four main kinds of tools: compilers, which transform software languages such as Fortran into instructions that a computer understands; debuggers, which help identify errors in users’ source codes; performance analysis tools, which help users understand the performance characteristics of their applications; and communication libraries, which direct communications between the compute nodes of a machine.

Prior to joining ORNL, Graham served as the acting group leader of Los Alamos National Laboratory’s Advanced Computing Laboratory and cofounded the Open MPI project, an effort to unify message passing interface (MPI) software across multiple platforms. Graham also worked for Cray Research and SGI.

In this interview Graham discusses the challenges presented by new hybrid computer architectures such as Titan’s. His group’s goal is to make sure that the OLCF is prepared to offer researchers the most up-to-date and efficient tools possible to make effective use of a new high-performance computing (HPC) environment.

HPCwire: How do you assemble the tools to shift computing architectures?

Richard Graham: First of all we need to determine the hardware characteristics because those ultimately determine what can be done and what the software can potentially do. That’s the first step, understanding the new hardware and how it’s different from previous hardware. Then we decide which tools—pieces of software that enable application scientists to do the work they want to carry out—are of interest to us, and whether these current tools are sufficient. And by sufficient I mean in the context of our production environment, Jaguar. If they are not, we need to understand if we can enhance the current tool set, or if we need to go out and see if there is something else out there. If there isn’t, we obviously need to figure out how to fill the gap. The first thing we try to do is decide if there is a starting point we can use, and if there is not, then we need to create one. That involves talking with vendors and universities, understanding their plans, and understanding what they currently have that could be of use to us.

In terms of computational characteristics, you need a lot of available, vector-like parallelism in your application and the ability to do a lot of the same computations in parallel, so any code that has nice loops with no data dependencies across iterations is very well suited for hybrid architectures. You also need to understand how to map that parallelism to the underlying hardware. The big issue, though, is that moving the data from the CPU to the GPU takes a long time. Ideally you want to keep the GPU occupied [with large computation] to hide the cost of data transfer between the GPU and CPU. So you either keep data permanently on the GPU so you don’t have to transfer a lot of it, or else you have to have a sufficient amount of work to keep the GPU busy and hide the cost of moving data onto the GPU. New data or work decomposition schemes need to be explored.
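
Editor’s Note: To make the data-transfer point concrete, here is a minimal, illustrative CUDA sketch (not from the interview; all names are hypothetical). An element-wise loop with no dependencies across iterations is offloaded to the GPU, and the arrays are split into chunks across several streams so that copying one chunk can overlap with computing on another.

    // Illustrative sketch: independent per-element work offloaded to the GPU,
    // with chunked, stream-based transfers so copies overlap with compute.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void scale_add(const float *x, float *y, float a, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];   // no dependence across iterations
    }

    int main(void) {
        const int n = 1 << 22, chunks = 4, chunk = n / chunks;
        float *hx, *hy, *dx, *dy;
        cudaMallocHost((void **)&hx, n * sizeof(float));  // pinned host memory
        cudaMallocHost((void **)&hy, n * sizeof(float));  // enables async overlap
        cudaMalloc((void **)&dx, n * sizeof(float));
        cudaMalloc((void **)&dy, n * sizeof(float));
        for (int i = 0; i < n; i++) { hx[i] = 1.0f; hy[i] = 2.0f; }

        cudaStream_t s[4];
        for (int c = 0; c < chunks; c++) cudaStreamCreate(&s[c]);
        for (int c = 0; c < chunks; c++) {
            int off = c * chunk;
            size_t bytes = chunk * sizeof(float);
            // While one chunk computes, the next chunk's copy is in flight.
            cudaMemcpyAsync(dx + off, hx + off, bytes, cudaMemcpyHostToDevice, s[c]);
            cudaMemcpyAsync(dy + off, hy + off, bytes, cudaMemcpyHostToDevice, s[c]);
            scale_add<<<(chunk + 255) / 256, 256, 0, s[c]>>>(dx + off, dy + off, 2.0f, chunk);
            cudaMemcpyAsync(hy + off, dy + off, bytes, cudaMemcpyDeviceToHost, s[c]);
        }
        cudaDeviceSynchronize();
        printf("y[0] = %f\n", hy[0]);   // expect 4.0 = 2*1 + 2
        cudaFree(dx); cudaFree(dy); cudaFreeHost(hx); cudaFreeHost(hy);
        return 0;
    }

The alternative Graham mentions, keeping data resident on the GPU, simply drops the per-chunk copies after an initial upload.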

HPCwire: How will Titan’s architecture change the way supercomputers operate? 

Graham: The big difference is that we have two very different computing capabilities on a single node, which comprises an AMD CPU and an NVIDIA GPU. The CPU is for general-purpose calculations. We’re very familiar with CPUs in the sense that we know how to analyze what happens on them to a certain degree. Then you have a very different accelerator in the GPU that has the potential for very high performance but has less capability than the general-purpose CPU in the sort of operations it can perform. The GPU schedules operations in a certain way and tries to position itself to run effectively in parallel. So the challenge is how to use both the CPU and GPU efficiently in a general-purpose computing environment.

From a tools perspective, the major difference is that there is a lot less support for GPU tools than there is for CPU tools in the HPC environment, and even fewer tools target both. This is because few tools have been ported to the GPU environment, and less detailed information is made available by the GPUs. There is also a “knitting together” in the way applications tend to use these things: they use CPUs with GPUs as accelerators to do certain portions of the work, so the data from the two types of processors needs to be merged if you’re trying to look at overall utilization and get an overall picture of how the application is running.
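
Editor’s Note: A toy illustration of why merging the two views is nontrivial (this is not how any production tool works): host activity and device activity are measured by different mechanisms, here a CPU clock and CUDA events, and a tool must line up the two timelines afterward.

    // Toy sketch: time host work with a CPU clock and device work with CUDA
    // events; a real tool must merge such measurements into one timeline.
    #include <cuda_runtime.h>
    #include <chrono>
    #include <cstdio>

    __global__ void busy(float *x, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            for (int k = 0; k < 200; k++) x[i] = x[i] * 1.0001f + 0.5f;
    }

    int main(void) {
        const int n = 1 << 20;
        float *d;
        cudaMalloc((void **)&d, n * sizeof(float));
        cudaEvent_t gstart, gstop;
        cudaEventCreate(&gstart);
        cudaEventCreate(&gstop);

        auto cstart = std::chrono::steady_clock::now();
        cudaEventRecord(gstart);                  // start of GPU activity
        busy<<<(n + 255) / 256, 256>>>(d, n);
        cudaEventRecord(gstop);                   // end of GPU activity
        // ... host-side work would run here, concurrently with the kernel ...
        cudaEventSynchronize(gstop);
        auto cstop = std::chrono::steady_clock::now();

        float gpu_ms = 0.0f;
        cudaEventElapsedTime(&gpu_ms, gstart, gstop);
        double cpu_ms =
            std::chrono::duration<double, std::milli>(cstop - cstart).count();
        printf("CPU wall time %.2f ms, GPU kernel time %.2f ms\n", cpu_ms, gpu_ms);
        cudaFree(d);
        return 0;
    }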

HPCwire: What contributed to petascale success, and what is shaping the push to exascale?

Graham: I think the major contributors to petascale success were good programming models, languages, libraries, and optimizing compilers, which take an abstract programming language and turn it into a set of instructions computers understand. If you’re looking at it from a tools perspective, performance analysis tools are also needed, but without compilers we couldn’t run the codes as we do now. Debuggers are important, but up until a year ago, debuggers did not run at scale. That’s actually one of the achievements we’ve had on a project at the OLCF. With one of our partners, Allinea, we’ve really changed the debugging paradigm by scaling up a debugger called DDT. We’ve been able to basically debug at full scale on Jaguar, even though three years ago people claimed you couldn’t do parallel debugging beyond several hundred to several thousand processes. Now, my group routinely debugs parallel code at over 100,000 processes using DDT. It’s much more effective than trying to use the old techniques. No other debugger can even come close to DDT’s performance, so obviously it’s a hit with users.

As part of the OLCF 3 project, we’ve been working with different software vendors. One, CAPS Enterprise, is a compiler company out of France that produces the HMPP [hybrid multicore parallel programming] compiler, which targets GPUs. We’ve been working with them for two years now, enhancing their compiler to meet our needs, and we’ve been very pleased with the partnership. The work has added significant capabilities to the compiler that help us incrementally transition our existing applications to accelerator-based computers, and it has led to some nice performance enhancements. HMPP is one of several compilers that we will support on the system.
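
Editor’s Note: HMPP’s appeal for incremental porting is that the loop stays ordinary CPU code and directives ask the compiler to generate and call a GPU version. The sketch below is illustrative only; it follows the codelet/callsite model from CAPS’s documentation, but the exact directive spelling may differ by HMPP release, and all names are hypothetical.

    /* Illustrative HMPP-style directives: the function body is unchanged
     * C code; the pragmas request a GPU "codelet" and its use at one call. */
    #pragma hmpp saxpy codelet, target=CUDA, args[y].io=inout
    static void saxpy(int n, float a, const float *x, float *y) {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];    /* original CPU loop, untouched */
    }

    int main(void) {
        enum { N = 1 << 20 };
        static float x[N], y[N];
        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        /* Request the accelerated version here; a compiler that ignores
         * the pragma simply runs the plain CPU loop, which is what makes
         * the porting incremental. */
        #pragma hmpp saxpy callsite
        saxpy(N, 2.0f, x, y);
        return 0;
    }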

We have also been working on scalability, because as computer architectures get bigger, it becomes a real issue. Another critical piece is the Vampir suite of performance analysis tools, which comes out of the Technical University of Dresden. The Vampir tools perform what is called trace-based performance analysis, which collects performance data in the context of the call stack, not only the program counter. The emphasis is on adding capabilities to simultaneously collect data from CPUs and GPUs, but the developers are also doing a lot of scalability work. They recently decided to work with Terry Jones, a member of my group, who in the context of another DOE-funded project helped develop the means to transfer data from hosts to collectors. Basically they’ve been able to run trace-based analysis on applications at 200,000 processes. The previous record was on the order of 100,000, and it was very slow.
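
Editor’s Note: For readers unfamiliar with trace-based analysis: rather than periodically sampling the program counter, tracing records timestamped enter/exit events for regions of code, so each measurement carries its calling context. A minimal sketch, assuming VampirTrace’s documented VT_USER_START/VT_USER_END markers (the file name is hypothetical; compile with the VampirTrace compiler wrapper, e.g. vtcc -DVTRACE trace_demo.c, and open the resulting trace in Vampir):

    /* Minimal manual instrumentation with VampirTrace: each marked region
     * becomes a timestamped enter/exit pair in the trace, in call context. */
    #include <stdio.h>
    #include "vt_user.h"

    static double work(int n) {
        VT_USER_START("work");              /* region entry event */
        double s = 0.0;
        for (int i = 1; i <= n; i++)
            s += 1.0 / i;
        VT_USER_END("work");                /* region exit event */
        return s;
    }

    int main(void) {
        VT_USER_START("main_loop");
        double total = 0.0;
        for (int step = 0; step < 4; step++)
            total += work(1000000);
        VT_USER_END("main_loop");
        printf("%f\n", total);
        return 0;
    }

At the scales Graham describes, the hard part is not recording these events but moving and storing them for hundreds of thousands of processes, which is where the host-to-collector transfer work comes in.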

Before this effort people didn’t really consider doing this type of analysis beyond maybe several thousand processes, so there has been a significant advance in capabilities. This group continues to work on making data collection more practical. We have also emphasized the integration among the different tools of the programming environment.

A common trait behind all three of these collaborations is that we went with companies that already had an existing product, so we weren’t starting from scratch. The second thing is that, because they were established companies, they already had a support infrastructure in place. The third thing is that they were very willing to include enhancements for our needs in their products, so we were really funding improvements to their main product lines, which is also very beneficial to us.

HPCwire: How will you help get users up to scale with Titan?

Graham: I think the first problem is in current compilers and runtime environments—things that give users access to the system. Right now most of these components are very primitive, and for a lot of codes there is extensive restructuring you have to do manually. The real need is for a set of compiler-based code-transformation tools that will simplify the process and automate as many of the transformations as possible. But before we get there, a big issue is the lack of widely accepted programming models to make this possible. There are some standardization efforts under way, but they’re far from completion, and we will have to see how users take to them. There are parallel languages that people can use, but they’re not widely used. Chapel is one that people keep pointing to, as it was developed in the context of high-performance computing. You also have Fortran bringing in newer language revisions that could help to a certain degree.

Historically these shifts in computing are nothing new. This is the way it’s been for a long time; I can remember the transition from vector processing to the sort of computing that we do now—parallel processing on microprocessors. It took about 10 years to make that transition, and by that I mean for a large body of code to run well. So it may take another 10 years for microprocessor-based architectures to fully transform into some sort of heterogeneous multicore computer system. It is not going to be pleasant. It’s going to be very expensive, and application developers really need something to help in that process.

Thankfully there is a research community out there that is interested in looking at these sorts of problems. People have been thinking about these types of issues, so there is definitely a drive to overcome the obstacles. This is not something that will be done overnight, and it is not just a technical challenge; you also have to get application developers to use what is being produced. I’m sure there are different views on how to go about this, and I think there are good ideas out there. It’s just a matter of people having the time to do something with the ideas and then turning those ideas into things that a commercial company is willing to support, because without that, they are just another set of nice ideas that never influence the community.
