“HPC Matters!” was the big, bold title of a talk by Piyush Mehrotra, division chief of NASA’s Advanced Supercomputing (NAS) Division at its Ames Research Center, during the meeting of the HPC Advisory Council at Stanford last week. At the meeting, Mehrotra offered a glimpse into the state of supercomputing at NASA—and how its systems are being applied.
“The NASA Ames supercomputing facility that is located just down the road from you guys there at Stanford is NASA’s premier supercomputer center,” Mehrotra said. “There’s another one at NASA Goddard [in Maryland] which is slightly smaller in size, focused more on the earth sciences.”
“Our charter is to support all of NASA’s missions,” he continued. “At any point, we have about 1,500, 1,600 users, with about 600 projects—science and engineering projects which are spread across all four mission directorates that NASA supports on the science side.”
NASA’s HPC operations
The center, he explained, has three main systems: Pleiades, launched in 2008, delivers 5.95 Linpack petaflops (81st on the Top500); Electra, launched in 2016, delivers 5.44 Linpack petaflops; and Aitken, launched in 2019, delivers 13.1 aggregate peak petaflops following a recent expansion.
“Some of you may have used this—Pleiades was our main system until recently,” he said. “Aitken … has taken over as our biggest system as of, actually, last week.” He said the system now rates at 6.39 Linpack petaflops. Prior to the expansion, Aitken placed 84th on the Top500 list with 5.80 Linpack petaflops.
The newer systems—Electra and Aitken—are housed in a modular supercomputing facility, which Mehrotra said saves on time and capital expenditure when installing new systems.
“We can [also] leverage California’s great weather … to be much more eco-friendly,” he said. “So instead of having big huge chillers and air conditioning units to cool Pleiades in the building, [for] Electra and Aitken, what we have been doing is using outside air 90 percent of the time to cool the systems. When it becomes too hot—above 85 degrees [Fahrenheit], like it was yesterday—then we have fiber curtains where water flows through them, and as the water flows through them … that cools down the air, which then cools the systems.”
(“Aitken,” he said, “has a similar kind of facility, but it’s slightly different, with a closed water loop which cools the actual chips.”)
Mehrotra explained that Electra was the pilot for this modular approach, but that NASA now has a one-acre pad in place for modules—the first of which is already completely full, with a second on the way.
The facility’s supercomputers are supplemented by 50-60 petabytes of primary storage with about an exabyte of tape storage. They are also complemented by NASA’s massive, 128-screen visualization system, called the Hyperwall, which is used for displaying NASA’s simulations.
“And of course we have some amount of GPUs [and] other smaller systems,” Mehrotra said. “GPUs are not a big part of my environment because we haven’t seen a lot of usage of GPUs at this point for traditional scientific applications. The use for AI and ML is increasing, and as it increases we’ll add more GPUs, too.”
A constellation of science in support of NASA’s strategy
“NASA’s strategic plan is organized around four themes, with the vision of exploring the secrets of the universe for the benefit of all,” Mehrotra explained, outlining each: “discover,” “explore,” “develop … the technologies of tomorrow,” and “enable … the capabilities, workforce and facilities that allow NASA to achieve its mission.”
“High-performance computing actually lands in this ‘enabling technologies’ piece,” he said, adding that it affects the other themes as well. To illustrate the impact of HPC at NASA, Mehrotra highlighted a series of projects run on NASA’s supercomputers.
The first project he described focuses on the launch environment for NASA’s new rockets. “They have been simulating the launch environment to help design a new one,” he said. “The idea here is that the launch environment that has been traditionally used seems to be inadequate for the new rockets, which are much larger, and so generate much higher temperatures. … So this simulation is a computational fluid dynamics simulation that is done to assist the designers.”
“Another project is the space launch system itself,” he continued. “They are simulating what happens as a solid rocket booster separates. So there are little rocket engines that are initiated to separate the rocket booster from the core stage, and as you can see, that causes turbulence as it is going and affects the boosters as they’re separating out.” The danger, he explained, was that the boosters could actually tumble back and collide with the core stage, causing a midair disaster—so the researchers want to analyze how the boosters move through the air after separation.
“This is … about 5,000 cores being used for a couple of weeks for each of these simulations,” Mehrotra said.
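As a rough worked figure (our arithmetic, not Mehrotra’s), taking “about 5,000 cores … for a couple of weeks” at face value puts each run at roughly 1.7 million core-hours:

```python
# Rough core-hour arithmetic for the booster-separation runs described above,
# assuming "a couple of weeks" means 14 days of continuous use.
cores = 5_000
days = 14
core_hours = cores * days * 24
print(f"{core_hours:,} core-hours per simulation")  # 1,680,000
```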
Another team—in collaboration with NASA’s Jet Propulsion Laboratory (JPL) and MIT, among other institutions—is working to simulate the global state of the ocean.
“What they are doing is they use observational data as input at the start, and they have a grid on the surface of the Earth—250 million grid points—where they calculate the pressure, the velocity and the salinity of the water,” Mehrotra said, explaining that the simulation goes 90 grid levels—totaling some five kilometers—beneath the surface of the ocean. The goal: to figure out how the ocean is changing over time. “This is one of the largest applications that has been done on our system—about 9,000 cores—and has produced about five and a half petabytes of data,” he added.
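A quick back-of-the-envelope sketch helps put those grid figures and the multi-petabyte output in context; the 8-byte (double-precision) value size below is our assumption for illustration, not a detail from the talk:

```python
# Sizing sketch for the ocean-state simulation described above.
# Figures from the talk: ~250 million surface grid points, 90 vertical levels.
surface_points = 250_000_000
depth_levels = 90
bytes_per_value = 8                          # assumed double precision

cells = surface_points * depth_levels        # total 3-D grid cells
one_field_snapshot = cells * bytes_per_value # one field (e.g. salinity), one time step

print(f"{cells:,} grid cells")                                    # 22,500,000,000
print(f"{one_field_snapshot / 1e9:,.0f} GB per field snapshot")   # ~180 GB
```

Even a single double-precision field runs to roughly 180 GB per snapshot, so storing several fields across many time steps climbs quickly toward the petabyte-scale totals Mehrotra cited.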
Mehrotra also touched on rapid-response computing, citing wind tunnel tests in which a model of the NASA Space Launch System (SLS) rocket was coated with pressure-sensitive paint. “The traditional way was to run these experiments in the wind tunnel and then dump the data, that then goes back to the engineers and takes a few months with the engineers,” he said. “So what we did was we connected the wind tunnel directly to the HPC system and the analysis that took 24 hours on a workstation, we sped it up so that each set of frames that came through was analyzed on the machine in … five minutes or so. And so this data was available to the experimentalists almost immediately.”
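The details of NASA’s pipeline were not given, but the rapid-response pattern he describes is essentially a watch-and-dispatch loop: analyze each set of frames as soon as it lands rather than months later in a batch. A minimal sketch might look like the following, where the directory path and the analyze_frames() routine are purely hypothetical placeholders:

```python
# Minimal sketch of a rapid-response loop in the spirit of the wind-tunnel
# pipeline described above. The drop-point path and analyze_frames() are
# hypothetical; they stand in for whatever feed and analysis NASA actually used.
import time
from pathlib import Path

INCOMING = Path("/data/psp_frames")   # hypothetical drop point fed by the wind tunnel
processed = set()

def analyze_frames(frame_set: Path) -> None:
    # Stand-in for the real pressure-sensitive-paint analysis, which the talk
    # says was cut from ~24 hours on a workstation to ~5 minutes per frame set.
    print(f"analyzing {frame_set.name} ...")

while True:
    if INCOMING.exists():
        for frame_set in sorted(INCOMING.glob("run_*")):
            if frame_set not in processed:
                analyze_frames(frame_set)
                processed.add(frame_set)
    time.sleep(30)   # poll for newly delivered frame sets
```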
In contrast to its simulation work, NASA’s use of AI and machine learning remains fairly nascent. “NASA is in some sense late to the game [on AI and ML],” Mehrotra said. “But in the last 18 months, there’s been an explosion of projects using machine learning [and] deep learning[.]” These projects, he said, span feature detection (such as identifying exoplanets and trees in imagery), prediction (such as space weather forecasting), anomaly detection (such as aviation safety and systems behavior) and more.
Not quite everything is rosy on the HPC front at NASA, though. Mehrotra explained that much of the agency’s code is antiquated, and that optimizing and modernizing that code for newer hardware has lagged.
“Even though C++ and Python and so on and so forth are increasing, a very significant amount [of our code] is still using Fortran,” he said. “Underlying all of this is the fact that there is a lack of budget—at least, at NASA—for attacking this problem. A lot of the budget goes to the science side, and as long as they’re getting some level of performance, there is no budget left for improving the performance of the codes, improving the portability of the code.”
He also said that NASA was having difficulty finding HPC specialists, with many professionals instead choosing to go into AI- and data-centric fields. “The expertise is not there,” Mehrotra lamented. “Not very many people are coming towards HPC.” He said that the agency was working to counter this by engaging developers through hackathons and other activities.