Delivering an exascale-ready software infrastructure is the Exascale Computing Project’s mission, a goal toward which it has marched steadily even though ECP doesn’t grab headlines the way pre-exascale and planned exascale systems do. This week ECP posted an interview with ECP director Doug Kothe talking about the accountability DOE imposes as well as ECP’s approach to developing software development kits (SDKs) able to cope with exascale’s varying heterogeneous architectures and a growing number of accelerator options.
Here’s a soundbite: “[I]n our Software Technology portfolio led by Mike Heroux—we’re not narrowing down to one particular programming model. We do see a very diverse accelerated node ecosystem coming, and we think that’s good for the community and good for us, meaning not just one type of accelerator but multiple types of accelerators, say, from Nvidia, AMD, and Intel.
“And so that’s really forcing us—and I think this is for the good of the community and moving forward—to have a diverse, robust software stack that can enable applications to, ideally, seamlessly port and get performance on multiple GPUs. This is a very difficult and daunting task, but we’re now really getting into the details of how to develop whether it’s abstraction layers or push for certain programming models that best allow our applications to achieve performance on these different types of accelerators.”
The latest ECP podcast, in which Kothe talks with Mike Bernhardt, ECP communications director, about process and product, is a good listen.
So far ECP has roughly seventy unique products in its Software Technology portfolio.
Many of these products, noted Kothe, have similar functionalities. By grouping them together, for example into programming models, math libraries, I/O, or DataVis, ECP is working to ensure interoperability, what Kothe calls a “nice sort of horizontal integration, meaning applications can ideally plug and play some of these techniques.”
“The requirement for an application is to not swallow an SDK whole. An SDK in Math Libraries might contain right now, say, a dozen different types of math libraries. But by being able to pull in an SDK, now say an application can literally plug and play and test different types of math libraries, maybe sparse linear solvers or dense solvers or Eigensolvers or whatever. And so it’s going to be a tremendous advantage for applications in the HPC and the software community in general to be able to have these things containerized and put together,” said Kothe.
“The SDKs roll up into what we call the Extreme-scale Scientific Software Stack, or E4S. And we’ve released several versions of E4S; if you go to E4S.io, our latest release, 1.0, occurred in November, last fall. That release has fifty different full-release products, and I think a half dozen partial-release products out there for folks to try in four different types of containers. And we’re really optimistic, and we’re really seeing the returns on our investment in doing things like this, not just for ECP but the community at large, both nationally and internationally. So that’s a key responsibility of ECP, to ensure what I’ll claim is better software quality, better robustness, better interoperability. That’s going to benefit us all.”
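The plug-and-play idea Kothe describes can be sketched in code. The sketch below is illustrative only, assuming a hypothetical solver interface; the class and function names are invented for this example and are not an actual ECP or E4S API. The point is that an application coded against a common interface can swap one math library for another without changing application code.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical plug-and-play solver interface (illustrative, not a real
// ECP SDK API). The application depends only on this abstraction.
struct LinearSolver {
    virtual ~LinearSolver() = default;
    // Solve a*x = b; kept scalar for brevity.
    virtual double solve(double a, double b) const = 0;
};

// Two interchangeable "library" implementations an SDK might provide.
struct DirectSolver : LinearSolver {
    double solve(double a, double b) const override { return b / a; }
};

struct IterativeSolver : LinearSolver {
    double solve(double a, double b) const override {
        double x = 0.0;                   // initial guess
        for (int i = 0; i < 100; ++i)     // damped fixed-point iteration
            x = x + 0.1 * (b - a * x);    // converges for this system
        return x;
    }
};

// The application codes against the interface, so any solver that the
// SDK pulls in can be tested in place of another.
double run_app(const LinearSolver& s) { return s.solve(2.0, 6.0); }
```

Swapping solvers is then a one-line change at the call site, which is the “literally plug and play and test different types of math libraries” benefit Kothe describes.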
No surprise, abstraction looms large in ECP programming model preferences and in building SDKs. In particular, the Kokkos and RAJA projects are prominent players.
As described by ECP, “Exascale systems are characterized by computer chips with a large number of cores, a smaller amount of memory, and a range of various architectures, which can result in decreased productivity for library and application developers who need to write specialized software for each system. The Kokkos/RAJA project provides high-level abstractions for expressing the necessary parallel constructs that are then mapped onto a runtime to achieve portable performance across current and future architectures, freeing developers who adopt these technologies of the burden of writing specialized code for each system.”
Progress to date:
- The Kokkos team developed a parallel programming model with semantics flexible enough to be mapped onto a diverse set of exascale architectures, including current multi-core CPUs and massively parallel GPUs.
- The Kokkos library implementation consists of a portable Application Programming Interface (API) and architecture-specific backends, including OpenMP, Intel Xeon Phi, and CUDA on NVIDIA GPUs.
- The RAJA team produced a collection of C++ software abstractions that enable architecture portability for exascale applications using standard C++11 features and provided support for multiple backends including OpenMP, CUDA, Intel TBB, and AMD GPUs.
- The Kokkos/RAJA team developed training material and held training events to enable adoption of their abstractions.
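The pattern behind these abstraction layers can be sketched in plain C++. This is a minimal sketch of the idea, not the real Kokkos or RAJA APIs (Kokkos exposes `Kokkos::parallel_for`, RAJA exposes `RAJA::forall`); the backend names here are invented. The application expresses *what* to do per index once, and a backend policy decides *how* it runs, which is what lets one kernel target OpenMP, CUDA, or other backends.

```cpp
#include <thread>
#include <vector>

// Illustrative backends standing in for Kokkos/RAJA execution policies.
struct SerialBackend {
    template <class F>
    static void parallel_for(int n, F body) {
        for (int i = 0; i < n; ++i) body(i);   // plain sequential loop
    }
};

struct ThreadsBackend {
    template <class F>
    static void parallel_for(int n, F body) {
        // Crude two-way split standing in for an OpenMP or GPU backend;
        // the two halves touch disjoint indices, so there is no race.
        std::thread t([&] { for (int i = 0; i < n / 2; ++i) body(i); });
        for (int i = n / 2; i < n; ++i) body(i);
        t.join();
    }
};

// Application kernel written once; the backend is a compile-time choice,
// so the same source ports across architectures.
template <class Backend>
std::vector<double> axpy(double a, const std::vector<double>& x,
                         const std::vector<double>& y) {
    std::vector<double> z(x.size());
    Backend::parallel_for(static_cast<int>(x.size()),
                          [&](int i) { z[i] = a * x[i] + y[i]; });
    return z;
}
```

Switching from `axpy<SerialBackend>` to `axpy<ThreadsBackend>` changes nothing in the kernel body, which is the portability property the Kokkos/RAJA teams provide, with real backends tuned for each vendor's hardware.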
Said Kothe, “We’re finding in our application projects—we have twenty-four projects that really map to almost fifty different, separate, distinct codes. Order of fifteen or sixteen have already said, ‘We’re committing to these abstraction layers.’ We’re also seeing the vendors do the same, which is, ‘Hey, we’re going to make sure that Kokkos and RAJA are not only ported but performant for you.’ In other words, they’re working closely with us to make sure that those aren’t high-risk bets that the applications make, but lower-risk bets, meaning they’re going to be there. They’re going to be not just ported but performant.”