June 26, 2023 — In the latest episode of the Let’s Talk Exascale Podcast, Scott Gibson speaks with computing pioneer Jack Dongarra.
An R&D staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory, Dongarra was recently elected to the National Academy of Sciences, or NAS, for his distinguished and continuing achievement in original research. In 2022, he received the Turing Award from the Association for Computing Machinery (ACM). That honor recognized his innovative contributions to numerical algorithms and libraries that enabled high-performance computational software to keep pace with exponential hardware improvements for over four decades.
Dongarra is professor emeritus at the University of Tennessee, Knoxville, where he recently retired as founding director of UT’s Innovative Computing Laboratory, or ICL, an ECP collaborator. Along with his roles at ORNL and UT, he has served as a Turing Fellow at the University of Manchester in the United Kingdom since 2007. He earned a bachelor’s degree in mathematics from Chicago State University, a master’s in computer science from the Illinois Institute of Technology, and a doctorate in applied mathematics from the University of New Mexico.
Jack Dongarra is a fellow of the ACM, the Institute of Electrical and Electronics Engineers, the Society for Industrial and Applied Mathematics, the American Association for the Advancement of Science, the International Supercomputing Conference, and the International Engineering and Technology Institute. Additionally, he has garnered multiple honors from those organizations. He is also a member of the National Academy of Engineering and a foreign member of the Royal Society (UK).
Scott: So first of all, thanks for joining me. Thanks for being on the program. The end of the Exascale Computing Project is in sight, with the technical work wrapping up in December of this year. This has been quite a journey. And ECP teams have developed a software ecosystem for exascale. They’ve provided scientists with very versatile tools. Will you share your perspective on how the project has progressed over its lifetime? And please tell us what you’ve observed from the vantage point afforded by the participation of the Innovative Computing Laboratory at the University of Tennessee.
Jack: Sure, and let me first say thanks for the opportunity to be on the show. The end of the Exascale [Computing Project] is really both a success and a huge risk. The project has delivered great capabilities to the Department of Energy, in terms of both human and technical accomplishments. Now, however, the DOE is highly vulnerable to losing the knowledge and skill of this trained staff, as future funding is unclear.
So ECP is ending, and there's no follow-on project for the roughly 1,000 people (800 at the DOE labs and around 200 at universities) who have been engaged in ECP. And it's really been a terrific project from the standpoint of getting application people, algorithm designers, and software people working together on this common vision of working towards exascale. And hardware vendors have been involved in that as well.
Today, without funding, those 1,000 people are really uncertain about their future, and that uncertainty generates great anxiety among lab staff. When I talk to people at the labs, I sense that anxiety, particularly among junior researchers, many of whom have built almost their entire careers on this project, which has been going for 7 years. And we don't have a follow-on project at a scale that would be able to use their talents.
In some sense, we've not really brought this project to a close in a very satisfying way. The project is ending; we've delivered exascale machines. We have applications running on at least one of those machines today and showing very impressive results. But, you know, the follow-on isn't there.
In 2019, the DOE, under the Advanced Scientific Computing Research organization, put together a set of workshops and town halls that were meant to address AI for science, energy, and security. Those were well attended. Reports were written that discussed the challenges and how to overcome some of them. And then what happened next was COVID. The pandemic slowed everything down, so things didn't really get as much traction as they probably should have. And in some sense, we haven't recovered from that.
There’s a great deal of effort going on behind the scenes. Many colleagues are trying to work with DOE and Congress to put together a plan. I know Rick Stevens has put together a plan for AI for science, energy, and security. But that’s something that’s going to take time before funds can be appropriated, and the program actually put together. The unfortunate part is that the exascale computing program is about to end, and there’s no follow-on project at that scale that would be able to engage those people; so that’s really the crisis.
One thousand people have been devoted to putting together the ECP program. And that's about to end, with about 6 months left before the program hits the wall. With that uncertainty, and with many other opportunities for people with the talents that ECP brought together, I'm sure they will, unfortunately, find jobs in other areas. The cloud vendors are seeking just this kind of talent to move them forward.
So it’s been a great, great success. On one hand, it’s been very challenging—we always like challenging problems. I think we’ve put in place solutions for many of the issues that we had. And we see great promise for the future in terms of using those exascale machines and the applications. The unfortunate part is that we don’t have a way to retain the talent, the cadre of scientists who are well educated and well trained who can continue on with the program and scientific computing for DOE.
Scott: You’ve said that pursuing exascale computing capability is all about pushing back the boundaries of science to understand things that we could not before. In what ways do you believe ECP has put the right sophisticated tools in place to reach that objective?
Jack: One of the nice things about working on this project is that adequate funding was there to develop applications and software; of course, we can always use more. But in this case, a substantial amount of funding was put in place to target 21 applications. The whole point of ECP is the science, and those 21 applications were identified. They're all energy related: wind energy, carbon capture, nuclear energy, photon science, chemistry, QCD, astrophysics, and the list goes on. And those exascale computers were put in place to help meet the challenges of those applications and push back the boundaries of science for those applications.
So part of the money went for the applications; another sizeable amount of money went for the algorithms and software. The software stack for ECP has, I think, 84 projects on it, and they cover a whole range of things: from core capabilities needed to run on those exascale machines, to compiler support, to numerical libraries, to tools and technologies, to the software development stack that's been put in place. It deals with many of the major components that are used in applications, such as visualization, minimizing communication, and checkpointing, and it provides a larger ecosystem for exascale.
Those 84 projects are being worked on and they’re coming to conclusion. They’re being worked on at the labs and the universities, trying to, again, meet the challenge of developing components that will run at a reasonable rate at scale on those exascale machines.
You know, it's really been a pleasure working with colleagues in different areas helping to put together those tools. For my group in Tennessee, we're working on six components of that software stack. We're working on a numerical library for linear algebra called SLATE. We're working on a set of numerical routines for GPUs called MAGMA, working on some iterative solvers in a project called Ginkgo, and working on some performance tools called PAPI. We're working on programming aids, called PaRSEC, that will help you effectively use large amounts of parallel processing. And we've been working on Open MPI for a long time, providing the basic fabric on which all of these applications and software will run on those exascale machines.
So, it’s been an engaging project for the last 7 years. It’s been a project that I think has developed many very worthwhile components. It’s been very rewarding from the standpoint of the application scientists and the software developers having adequate resources to really invest in that. And then seeing those tools be used or picked up in applications and driving those applications to get much higher levels of performance than we had in the previous generation of machines. It’s almost something that I would consider a highlight of my career, working with the DOE, putting together software, putting them in place so that the applications can effectively use them.
Scott: That’s saying a lot … a highlight from your career. Has the Department of Energy ever done anything like ECP before? To my knowledge, they haven’t.
Jack: Yeah, this is really something of a first in some sense. They've done things, of course, at a smaller level. But this is the first at such a broad level. Basically, the whole ECI, the Exascale Computing Initiative, was to develop these three exascale machines [Frontier, Aurora, and El Capitan] and then put in place the applications, the algorithms, and the software. So, the ECP part of that is the $1.8 billion that was devoted to those areas. The whole ECI was about $4 billion over the 7 years, and that included purchasing, or putting in place, the hardware that can be used to solve those very challenging science problems.
This is the first time in my career where I've been engaged in a project with 1,000 people working toward one goal: developing tools and applications for those science problems, putting in place the hardware that can effectively deal with them, and putting in place a whole software stack that can be used across those applications. So it really was, and still is, a great project. There are many accomplishments, and the unfortunate part is there's nothing to follow on.
Scott: You mentioned the science being the focus of the work—that’s what it’s all about. With respect to what you just said, is there more you could say about the uniqueness of ECP in terms of the magnitude of its accomplishments and its importance to science?
Jack: Well, again, getting 1,000 people on the same page working with a common goal of developing those applications. And putting in place the infrastructure…
Click here to access the full transcript.
Source: Scott Gibson, ECP