Lori Diachin to Lead the Exascale Computing Project as It Nears Final Milestones

By Tiffany Trader

May 31, 2023

The end goal is in sight for the multi-institutional Exascale Computing Project (ECP), which launched in 2016 with a mandate from the Department of Energy (DOE) and National Nuclear Security Administration (NNSA) to achieve exascale-readiness in the U.S. by the 2022 timeframe. The project couldn’t fully prove its mettle until at least one exascale machine was deployed and operational. HPCwire recently learned that is now definitively the case with the formal acceptance of Frontier at Oak Ridge National Laboratory (ORNL). Frontier is now open for early science workloads, including many of the 24 ECP target applications. These are real science codes in their own right, but also serve to evaluate the success of the ECP, which must hit certain Key Performance Parameters (KPPs) before it can successfully conclude.

Leading the ECP toward that finish line now is Lori Diachin, who has served as ECP’s deputy director since 2018, in addition to being principal deputy associate director for Lawrence Livermore National Lab’s Computing Directorate. Diachin is the third person to lead the ECP in its nearly eight-year span. She takes over for Doug Kothe, who adroitly steered the project for the last six years after taking the baton from Paul Messina, who was instrumental in launching the program. [Kothe is leaving his post at ORNL (where he was associate lab director for the Computing and Computational Systems Directorate, in addition to being ECP chief) to join Sandia National Laboratories on June 5 as chief research officer and associate labs director of Sandia’s Advanced Science and Technology Division.] Diachin maintains her position and employment at LLNL but is spending a significant amount of time at the ORNL campus in Oak Ridge, Tenn. She will be reporting to ORNL Interim Director Jeff Smith in her new capacity as ECP director.

Today, the day before her formal start date, I spoke with Diachin, whom I first met some years ago when she was involved in the HPC4 program, and got a hot-off-the-presses update. Diachin provided a status report both on the project itself – where it stands and what the next milestones are – and on a somewhat more personal note, shared how stepping into this role is a natural fit and extension of a rewarding career in scientific computing, most of it spent within the DOE lab fold. 

Here’s a transcript of that interview.

Tiffany Trader: Let’s start with why exascale is necessary and important, for science and for the U.S. Can you give some elucidating examples?

Lori Diachin: Exascale is clearly going to give us a significant advantage in many different areas related to science and security. With respect to Lawrence Livermore (an NNSA lab), we have the stockpile that we certify every year through high performance computing, and El Capitan is going to be a major part of that story when it comes online. We have a partnership with, for example, the National Cancer Institute and the National Institutes of Health in the area of looking at RAS protein mechanisms for cancers, and RAS protein cancers are about 30% of all cancers. And so being able to understand a little bit better what those mechanisms are, can help us understand the disease mechanism. And similarly, that same project is looking at precision medicine, where they try to predict the particular ways in which medicines interact with a particular cancer or particular disease within a specific patient. And so exascale computing can really start to give us those types of advantages, and understanding through modeling and simulation. And it runs the gamut: modeling wind farms on realistic topographies, and wherever you have multiple turbines together interacting with each other, and how does that impact the output of the overall wind farm? So there’s just lots and lots of examples that we could talk about.

Trader: So some of that cancer research is under the CANDLE project, which is an ECP application. Is that coming to a conclusion?

Diachin: Yes, so they may continue the partnership beyond ECP, that’ll be up to the program sponsors at DOE to determine that. But in terms of the ECP, all of the ECP projects will be coming to an end. We are completing technical work at the end of December 2023. And at that point, we are wrapping up the ECP part, but a lot of the projects will be transitioning to related work at the various different stakeholder offices that they have.

Trader: You mentioned that December 31 is the concluding date for ECP?

Diachin: That will be the end of technical work, and so all of the teams will need to wrap up their work on their key performance parameters and the applications and the software technologies, again, as part of ECP. A lot of those efforts, as I mentioned, are transitioning to new programs, new efforts, new projects, across the Department of Energy. But the ECP project itself is a formal DOE project, which does have an end date, and that’s December 31 for the technical work. We will do a final review in April. And then we are formally closing out the project; there are a lot of mechanics that have to happen to close out the project. And that has to happen by September 2024.

Trader: Will there be a public report?

Diachin: We’re working on a number of different communications for the ECP. One of the things we’re really excited about as a leadership team is working on a book proposal, where we want to talk about the lessons that we have learned, both from a technical perspective, but also from a perspective of, how do you collaborate on a project this large in computational science? It’s really one of the few mega-projects that we’ve seen in computational science. And so what are the lessons that we’ve learned that we hope would be useful, not only for projects that are this large – which, to be honest, there aren’t very many this large, right – but for smaller and mid-sized projects. And then also the lessons around project management. You know, we did a lot of work in applying formal project mechanisms to a research, development and deployment project. And there were a lot of interesting lessons that we took away about the value of some of those practices in the environment of computational science. So we’re working on a book, we’re working on a series of high-level communications primarily targeted at the non-technical audience. There’s also a series of podcasts in the works.

Trader: As far as the Key Performance Parameters (KPPs), will you only need to run KPPs on Frontier (ORNL) to have a successful conclusion for the ECP, or is there a plan to include Aurora (ANL) and El Capitan (LLNL)?

Diachin: We will take any KPP submissions from our technical teams between now and the end of technical work. So as we know, Aurora will be coming up; we’ll be getting early access here this summer, with fuller access planned in the fall timeframe. So where there is overlap with our teams, we are definitely making it a high priority for teams to get onto Aurora. As Doug [Kothe] said, we’re not counting on Aurora for the success of ECP, but a number of our software technology teams and application teams will be able to, and are being strongly encouraged to run in that environment, to demonstrate performance portability across multiple architectures and to demonstrate their challenge problems and their science on the Aurora system as well.

Trader: Are you already running some of the ECP codes on Sunspot (a “mini,” 2-rack, 128-node version of Aurora) and on Aurora itself? 

Diachin: All of ECP has access to Sunspot. So I would say a very large percentage, if not all, of our teams have gotten onto Sunspot and are using that to work through any issues that they can. That’s a much smaller scale system, but it is the [same] hardware, and so they are working through the software: any differences that come up between Aurora and Frontier with respect to the software stack, how the GPUs are performing, etc. So all of our teams are working on that.

Trader: What stands out to you most as you’ve been a part of ECP in a leadership role since 2018, nearly five full years? What have you found particularly significant or satisfying?

Diachin: I think one of the things that we as a leadership team have found very satisfying is how much progress we can make as a community, when we have a large-scale funded effort where we have collaboration across software technologies, and application teams. And we’re able to really bring all of those elements together in a sustained way over a period of many years. So that allows us the time really, to take those advancements that are happening in software technologies, like advanced math libraries, and visualization and data science techniques, and really see them start to bear fruit in the applications, and gives that time for the application teams to provide feedback and that iteration and codesign process between software technologies and applications to really work. And so that’s one of the things that I personally have found the most satisfying is that we’re seeing that on a really large scale with ECP. And there have been programs that have tackled that in the past, like SciDAC, which I was a part of before I was a part of ECP, which have very similar motivations, but just the scale, and the ability to sustain it has been remarkable.

Trader: So the successes and achievements and progress that you’ve seen in ECP, it seems as if leading the Exascale Computing Project is a natural extension of your career trajectory.

Diachin: Oh, definitely. So I’ve been a part of the DOE family for 30 years, and have worked primarily with ASCR for that time. I was at Argonne in the math and computer science division. You know Rick Stevens. I was part of that division when he was the division leader. And then, for that entire time, I’ve been very interested in these collaborative projects. I was one of the first PIs in the SciDAC program in 2000, and worked as a PI in a leadership role on projects in SciDAC up until I became the deputy here. I was also involved in the HPC4 program, HPC for Energy Innovation.

Trader: That’s where we first met. I believe it was an HPC4Manufacturing meeting in San Diego some years ago.

Diachin: Yeah. So seeing those connections between … by training, I’m a mathematician, and so seeing the connections for what we can do with numerical methods and the software technologies we develop and the impact it can have. It’s something I’ve been interested in and working toward, you know, my entire career. And ECP has been particularly satisfying in that regard.


In a statement put out by LLNL, Diachin said much was owed to the two ECP directors who preceded her. “[Kothe’s] leadership in ECP’s application development portfolio, and later leading the project as a whole, have positioned this first-of-its-kind project to be tremendously successful,” Diachin said. “The DOE community owes him, and the original ECP director who guided this project from a concept into reality, Paul Messina, a large debt of gratitude for their leadership and service.”

Diachin further noted that “in the project’s history, ECP has engaged more than 1,000 researchers in the development and documentation of next-generation computational tools and applications, which will pay dividends for DOE and the nation for many years.” 

ORNL’s Ashley Barker will serve as Diachin’s deputy.

For additional details, see the official announcement from LLNL.
