This year’s chair of the HPC Impact Showcase at the SC19 conference in Denver is Lori Diachin, who has spent her career at the spearhead of HPC. Currently deputy director for the U.S. Department of Energy’s (DOE) Exascale Computing Project (ECP), Diachin is also deputy associate director for science and technology in the Computation Directorate at Lawrence Livermore National Laboratory, whose roughly 1,000 staff serve the laboratory in areas ranging from HPC and computing sciences to information technology for business and workforce enablement.
Diachin knows a thing or two about applying HPC to commercial industrial workloads, having run DOE’s HPC4Mfg (manufacturing) and HPC4Materials programs several years ago. Now, as chair of the Impact Showcase and in her roles at ECP and at Livermore, she’s experiencing firsthand the confluence of HPC and AI.
Of the HPC community’s embrace of AI over the past four years, she said, “It’s astonishing how quickly [it] has turned in that direction… it’s a speed train.”
This is seen in the three-day agenda of next week’s HPC Impact Showcase, which begins Tuesday, Nov. 19, and focuses on industrial implementations of supercomputing and other advanced technologies. Seven of the series’ 15 sessions this year will have AI- and machine learning-related content.
“The applications are all over the map,” Diachin said, “ranging from machine learning for cybersecurity – that’s a project in collaboration with the San Diego Supercomputer Center (SDSC) and a company called Webroot – to data analytics for understanding traffic congestion and using AI to improve the analysis, high-efficiency engines, JP Morgan Chase looking at cybersecurity pattern matching, and we’ve got some folks from Wales looking at data analytics and genomics for HIV drug resistance; Total, the oil and gas company; and… digital pathology, looking at whole-slide image analysis using AI.”
Generally, selection of showcase presentations is a submission-driven process, Diachin said, in which the reviewers look for examples of HPC making an impact in a particular industrial setting. Diachin, working with Suzy Tichenor (director, HPC Industrial Partnerships Program in the Computing and Computational Sciences Directorate at Oak Ridge National Laboratory) and David Martin (manager, Industry Partnerships and Outreach at the Argonne Leadership Computing Facility at Argonne National Laboratory), said, “We didn’t want talks about a new technology offering a company has, we didn’t want talks from high performance computing centers that may be impacting industry, what we really wanted to hear from were the industrial partners themselves,” that is: the organizations using and benefiting from HPC.
This includes a presentation from GigaIO and SDSC on the role of numerical simulation in analyzing various physical phenomena. Storage I/O performance and network bandwidth have lagged behind improvements in compute power, creating a bottleneck for end-to-end simulation performance. Diachin said GigaIO has developed a new network technology that addresses this problem, and the presentation will describe how GigaIO partnered with SDSC to deploy a solution using earthquake simulation as a representative problem.
“We tried something new this year…,” Diachin said, “a tag-team approach. So my committee was primarily composed of folks from HPC centers, either the DOE labs or universities, and most of us have industrial partnership projects, so we’re trying a tag-team approach where the industrial partners will talk for 20 minutes about their problem and why it’s important, and then the lab or university partner will talk about what they did in particular with that partner to create an HPC solution that works for that company.”
Another tag-team presentation will feature a demonstration of how an IBM Blue Gene/Q supercomputer, called “Mira,” at Argonne National Lab was used to help Aramco Services, the Houston energy company, optimize a heavy-duty gasoline engine for improved efficiency. Using Mira, the team ran thousands of engine design combinations in a few days that would ordinarily require months on a typical cluster, enabling rapid evaluation of more variations. “And then they’re going to talk about taking all those simulations and using AI/machine learning techniques to analyze the output of the simulation results,” Diachin said.
While the convergence of HPC and AI has tremendous potential to take on grand challenges in industry and science, Diachin said, “I think there’s still a lot of work to be done on understanding some of the foundational elements…. When we look at it in the context of simulations, we’re doing a lot of foundational work on understanding how you put that kind of physics-based knowledge into some of these techniques… There’s huge potential and also some really important research that we need to tackle in the near term to be sure we have solid foundations that we’re standing on.”