American competitiveness, particularly on the modeling and simulation front, has received plenty of lip service over the last decade. Several facilities, including the Ohio Supercomputer Center, have lent a helping hand to bring HPC to industry, and fresh efforts are springing up, including at Lawrence Livermore National Laboratory (LLNL).
The difference between what centers like Ohio's and LLNL's are doing is truly a matter of scale. Those selected as industrial users will have a crack at Vulcan, a five-petaflop system that sits at number eight on the recently updated Top 500 supercomputer list.
With 390,000 cores and a host of commercial applications to tweak, LLNL is providing much-needed software and scaling support. The lab is lining up participants to step to the high-core line to see how more compute horsepower can push modeling and simulation limits while solving specific scalability issues.
HPC Innovation Center Director Fred Streitz says that Vulcan offers "a level of computing that is transformational, enabling the design and execution of studies that were previously impossible, opening opportunities for new scientific discoveries and breakthrough results for American industries."
“It’s common for us to have people come to us because they’re hitting the limits on what they can do with commercial codes,” says Streitz. “It’s taking them too long to get answers or they want to model and simulate a large enough system with enough physics and they want to understand what the ROI would be to acquire more computing power.”
In other words, the project isn't about providing supported access to high-end resources as a "gimme" in the name of competitiveness; it's about convincing potential users that their investment in high performance computing is worth the cost and effort. It's a matter of going from a workstation or departmental-cluster approach to modeling in two dimensions to hitting warp drive with a fully realized, high-resolution 3D model. The idea is that the implications for competitiveness could be big enough to tip the scales in favor of a massive investment in HPC systems, but of course that kind of core warp drive comes with some practical challenges on the software side.
The lab and IBM are providing software support to help raise code to the Top 10 system bar, which is the real emphasis of the effort. In many ways, the hardware is the easy (and expensive) part of the process; it's the software angle that has LLNL researchers scrambling for solutions.
The BlueGene system does present a specific architectural framework, however, and some of the code is being tweaked to suit the IBM architecture. Still, Streitz says that in general, tackling the scalability issues involved in the hundreds-to-thousands-of-cores jump yields solutions that are machine agnostic.
When asked about the viability and usefulness of other architectures and approaches, including GPU acceleration (which could be a prime fit for many modeling and simulation applications), Streitz said that there is curiosity, but for now the focus is on getting businesses to new scale.
LLNL and its industrial HPC partners have already wrapped up six projects that cross the academia-commercial border via the LLNL HPC4Energy incubator program. The HPC Innovation Center is now connecting more users with on-demand proprietary access to Vulcan and throwing in the support of LLNL computer scientists and engineers to solve the pressing problem of dramatic scaling.
A number of companies have already tapped Vulcan's high core counts and the in-house software expertise, including General Electric's Energy Consulting division, which will be amping up its PSLF simulation performance and capability, and Bosch, which is targeting simulations of novel internal combustion engines.
These are large companies with existing, sizable clusters and in-house software resources of their own. While it might boost American competitiveness to give them access to advanced modeling and simulation on Vulcan, smaller companies need the same opportunities. Streitz pointed us to smaller organizations that are also using Vulcan for the same purposes, including Potter Drilling, which will be improving its thermal spallation drilling processes with advanced simulation.
The system will still serve lab-specific needs through LLNL’s High Performance Computing Innovation Center as well as chew on Department of Energy and National Nuclear Security Administration projects.