A CFD code used for combustion simulation just reached a major scalability milestone, thanks to the work of Intelligent Light and partners at Georgia Tech and Lawrence Berkeley National Laboratory. The AVF-Leslie combustion simulation code (a derivative of Georgia Tech’s Leslie3D solver code) harnessed 64,000 cores on supercomputers at the Department of Energy’s National Energy Research Scientific Computing Center. The accomplishment was credited to the expertise of the collaborators who, with DOE funding, developed scalable analysis methods in combination with in situ infrastructure to achieve “extreme scale knowledge discovery.”
“As the computing landscape shifts to high-performance clusters, integrating post-processing with CFD solvers presents an opportunity to create a truly scalable workflow,” said Steve M. Legensky, general manager and founder at Intelligent Light, in a prepared statement. Intelligent Light develops and maintains the open source VisIt application and Libsim, an interface for in situ applications; both codes were originally developed by the DOE.
So how did the AVF-Leslie code make it so far past the 5,000-core mark recorded in previous runs? Intelligent Light explains that the partners instrumented the code with VisIt/Libsim to enable in situ extraction of surfaces of interest. The extracted XDB files then undergo secondary processing in FieldView, Intelligent Light’s CFD post-processing tool. “XDBs,” says Intelligent Light, “retain full numerical fidelity, enable both automated report generation and interactive exploration and can be used for archives.”
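To make the pattern concrete, the sketch below shows the classic Libsim main-loop instrumentation in C. It is illustrative only: solver_step(), the simulation name and the run directory are hypothetical placeholders, the loop is serial for brevity, and a real integration like AVF-Leslie’s also runs in parallel and registers data-access callbacks (VisItSetGetMetaData, VisItSetGetMesh and so on) so VisIt can read the solver’s mesh and fields in place.

```c
/* Minimal serial sketch of Libsim instrumentation (illustrative only). */
#include <VisItControlInterface_V2.h>

extern int solver_step(void);  /* hypothetical: advance the CFD solver one step */

int main(void)
{
    VisItSetupEnvironment();
    /* Drop a .sim2 file so a VisIt client or batch driver can attach. */
    VisItInitializeSocketAndDumpSimFile("avf_leslie_demo", "in situ demo",
                                        "/path/to/rundir", NULL, NULL, NULL);
    int running = 1;
    while (running) {
        switch (VisItDetectInput(0 /* non-blocking */, -1)) {
        case 0:  /* no in situ requests pending: keep simulating */
            running = solver_step();
            if (VisItIsConnected()) {
                VisItTimeStepChanged();  /* announce that the data advanced */
                VisItUpdatePlots();      /* refresh live plots and extracts */
            }
            break;
        case 1:  /* inbound connection attempt from VisIt */
            VisItAttemptToCompleteConnection();
            break;
        case 2:  /* command from the connected VisIt engine */
            if (!VisItProcessEngineCommand())
                VisItDisconnect();
            break;
        default: /* error */
            running = 0;
        }
    }
    return 0;
}
```

The key design point is that analysis piggybacks on the solver’s own time loop: surfaces are extracted from data already resident in memory, and only those concentrated extracts need to hit the filesystem as XDB files.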
Legensky notes that researchers had previously scaled the VisIt code to use 98,000 cores on the LLNL BlueGene/Q systems, but integration with a sophisticated physics code like AVF-Leslie ups the difficulty.
The need for practical extreme-scale CFD has led to development of in situ solutions, as Intelligent Light explains:
“When running simulations using thousands of cores, however, the time to write, re-read and post-process the resulting files using traditional volume-based post-processing is impractical or impossible. When results are not reviewed and desired simulation runs not performed due to these limitations, the cost is wasted computing resources and lost science. In situ methods enable analysis of full spatiotemporal resolution data while it is still resident in memory, thereby avoiding the costs associated with writing very large data files to persistent storage for subsequent, post hoc analysis.”
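A back-of-envelope calculation, using purely hypothetical sizes and bandwidth, shows why that write/re-read cycle becomes the bottleneck:

```c
/* Illustrative I/O-time comparison; all figures below are assumptions. */
#include <stdio.h>

int main(void)
{
    double volume_bytes  = 2e12;  /* assumed: 2 TB of full-volume data per step */
    double extract_bytes = 4e9;   /* assumed: 4 GB surface extract per step */
    double io_rate       = 50e9;  /* assumed: 50 GB/s filesystem bandwidth */
    int    steps         = 1000;  /* analyze every step of a 1000-step run */

    /* Traditional post hoc: write the full volume, then re-read it later. */
    double post_hoc_s = steps * (2.0 * volume_bytes) / io_rate;
    /* In situ: write only the concentrated extracts. */
    double in_situ_s  = steps * extract_bytes / io_rate;

    printf("post hoc I/O time: %.2f hours\n", post_hoc_s / 3600.0);
    printf("in situ I/O time:  %.2f hours\n", in_situ_s / 3600.0);
    return 0;
}
```

With these assumed numbers the traditional path spends roughly a day just moving bytes, while the in situ path spends about a minute; the exact figures matter less than the two-orders-of-magnitude gap between them.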
The main focus right now is on integrating solvers with in situ methods at extreme scale. It’s a goal that Intelligent Light shares with the DOE, which is seeking to address I/O concerns as the exascale era draws closer.
“Today we see widening gaps between compute performance and I/O capability, and in situ analysis is a key part of the solution. As we move toward the exascale regime, we will see a 3 orders of magnitude increase in FLOPs performance while at the same time seeing only 2-3 times more I/O performance,” says Wes Bethel, Senior Computer Scientist at Lawrence Berkeley National Laboratory.
Moving data analysis and visualization into solvers as they run allows information to be extracted and written out in a more concentrated form, minimizing the demand on I/O bandwidth and storage.
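Two toy calculations, with assumed figures, make both the gap Bethel describes and the payoff of extracts concrete:

```c
/* Illustrative arithmetic only; grid size and extract shape are assumptions. */
#include <stdio.h>

int main(void)
{
    /* ~1000x more FLOPs vs. ~3x more I/O bandwidth (per the quote above). */
    printf("bytes-per-FLOP I/O budget shrinks ~%.0fx\n", 1000.0 / 3.0);

    /* A surface extract from an n^3 volume scales like n^2; assume a
       hypothetical 2048^3 grid and an extract of roughly 6*n^2 cells. */
    double n = 2048.0;
    printf("volume-to-extract size ratio: ~%.0fx\n",
           (n * n * n) / (6.0 * n * n));
    return 0;
}
```

Under these assumed numbers, writing surface extracts instead of full volumes shrinks output by a few hundredfold, a reduction comparable to the factor by which the per-FLOP I/O budget is contracting.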
The DOE has selected Intelligent Light to be part of a team — along with Lawrence Berkeley National Laboratory, Kitware, Georgia Tech and Argonne National Laboratory — tasked with developing production-quality software tools that maintain coherency across tens to hundreds of thousands of processor cores.
Intelligent Light says that the integration of HPC and in situ methods is already paying dividends for combustion research. As simulations grow more powerful, phenomena that were once out of reach can be explored.