Simulating Combustion at Exascale: a Q&A with ISC Keynoter Jacqueline Chen

By Nages Sieslack, ISC Group

March 14, 2016

Dr. Jacqueline H. Chen is a distinguished member of technical staff at the Combustion Research Facility, Sandia National Laboratories in Livermore, California. Her primary field of research is computational combustion, which uses high-fidelity combustion simulations to develop accurate predictive combustion models that will be used to design more fuel-efficient, cleaner-burning vehicles, planes and power plants in the future.

The 2016 ISC High Performance conference has invited Chen to keynote on Tuesday, June 21, on the topic of advancing the science of turbulent combustion using petascale and exascale simulations. The ISC Communications team caught up with Chen to find out more about combustion simulation and to raise awareness of her research among a broad HPC audience.

ISC: What’s the thrust of your work and research at Sandia?

Jacqueline H. Chen: I am a computational combustion scientist at the Combustion Research Facility at Sandia. My work focuses on the development and application of a first-principles direct numerical simulation (DNS) approach to study fundamental ‘turbulence-chemistry’ interactions. The simulations are based on simple laboratory configurations designed to isolate and elucidate underlying phenomena that may be present in real engines for transportation and power generation. These unit problems provide both new fundamental combustion science and validation data for the development of predictive models that will ultimately be used to design future fuel-efficient, clean engines.
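Concretely, DNS resolves all turbulence and flame scales directly on the grid and solves, among the other conservation laws, a transport equation for every chemical species k, so that turbulent mixing and chemistry are coupled without any intervening model (a standard form, shown here only for illustration):

\[
\frac{\partial (\rho Y_k)}{\partial t} + \frac{\partial (\rho u_j Y_k)}{\partial x_j}
= -\frac{\partial J_{k,j}}{\partial x_j} + \dot{\omega}_k
\]

where \(\rho\) is the density, \(Y_k\) the species mass fraction, \(u_j\) the velocity, \(J_{k,j}\) the diffusive flux and \(\dot{\omega}_k\) the chemical source term. The expense of DNS comes from resolving every one of these terms on the grid, for dozens of species at once.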

I also lead a DOE ASCR-sponsored Exascale Co-design Center, ExaCT (http://www.exactcodesign.org), a multi-disciplinary team of computer scientists, applied mathematicians and computational combustion scientists. The mission of ExaCT is to co-design all aspects of combustion simulation, including numerical algorithms for partial differential equations, programming and execution models, scientific data management and analytics for in situ uncertainty quantification and graph-based topological analysis, and architectural simulations that explore hardware tradeoffs with combustion applications.

ISC: Can you give us a sense of how combustion simulation codes have impacted commercial engine and power plant designs thus far?

Chen: Recently, Cummins has used Reynolds-Averaged Navier-Stokes (RANS) models, which solve the time-averaged equations of motion for a fluid, to design heavy-duty truck engines, saving 10 to 15 percent in development time and cost while at the same time making the engines 10 percent more efficient.
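For context, RANS rests on the Reynolds decomposition of each flow variable into a mean and a fluctuation; averaging the momentum equation then leaves an unclosed Reynolds-stress term that the turbulence model must supply (a standard incompressible form, for illustration):

\[
u_i = \bar{u}_i + u_i', \qquad
\frac{\partial \bar{u}_i}{\partial t} + \bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j}
= -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i}
+ \nu \frac{\partial^2 \bar{u}_i}{\partial x_j \partial x_j}
- \frac{\partial \overline{u_i' u_j'}}{\partial x_j}
\]

Everything unsteady and turbulent is folded into the modeled term \(\overline{u_i' u_j'}\), which is what keeps RANS cheap enough for routine design work.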

In the future, industry will shift towards large-eddy simulation (LES), a more accurate and computationally intensive approach that resolves the energy-containing eddies and models turbulence and combustion at the finer scales where energy and heat dissipate. LES will be used to capture the cycle-to-cycle variability inherent in engines, which can lead to misfire, for example, and which RANS has difficulty capturing. Discovery and use-inspired computational research performed on the world’s largest supercomputers, in tandem with experiment and theory, is still needed, however, to develop predictive LES models for the complex combustion regimes in which future engines will have to operate.
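The distinction from RANS is that LES applies a spatial filter rather than a time average: scales larger than the filter width are computed directly, and only the subgrid-scale stress left behind by the filtering must be modeled (again a standard incompressible form, for illustration):

\[
\bar{u}_i(\mathbf{x}) = \int G(\mathbf{x} - \mathbf{x}')\, u_i(\mathbf{x}')\, d\mathbf{x}', \qquad
\tau_{ij} = \overline{u_i u_j} - \bar{u}_i \bar{u}_j
\]

Because the large unsteady eddies are resolved rather than averaged away, LES can represent the cycle-to-cycle variability described above, at a correspondingly higher computational cost.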

ISC: What will exascale systems do for combustion simulation codes that could not be achieved with petascale systems?

Chen: Exascale systems will enable fundamental high-fidelity combustion simulations that capture a larger dynamic range of turbulence scales, operate at higher pressure, and include a larger number of chemical species representative of large hydrocarbons and biofuels.

They will also enable more complex multi-physics, including sprays, particulates and thermal radiation, to be incorporated into these simulations. These high-fidelity simulations, inspired by real applications, will be carefully designed to shed light on important underlying combustion science that is currently poorly understood. Examples include low-temperature ignition processes in sprays coupled with turbulent mixing at high pressure, and the emissions characteristics of turbulent flames propagating into auto-igniting mixtures.

The massive data generated by these simulations, combined with experiments, will be used by scientists and engineers in academia and industry to develop and test new predictive models that work in the more challenging combustion regimes in which future combustors will have to operate to realize gains in efficiency and to lower emissions.

ISC: Do you foresee a significant rewrite of legacy combustion simulation codes in order to take advantage of exascale machines? If so, who will end up doing that work?

Chen: Yes, current petascale combustion simulation codes will have to be rewritten in order to take advantage of exascale machines. They are written largely in a bulk-synchronous programming style, which will not work at the exascale. Driven by power constraints, the consequent challenges in resilience, and the energy costs associated with data movement, exascale combustion codes will need programming and execution models that tolerate asynchrony, along with new mathematical algorithms that minimize data movement and are inherently asynchronous.
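A minimal sketch of the difference (illustrative only; the subdomain decomposition, the stand-in workload and the event bookkeeping below are hypothetical, not taken from any production combustion code): in a bulk-synchronous code every rank waits at a global barrier each timestep, whereas in a task-based formulation each subdomain advances as soon as its own neighbors' halo data from the previous step are ready.

```python
import asyncio

NDOM, NSTEPS = 4, 3  # toy sizes: 4 subdomains, 3 timesteps

async def advance(i, step, done):
    # Point-to-point dependency: wait only for the neighbours'
    # previous-step results, never for a global barrier.
    for j in (i - 1, i + 1):
        if 0 <= j < NDOM:
            await done[(j, step - 1)].wait()
    await asyncio.sleep(0.01 * (i + 1))  # stand-in for uneven local work
    done[(i, step)].set()
    print(f"subdomain {i} finished step {step}")

async def main():
    # One completion event per (subdomain, step); step -1 is the
    # initial condition and is marked complete up front.
    done = {(i, s): asyncio.Event()
            for i in range(NDOM) for s in range(-1, NSTEPS)}
    for i in range(NDOM):
        done[(i, -1)].set()
    await asyncio.gather(*(advance(i, s, done)
                           for s in range(NSTEPS) for i in range(NDOM)))

asyncio.run(main())
```

Running this shows fast subdomains pulling a full step ahead of slow ones instead of idling at a barrier; asynchrony-tolerant runtimes apply the same idea at scale, which also helps hide the latency of data movement.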

Future predictive computational design tools for advanced combustion systems must be able to discern differences in the physical and chemical properties of different fuels and couple them with the dynamic behavior of a combustor operating at high pressure in highly turbulent environments. The numerical methodology needs to incorporate adaptive mesh refinement in the solution of large systems of partial differential equations with trillions of degrees of freedom, to treat the disparities in scale between flames and turbulence at high pressure. The core solver is only one component of the required methodology. The disparity in the growth rates of I/O systems and storage relative to compute throughput necessitates a full exascale workflow capability; the current practice of archiving data for subsequent analysis will not be viable at the exascale. This workflow also needs to support a wide range of in situ analysis and uncertainty quantification methodologies.
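A toy illustration of why adaptive mesh refinement pays off in this setting (the field, threshold and sizes below are hypothetical; production AMR frameworks do this per refinement level, with proper nesting and regridding): at high pressure a flame front is a very thin layer in a much larger domain, so flagging cells by gradient concentrates the degrees of freedom where they are needed.

```python
import numpy as np

nx = 256
x = np.linspace(0.0, 1.0, nx)
# Smooth temperature profile with a thin flame-like front at x = 0.5.
T = 300.0 + 1500.0 / (1.0 + np.exp(-(x - 0.5) / 0.005))

# Flag cells whose temperature gradient exceeds 10% of the peak
# gradient; only these would be refined to the next AMR level.
dTdx = np.gradient(T, x)
tag = np.abs(dTdx) > 0.1 * np.abs(dTdx).max()

print(f"{tag.sum()} of {nx} cells flagged for refinement "
      f"({100.0 * tag.mean():.1f}% of the domain)")
```

Only a few percent of the cells end up flagged, which is the leverage AMR provides: resolution goes into the thin reacting layers rather than being spread uniformly over the domain.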

The development of such a complex computational capability is most effectively achieved through a combustion application co-design process involving an interdisciplinary team of computer scientists, applied mathematicians and computational combustion scientists. This team will work closely together to ensure that the future software stack, including new asynchrony-tolerant math algorithms for describing turbulent combustion, will work effectively on exascale hardware.

ISC: Will combustion codes have a major impact on co-design efforts? In particular, what hardware features are most important to these workloads?

Chen: Combustion codes have made, and continue to make, a significant impact on co-design efforts across the entire stack: from mathematical algorithms for combustion simulation that reflect characteristics of future exascale architectures, to asynchronous task-based programming and execution models that can adapt to node- and system-level non-uniformities, to numerous hardware features that support the end-to-end workflow of combustion simulations. Some of the hardware features identified through co-design as most important to combustion workloads include larger register files; larger L1 caches for data reuse close to the processor core; fast interconnects for the algebraic multigrid solvers used in low-Mach adaptive mesh refinement; software and hardware support for task-based programming models; and NVRAM and burst buffers to support complex, data-intensive interaction and data-exchange patterns and to manage data flow across deep storage hierarchies.
