SC13 Research Highlight: Petascale DNS of Turbulent Channel Flow

By Myoungkyu Lee, Nicholas Malaya, and Robert Moser

November 15, 2013

Whether it is a car on the highway, a plane flying through the air, or a ship crossing the ocean, all of these transport systems move through fluids. And in nearly all cases, the fluid flowing around these vehicles will be turbulent.

With over 20% of global energy consumption expended on transportation, the large fraction of the energy used to move goods and people that is mediated by wall-bounded turbulence is a significant component of the nation’s energy budget. However, despite this energy impact, scientists do not yet understand the physics of turbulent flows in sufficient detail to permit reliable predictions of the lift or drag of these systems.

To probe the physics of wall-bounded turbulent flows, a team of scientists at the University of Texas at Austin is conducting the largest-ever Direct Numerical Simulation (DNS) of wall-bounded turbulence, at a friction Reynolds number of Reτ = 5200. With 242 billion degrees of freedom, this simulation is fifteen times larger than the previously largest channel DNS, conducted by Hoyas and Jiménez in 2006.

In a DNS of turbulence, the equations of fluid motion (the Navier-Stokes equations) are solved, without any modeling, at sufficient resolution to represent all the scales of turbulence. In general, the full three-dimensional data fields of turbulent flow are difficult to obtain experimentally. On the other hand, computer simulations provide exquisitely detailed and highly reliable data, which have driven a number of discoveries regarding the nature of wall-bounded turbulence.
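
For reference, the equations being solved are the incompressible Navier-Stokes equations, written below in standard textbook form (u is the velocity field, p the pressure, ρ the density, and ν the kinematic viscosity); the article does not detail the particular formulation used in the code.

```latex
% Incompressible Navier-Stokes equations advanced by a channel DNS:
% momentum balance plus the divergence-free (incompressibility) constraint.
\[
  \frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
  = -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u},
  \qquad
  \nabla\cdot\mathbf{u} = 0 .
\]
```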

However, the use of DNS to study high Reynolds number flows has been hindered by the significant computational expense of the simulations. Resolving all the essential scales of turbulence imposes enormous computational and memory requirements, so DNS must be performed on the largest supercomputers. For this reason, DNS is a challenging HPC problem and a commonly used application for evaluating the performance of Top500 systems. Because a DNS is so expensive to run, improvements in computational efficiency allow the simulation of more realistic scenarios (higher Reynolds numbers and larger domains) than would otherwise be possible.
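
To give a sense of how steeply the cost grows, a standard textbook estimate (not taken from the paper) is that the number of grid points needed to resolve all scales of turbulence grows roughly like Re^(9/4). The short sketch below, using purely illustrative Reynolds numbers, shows how quickly the relative resolution cost climbs under that estimate.

```c
/* Back-of-envelope sketch: relative DNS resolution cost under the standard
 * N ~ Re^(9/4) estimate.  The Reynolds numbers are illustrative only and do
 * not correspond to the simulation described in this article. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double re[] = {1000.0, 5200.0, 20000.0};
    const double base = pow(re[0], 9.0 / 4.0);   /* cost of the smallest case */

    for (int i = 0; i < 3; ++i)
        printf("Re = %6.0f  ->  relative resolution cost ~ %.0fx\n",
               re[i], pow(re[i], 9.0 / 4.0) / base);
    return 0;
}
```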

Vortex visualization of turbulent flow

M.K. (Myoungkyu) Lee, the lead developer of the new DNS code used in the simulations, will present the results of numerous software optimizations during the Extreme-Scale Applications session at SC13, on Tuesday, Nov 19th, 1:30PM – 2:00PM. The presentation will detail scaling results across a variety of Top500 platforms, including the Texas Advanced Computing Center’s Lonestar and Stampede, the National Center for Supercomputing Applications’ Blue Waters, and the Argonne Leadership Computing Facility’s Blue Gene/Q system Mira, where the full scientific simulation was conducted.

The results demonstrate that performance depends strongly on the characteristics of the communication network and on memory bandwidth, rather than on single-core performance. On Blue Gene/Q, for instance, the code exhibits approximately 80% strong-scaling parallel efficiency at 786K cores relative to its performance on 65K cores. The largest benchmark case uses 2.3 trillion grid points, with a corresponding memory requirement of 130 terabytes.
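
Strong-scaling efficiency is the fraction of the ideal speedup retained when cores are added while the problem size is held fixed; going from 65K to 786K cores is a 12x increase, so 80% efficiency corresponds to roughly a 9.6x reduction in wall time. The sketch below works through that arithmetic with made-up timings (only their ratio matters; the 9.6x figure is illustrative, not a measured number).

```c
/* Worked example of strong-scaling efficiency: actual speedup divided by the
 * ideal (core-count) speedup.  The timings are hypothetical placeholders. */
#include <stdio.h>

int main(void)
{
    const double cores_small = 65536.0, cores_large = 786432.0;
    const double t_small = 100.0;            /* hypothetical seconds per step */
    const double t_large = t_small / 9.6;    /* ~9.6x faster on 12x the cores */

    double ideal_speedup  = cores_large / cores_small;   /* 12x */
    double actual_speedup = t_small / t_large;
    double efficiency     = actual_speedup / ideal_speedup;

    printf("strong-scaling efficiency: %.0f%%\n", 100.0 * efficiency);  /* 80% */
    return 0;
}
```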

The code was developed using Fourier spectral methods, which are typically preferred for turbulence DNS because of their superior resolution properties, despite the expensive communication the algorithm then requires. Optimization addressed several major issues: the efficiency of the banded-matrix linear algebra, cache reuse and memory access, threading efficiency, and the communication for the global data transposes.
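
The appeal of a Fourier representation is that differentiation becomes exact multiplication by the wavenumber, which is the source of the superior resolution properties mentioned above. The sketch below illustrates the idea for a one-dimensional periodic signal using FFTW; it is a self-contained illustration, not code from the project.

```c
/* Minimal sketch of spectral differentiation with FFTW: transform, multiply
 * each mode by i*k, transform back.  Illustrative only, not the DNS code. */
#include <complex.h>   /* must precede fftw3.h so fftw_complex is C99 complex */
#include <math.h>
#include <fftw3.h>

void spectral_derivative(double *u, double *dudx, int n, double L)
{
    fftw_complex *uhat = fftw_alloc_complex(n / 2 + 1);
    fftw_plan fwd = fftw_plan_dft_r2c_1d(n, u, uhat, FFTW_ESTIMATE);
    fftw_plan bwd = fftw_plan_dft_c2r_1d(n, uhat, dudx, FFTW_ESTIMATE);

    fftw_execute(fwd);
    for (int k = 0; k <= n / 2; ++k) {
        double kx = 2.0 * M_PI * k / L;   /* wavenumber of mode k */
        uhat[k] *= I * kx / n;            /* d/dx plus FFTW's 1/n normalization */
    }
    if (n % 2 == 0)
        uhat[n / 2] = 0.0;                /* drop the unresolved Nyquist mode */
    fftw_execute(bwd);                    /* dudx now holds du/dx */

    fftw_destroy_plan(fwd);
    fftw_destroy_plan(bwd);
    fftw_free(uhat);
}
```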

A special linear algebra solver was developed, based on a custom matrix data structure in which non-zero elements are moved into otherwise empty elements, reducing the memory requirement by half, which is important for cache management. In addition, it was found that compilers optimized the low-level operations on matrix elements in the LU decomposition inefficiently. As a result, loops were unrolled by hand to improve the reuse of data in cache.
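
The article does not spell out the custom band storage, but the flavor of the hand-unrolling can be shown. In the hypothetical sketch below, a single row update from a banded elimination step is unrolled by four so that the compiler keeps the operands in registers and reuses each cache line; the names and layout are illustrative, not the team’s.

```c
/* Hypothetical illustration of manual loop unrolling for a banded elimination
 * row update: row <- row - factor * pivot_row across the band width. */
void row_update_unrolled(double *restrict row,
                         const double *restrict pivot_row,
                         double factor, int bandwidth)
{
    int j = 0;
    for (; j + 4 <= bandwidth; j += 4) {   /* unrolled body: four updates per pass */
        row[j]     -= factor * pivot_row[j];
        row[j + 1] -= factor * pivot_row[j + 1];
        row[j + 2] -= factor * pivot_row[j + 2];
        row[j + 3] -= factor * pivot_row[j + 3];
    }
    for (; j < bandwidth; ++j)             /* remainder loop */
        row[j] -= factor * pivot_row[j];
}
```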

Streamwise velocity (sides) and wall-shear stress (top) of turbulent flow between two parallel plates

The FFTs, on-node data reordering, and the time advance were all threaded using OpenMP to enhance single-node performance. These optimizations were very effective, with the code demonstrating nearly perfect OpenMP scalability (99%).
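
The threading itself follows the standard OpenMP pattern of splitting large per-grid-point loops across cores. A minimal sketch (the loop and names are hypothetical, not taken from the code):

```c
/* Hypothetical per-grid-point update threaded with OpenMP: each thread
 * receives a contiguous chunk of the field (static schedule). */
void advance_field(double *restrict u, const double *restrict rhs,
                   double dt, long npoints)
{
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < npoints; ++i)
        u[i] += dt * rhs[i];   /* simple explicit update of one unknown */
}
```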

The talk will also discuss how replacing the existing library for 3D global Fast Fourier Transforms (P3DFFT) with a new library developed using the FFTW 3.3 MPI library led to substantially improved communication performance.
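
For readers unfamiliar with it, FFTW 3.3’s MPI interface distributes the 3D array across ranks in slabs and performs the required global transposes internally. The sketch below shows only the basic setup of a distributed 3D transform; the array sizes and in-place layout are placeholders and do not reflect the team’s library.

```c
/* Minimal sketch of a distributed 3D FFT with FFTW 3.3's MPI interface.
 * Problem sizes are placeholders; this is not the DNS library itself. */
#include <fftw3-mpi.h>

int main(int argc, char **argv)
{
    const ptrdiff_t N0 = 256, N1 = 256, N2 = 256;
    ptrdiff_t local_n0, local_0_start;

    MPI_Init(&argc, &argv);
    fftw_mpi_init();

    /* Size of the slab owned by this rank (planes local_0_start .. +local_n0-1). */
    ptrdiff_t alloc_local = fftw_mpi_local_size_3d(N0, N1, N2, MPI_COMM_WORLD,
                                                   &local_n0, &local_0_start);
    fftw_complex *data = fftw_alloc_complex(alloc_local);

    fftw_plan plan = fftw_mpi_plan_dft_3d(N0, N1, N2, data, data,
                                          MPI_COMM_WORLD, FFTW_FORWARD,
                                          FFTW_ESTIMATE);

    /* ... fill the local slab with this rank's portion of the field ... */
    fftw_execute(plan);       /* forward transform, global transposes included */

    fftw_destroy_plan(plan);
    fftw_free(data);
    MPI_Finalize();
    return 0;
}
```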

The full scientific simulation used 300 million core-hours on the ALCF’s BG/Q Mira, awarded through the Department of Energy Early Science Program and the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) 2013 program. Each restart file generated by the simulation is 1.8 TB in size, with approximately eighty such files archived for long-term postprocessing and investigation. Postprocessing such a large volume of data is itself a supercomputing challenge.

Presentation Information

Title : Petascale Direct Numerical Simulation of Turbulent Channel Flow on up to 786K Cores

Location : Room 201/203

Session : Extreme-Scale Applications

Time : Tuesday, Nov 19th, 1:30PM – 2:00PM

Presenter : M.K. (Myoungkyu) Lee

SC13 Scheduler : http://sc13.supercomputing.org/schedule/event_detail.php?evid=pap689

About

M.K. (Myoungkyu) Lee is a Ph.D. student in the Department of Mechanical Engineering at The University of Texas at Austin.
mk@ices.utexas.edu

Nicholas Malaya is a researcher in the Center for Predictive Engineering and Computational Sciences (PECOS) within the Institute for Computational Engineering and Sciences (ICES) at The University of Texas at Austin.
nick@ices.utexas.edu

Robert D. Moser holds the W. A. “Tex” Moncrief Jr. Chair in Computational Engineering and Sciences and is a professor of mechanical engineering in thermal fluid systems. He serves as the director of the ICES Center for Predictive Engineering and Computational Sciences (PECOS) and deputy director of the Institute for Computational Engineering and Sciences (ICES).
rmoser@ices.utexas.edu
