SC13 Research Highlight: Petascale DNS of Turbulent Channel Flow

By Myoungkyu Lee, Nicholas Malaya, and Robert Moser

November 15, 2013

Whether it is a car on the highway, a plane flying through the air, or a ship in the ocean, all of these transport systems move through fluids. And in nearly all cases, the fluid flowing around these vehicles will be turbulent.

With over 20% of global energy consumption expended on transportation, the energy spent moving goods and people through wall-bounded turbulent flows is a significant component of the nation’s energy budget. Despite that impact, however, scientists do not yet understand the physics of turbulent flows in sufficient detail to make reliable predictions of the lift or drag of these systems.

To probe the physics of wall-bounded turbulent flows, a team of scientists at the University of Texas is conducting the largest-ever direct numerical simulation (DNS) of wall-bounded turbulence, at a friction Reynolds number of Reτ = 5200. With 242 billion degrees of freedom, this simulation is fifteen times larger than the previous largest channel DNS, performed by Hoyas and Jiménez in 2006.

In a DNS of turbulence, the equations of fluid motion (the Navier-Stokes equations) are solved, without any modeling, at sufficient resolution to represent all the scales of turbulence. In general, the full three-dimensional data fields of turbulent flow are difficult to obtain experimentally. On the other hand, computer simulations provide exquisitely detailed and highly reliable data, which have driven a number of discoveries regarding the nature of wall-bounded turbulence.

However, the use of DNS to study high-speed flows has been hindered by its significant computational expense. Resolving all the essential scales of turbulence imposes enormous computation and memory requirements, so DNS must be performed on the largest supercomputers. For this reason, DNS is a challenging HPC problem and a common application for evaluating the performance of Top500 systems. Because a DNS is so expensive to run, improvements in computational efficiency allow the simulation of more realistic scenarios (higher Reynolds numbers and larger domains) than would otherwise be possible.
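
To see why the cost climbs so steeply with Reynolds number, consider the classical Kolmogorov estimate that the number of grid points needed to resolve all scales of homogeneous turbulence grows roughly as Re^(9/4). The sketch below turns that scaling into back-of-envelope storage figures; it is illustrative only, not the sizing used for this simulation.

```c
/* Back-of-envelope DNS grid-size estimate (illustrative only).
 * Assumes the classical Kolmogorov scaling N ~ Re^(9/4); real
 * channel-flow DNS sizing is more subtle. */
#include <math.h>
#include <stdio.h>

int main(void) {
    const double reynolds[] = {1e3, 1e4, 1e5};
    for (int i = 0; i < 3; ++i) {
        double n_points = pow(reynolds[i], 9.0 / 4.0);
        /* 3 velocity components, 8 bytes each, per grid point */
        double bytes = n_points * 3.0 * 8.0;
        printf("Re = %8.0f -> ~%.1e grid points, ~%.2f TB per field set\n",
               reynolds[i], n_points, bytes / 1e12);
    }
    return 0;
}
```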

Vortex visualization of turbulent flow

M.K. (Myoungkyu) Lee, the lead developer of the new DNS code used in the simulations, will present the results of numerous software optimizations during the Extreme-Scale Applications session at SC13, on Tuesday, Nov 19th, 1:30PM – 2:00PM. The presentation will detail scaling results across a variety of Top500 platforms, including the Texas Advanced Computing Center’s Lonestar and Stampede, the National Center for Supercomputing Applications’ Blue Waters, and the Argonne Leadership Computing Facility’s Blue Gene/Q Mira, where the full scientific simulation was conducted.

The results demonstrate that performance depends strongly on the characteristics of the communication network and on memory bandwidth, rather than on single-core performance. On Blue Gene/Q, for instance, the code exhibits approximately 80% strong-scaling parallel efficiency on 786K cores relative to its performance on 65K cores. The largest benchmark case uses 2.3 trillion grid points, with a corresponding memory requirement of 130 terabytes.
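
For readers unfamiliar with the metric, strong-scaling efficiency compares the core-hours consumed at two core counts for a fixed problem size. The sketch below applies the standard definition; the per-step timings in it are hypothetical placeholders chosen only to reproduce the 80% figure, not measured values from the talk.

```c
/* Strong-scaling parallel efficiency: E = (P_base * T_base) / (P * T).
 * The timings below are hypothetical, not measured values. */
#include <stdio.h>

double strong_scaling_efficiency(double p_base, double t_base,
                                 double p, double t) {
    return (p_base * t_base) / (p * t);
}

int main(void) {
    double p_base = 65536.0,  t_base = 120.0; /* hypothetical sec/step */
    double p      = 786432.0, t      = 12.5;  /* hypothetical sec/step */
    printf("efficiency = %.0f%%\n",
           100.0 * strong_scaling_efficiency(p_base, t_base, p, t));
    return 0;
}
```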

The code was developed using Fourier spectral methods, which are typically preferred for turbulence DNS because of their superior resolution properties, despite the expensive communication the algorithm then requires. Optimization addressed several major issues: the efficiency of banded-matrix linear algebra, cache reuse and memory access, threading efficiency, and communication for the global data transposes.
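
As a minimal illustration of the Fourier-method idea (not the production solver), the 1D sketch below differentiates a periodic function spectrally with FFTW: transform, multiply each mode by i times its signed wavenumber, and transform back. The derivative is exact to machine precision for resolved modes, which is the resolution advantage the article refers to.

```c
/* Spectral differentiation of a periodic function via FFTW (1D sketch).
 * Illustrates the Fourier-method idea only; the production DNS code is
 * a parallel 3D solver and far more involved. */
#include <complex.h>
#include <fftw3.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    const int n = 64;
    fftw_complex *u  = fftw_alloc_complex(n);
    fftw_complex *uh = fftw_alloc_complex(n);
    fftw_plan fwd = fftw_plan_dft_1d(n, u, uh, FFTW_FORWARD,  FFTW_ESTIMATE);
    fftw_plan bwd = fftw_plan_dft_1d(n, uh, u, FFTW_BACKWARD, FFTW_ESTIMATE);

    for (int j = 0; j < n; ++j)            /* u(x) = sin(x) on [0, 2*pi) */
        u[j] = sin(2.0 * M_PI * j / n);

    fftw_execute(fwd);
    for (int k = 0; k < n; ++k) {          /* multiply by i*k in spectral space */
        int kk = (k <= n / 2) ? k : k - n; /* signed wavenumber */
        uh[k] *= I * (double)kk / n;       /* includes 1/n normalization */
    }
    fftw_execute(bwd);                     /* u now holds du/dx = cos(x) */

    printf("du/dx at x=0: %.6f (exact 1.0)\n", creal(u[0]));

    fftw_destroy_plan(fwd); fftw_destroy_plan(bwd);
    fftw_free(u); fftw_free(uh);
    return 0;
}
```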

A special linear algebra solver was developed, based on a custom matrix data structure in which non-zero elements are moved into otherwise-empty slots, halving the memory requirement, which is important for cache management. In addition, it was found that compilers optimized the low-level matrix-element operations in the LU decomposition inefficiently, so the loops were unrolled by hand to improve reuse of data in cache.
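
The article does not spell out the custom layout, but the general idea of packing a banded matrix’s non-zeros into a dense array can be sketched with the classic band-storage convention below. The struct and helpers are hypothetical illustrations, not the authors’ data structure.

```c
/* Generic banded-storage sketch (LAPACK-style band layout), showing how
 * a mostly-zero matrix can be packed densely: an n x n matrix with kl
 * sub- and ku super-diagonals fits in (kl + ku + 1) x n entries.
 * NOT the authors' custom structure, only the general idea. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int n, kl, ku;
    double *ab;          /* (kl + ku + 1) * n packed entries */
} band_matrix;

/* Element A(i,j), 0-based, lives at ab[(i - j + ku) * n + j]. */
static double band_get(const band_matrix *m, int i, int j) {
    if (j - i > m->ku || i - j > m->kl) return 0.0;  /* outside the band */
    return m->ab[(i - j + m->ku) * m->n + j];
}

static void band_set(band_matrix *m, int i, int j, double v) {
    m->ab[(i - j + m->ku) * m->n + j] = v;
}

int main(void) {
    band_matrix m = {6, 1, 1, calloc(3 * 6, sizeof(double))}; /* tridiagonal */
    for (int i = 0; i < m.n; ++i) {
        band_set(&m, i, i, 2.0);
        if (i > 0)       band_set(&m, i, i - 1, -1.0);
        if (i < m.n - 1) band_set(&m, i, i + 1, -1.0);
    }
    printf("A(2,3) = %g, A(2,5) = %g\n", band_get(&m, 2, 3), band_get(&m, 2, 5));
    /* storage: 3*6 = 18 doubles instead of 36 for the dense matrix */
    free(m.ab);
    return 0;
}
```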

Streamwise velocity (sides) and wall-shear stress (top) of turbulent flow between two parallel plates

The FFTs, on-node data reordering, and the time advance were all threaded using OpenMP to enhance single-node performance. This was very effective, with the code demonstrating nearly perfect (99%) OpenMP scalability.
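
A minimal sketch of this pattern, assuming FFTW’s thread-safe new-array execute interface (the article does not name the exact FFT kernels): one plan is created serially and then re-executed concurrently on independent lines of data, so each OpenMP thread transforms its own planes.

```c
/* OpenMP threading of many independent 1D FFTs, one line per iteration
 * (illustrative sketch only, not the production code).  A single plan is
 * created serially and re-executed with the thread-safe new-array
 * interface fftw_execute_dft(). */
#include <complex.h>
#include <fftw3.h>

#define N      256   /* points per line   (hypothetical) */
#define NLINES 512   /* independent lines (hypothetical) */

int main(void) {
    fftw_complex *data = fftw_alloc_complex((size_t)N * NLINES);
    /* Plan once, in serial, on the first line. */
    fftw_plan p = fftw_plan_dft_1d(N, data, data, FFTW_FORWARD, FFTW_ESTIMATE);

    #pragma omp parallel for schedule(static)
    for (int line = 0; line < NLINES; ++line) {
        fftw_complex *ptr = data + (size_t)line * N;
        fftw_execute_dft(p, ptr, ptr);   /* thread-safe re-execution */
    }

    fftw_destroy_plan(p);
    fftw_free(data);
    return 0;
}
```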

The talk will also discuss how replacing the existing library for 3D global fast Fourier transforms (P3DFFT) with a new library built on the FFTW 3.3 MPI interface led to substantially improved communication performance.
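
The article does not show the new library’s internals; the sketch below is simply the basic FFTW 3.3 MPI interface it was built on, with placeholder dimensions, showing how a distributed 3D transform, including its internal global transposes, is planned and executed.

```c
/* Minimal FFTW 3.3 MPI-interface sketch: a distributed 3D complex DFT.
 * Dimensions are placeholders; the actual library adds much more. */
#include <complex.h>
#include <fftw3-mpi.h>

int main(int argc, char **argv) {
    const ptrdiff_t n0 = 128, n1 = 128, n2 = 128;   /* hypothetical grid */
    ptrdiff_t local_n0, local_0_start;

    MPI_Init(&argc, &argv);
    fftw_mpi_init();

    /* How much of the slab-decomposed array lives on this rank? */
    ptrdiff_t alloc = fftw_mpi_local_size_3d(n0, n1, n2, MPI_COMM_WORLD,
                                             &local_n0, &local_0_start);
    fftw_complex *data = fftw_alloc_complex(alloc);

    fftw_plan p = fftw_mpi_plan_dft_3d(n0, n1, n2, data, data,
                                       MPI_COMM_WORLD, FFTW_FORWARD,
                                       FFTW_ESTIMATE);
    /* ... fill the local slab data[0 .. local_n0*n1*n2) ... */
    fftw_execute(p);   /* global transposes happen inside FFTW */

    fftw_destroy_plan(p);
    fftw_free(data);
    fftw_mpi_cleanup();
    MPI_Finalize();
    return 0;
}
```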

The full scientific simulation used 300 million core-hours on ALCF’s BG/Q Mira, awarded through the Department of Energy Early Science Program and the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) 2013 program. Each restart file generated by the simulation is 1.8 TB, and approximately eighty such files, roughly 144 TB in total, have been archived for long-term postprocessing and investigation. Postprocessing such a large amount of data is itself a supercomputing challenge.

Presentation Information

Title : Petascale Direct Numerical Simulation of Turbulent Channel Flow on up to 786K Cores

Location : Room 201/203

Session : Extreme-Scale Applications

Time : Tuesday, Nov 19th, 1:30PM – 2:00PM

Presenter : M.K.(Myoungkyu) Lee

SC13 Scheduler : http://sc13.supercomputing.org/schedule/event_detail.php?evid=pap689

About

M.K. (Myoungkyu) Lee is a Ph.D. student in the Department of Mechanical Engineering at the University of Texas at Austin.
mk@ices.utexas.edu

Nicholas Malaya is a researcher in the Center for Predictive Engineering and Computational Sciences (PECOS) within the Institute for Computational Engineering and Sciences (ICES) at The University of Texas at Austin.
nick@ices.utexas.edu

Robert D. Moser holds the W. A. “Tex” Moncrief Jr. Chair in Computational Engineering and Sciences and is a professor of mechanical engineering in thermal fluid systems. He serves as the director of the ICES Center for Predictive Engineering and Computational Sciences (PECOS) and deputy director of the Institute for Computational Engineering and Sciences (ICES).
rmoser@ices.utexas.edu
