Getting Ready for KNL? Take a Lesson from NERSC on Optimizing Applications

By Rob Farber

February 10, 2016

NERSC is beginning to tell the world how to optimize applications to run on the new Intel Xeon Phi processors, code-named Knights Landing (KNL), which will boot in self-hosted mode to power over 9,300 nodes of the Cori supercomputer by the summer of 2016. Indeed, general availability of KNL is expected before the end of Q2, making early guidance all the more useful. The recent NERSC presentation at SC15, Early Experiences Optimizing Applications for the CORI Supercomputer, covers tools, techniques, and sources of information that developers can use now to optimize codes for Cori, Intel Xeon, and the new Intel Xeon Phi processors.

The panel presentation provides insights from Katie Antypas (head of the NERSC Scientific Computing and Data Services department) and a NERSC panel composed of Richard Gerber (senior science advisor and lead of the User Services Group), Doug Doerfler (computer systems engineer), and Brian Austin (staff member in the Advanced Technologies Group). In particular, Antypas pointed out that “Cori is a pre-exascale system that will showcase a number of the technologies that we are going to see in eventual exascale systems in the 2020 timeframe,” which means these optimizations have a high likelihood of carrying over to future exascale supercomputers as well.

The NERSC Edison supercomputer is currently the largest system on the NERSC floor. It is a Cray XC30 powered by Intel Xeon processors (formerly code-named Ivy Bridge). Antypas pointed out that “This system is incredibly popular with users. What NERSC needs to do now is move that workload over to more advanced, energy-efficient architectures,” meaning the new Intel Xeon Phi processor nodes.

NERSC slide 1

Adapting codes to run efficiently on the Cori Intel Xeon Phi processor nodes isn’t a trivial task; NERSC users must ensure their codes can fully exploit the many lightweight cores, the longer-vector AVX-512 instructions, and the deeper memory architecture of the new Intel Xeon Phi processors. For power and thermal reasons, the many cores on a single KNL chip are based on the Intel Silvermont microarchitecture rather than a high-clock-rate Intel Xeon core. Instead, each core provides dual vector units to deliver very high Intel Xeon Phi processor floating-point performance.

NERSC slide 2

The scope of the transition is daunting, as over 5,000 users and 700 projects need to be transitioned from the Edison Intel Xeon processors to the new Intel Xeon Phi processors. To run well, applications must exhibit good MPI scaling, exploit vectorization, and utilize OpenMP or another threading model to increase thread parallelism by an order of magnitude to over 240 threads per processor. Antypas stressed that “Vectorization is no longer something users can ignore.” She backed this up with the observation that “Before, vectorization provided a speed increase of 2x – 4x. Now it will be 8x.” In short, users need to focus on (1) exploiting 10x more threads, (2) achieving a possible 8x vectorization speedup, and (3) efficiently utilizing the deeper memory hierarchy that will be present on the Cori Intel Xeon Phi processor nodes.
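
To make the first two items concrete, the minimal sketch below (a hypothetical kernel written for this article, not code from any NESAP team) shows the basic pattern NERSC is asking for: OpenMP threads spread a loop across the many lightweight cores, while an OpenMP SIMD clause invites the compiler to fill the 512-bit vector units.

    /* Minimal sketch of combined thread and vector parallelism (hypothetical
     * kernel, not from any NESAP application). Build with OpenMP enabled and
     * an AVX-512 target, e.g. the Intel compiler flag -xMIC-AVX512 for KNL. */
    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N (1 << 24)

    int main(void)
    {
        double *a = malloc(N * sizeof(double));
        double *b = malloc(N * sizeof(double));
        double sum = 0.0;

        /* Threads split the loop across cores; the simd clause asks the
         * compiler to vectorize each thread's chunk of iterations. */
        #pragma omp parallel for simd reduction(+:sum)
        for (long i = 0; i < N; i++) {
            a[i] = 0.5 * i;
            b[i] = 2.0 * i;
            sum += a[i] * b[i];
        }

        printf("max threads = %d, sum = %g\n", omp_get_max_threads(), sum);
        free(a);
        free(b);
        return 0;
    }

On a KNL-class node the same binary would simply be launched with a much larger OMP_NUM_THREADS setting, in line with the “over 240 threads per processor” figure above, while a handful of threads suffices on today’s Intel Xeon nodes.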

To make the migration process manageable, twenty application teams were selected for the NERSC Exascale Science Applications Program (NESAP). These teams will work closely with Intel and Cray to prepare their applications to run efficiently on the Intel Xeon Phi processor nodes. The codes span a wide range of scientific domains, as shown below.

Advanced Scientific Computing Research (ASCR)

  • Optimization of the BoxLib Adaptive Mesh Refinement Framework for Scientific Application Codes, Ann Almgren (Lawrence Berkeley National Laboratory)
  • High-Resolution CFD and Transport in Complex Geometries Using Chombo-Crunch, David Trebotich (Lawrence Berkeley National Laboratory)

Biological and Environmental Research (BER)

  • CESM Global Climate Modeling, John Dennis (National Center for Atmospheric Research)
  • High-Resolution Global Coupled Climate Simulation Using The Accelerated Climate Model for Energy (ACME), Hans Johansen (Lawrence Berkeley National Laboratory)
  • Multi-Scale Ocean Simulation for Studying Global to Regional Climate Change, Todd Ringler (Los Alamos National Laboratory)
  • Gromacs Molecular Dynamics (MD) Simulation for Bioenergy and Environmental Biosciences, Jeremy C. Smith (Oak Ridge National Laboratory)
  • Meraculous, a Production de novo Genome Assembler for Energy-Related Genomics Problems, Katherine Yelick (Lawrence Berkeley National Laboratory)

Basic Energy Sciences (BES)

  • Large-Scale Molecular Simulations with NWChem, Eric Jon Bylaska (Pacific Northwest National Laboratory)
  • Parsec: A Scalable Computational Tool for Discovery and Design of Excited State Phenomena in Energy Materials, James Chelikowsky (University of Texas, Austin)
  • BerkeleyGW: Massively Parallel Quasiparticle and Optical Properties Computation for Materials and Nanostructures, Jack Deslippe (NERSC)
  • Materials Science using Quantum Espresso, Paul Kent (Oak Ridge National Laboratory)
  • Large-Scale 3-D Geophysical Inverse Modeling of the Earth, Greg Newman (Lawrence Berkeley National Laboratory)

Fusion Energy Sciences (FES)

  • Understanding Fusion Edge Physics Using the Global Gyrokinetic XGC1 Code, Choong-Seock Chang (Princeton Plasma Physics Laboratory)
  • Addressing Non-Ideal Fusion Plasma Magnetohydrodynamics Using M3D-C1, Stephen Jardin (Princeton Plasma Physics Laboratory)

High Energy Physics (HEP)

  • HACC (Hardware/Hybrid Accelerated Cosmology Code) for Extreme Scale Cosmology, Salman Habib (Argonne National Laboratory)
  • The MILC Code Suite for Quantum Chromodynamics (QCD) Simulation and Analysis, Doug Toussaint (University of Arizona)
  • Advanced Modeling of Particle Accelerators, Jean-Luc Vay (Lawrence Berkeley National Laboratory)

Nuclear Physics (NP)

  • Domain Wall Fermions and Highly Improved Staggered Quarks for Lattice QCD, Norman Christ (Columbia University) and Frithjof Karsch (Brookhaven National Laboratory)
  • Chroma Lattice QCD Code Suite, Balint Joo (Jefferson National Accelerator Facility)
  • Weakly Bound and Resonant States in Light Isotope Chains Using MFDn — Many Fermion Dynamics Nuclear Physics, James Vary and Pieter Maris (Iowa State University)

Fifty application teams applied to the NESAP program, and NERSC selected 20 to partner with most closely in preparing for the Cori KNL architecture. Lessons learned from these 20 teams will be shared more broadly with the NERSC user community. As it turns out, the NERSC workload is highly diverse yet heavily concentrated in roughly 20 to 30 application codes; in fact, the top 25 applications make up about two-thirds of the NERSC workload. With the exception of a few emerging applications, the 20 NESAP teams come from this top tier and thus are representative of the NERSC workload.

Antypas acknowledged that some applications, generally those already amenable to thread and vector optimizations, will have an easier time being ported to the Knights Landing architecture than others. She said that codes with very flat execution profiles “might take years to optimize.” Further, codes with a significant serial runtime component can be problematic, because the serial sections may need to be parallelized over threads and/or vector operations, and such parallelization can be difficult.

Starting the transition now to the Intel Xeon Phi processor code-named Knights Landing

The panel had an excellent discussion about how to transition applications to the Cori Intel Xeon Phi processor nodes, even though those nodes are not scheduled to be operational until summer 2016.

Overall, the NERSC team is excited about the on-package high-bandwidth MCDRAM “near memory” shown in the graphic below. NERSC applications are typically limited by memory bandwidth, not flop/s.

The challenge with deep memory on the Cori Intel Xeon Phi processor nodes is the small 16 GB MCDRAM capacity compared to roughly 100 GB of DDR “far memory.” To get the best performance, users will need to determine which arrays benefit most from the fast on-package MCDRAM. Fortunately, users can also access the MCDRAM via a ‘cache mode’ that does not require any code changes. In the coming months, NERSC staff and NESAP teams will be exploring the different memory modes and the performance and usability trade-offs between ‘flat mode’ and ‘cache mode’.

NERSC slide 3

Doerfler pointed out that a primary benefit of transitioning applications to KNL is that developers have to think harder about vector- and thread-level parallelism. The result is codes that run faster on both Intel Xeon Phi and Intel Xeon processors. Doerfler also noted, “The transition to multilevel memory is important to exascale because developers are thinking about data locality.”

As for optimizing codes right now, Doerfler likes Intel VTune Amplifier for profiling application performance, but he has found that the Intel Software Development Emulator (Intel SDE) can be particularly illuminating; the more he uses it, the more he gets out of it. Succinctly, he said, “I encourage people to go out and use it.”

Austin told the audience that codes that already run well on the older Intel Xeon Phi coprocessors are well prepared to run on the Cori nodes. The performance key is to ensure that an MPI code can utilize threads within each MPI rank to exploit the vector and thread-parallel features of the Intel Xeon Phi processors.
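
As a rough illustration of that hybrid model (a generic sketch written for this article, not code shown by the panel), the example below spawns OpenMP threads inside every MPI rank and then combines the rank-local results in the usual way:

    /* Generic hybrid MPI + OpenMP sketch: a few MPI ranks per node, with
     * OpenMP threads filling the cores owned by each rank. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, nranks;

        /* Request an MPI library mode that tolerates threaded ranks. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        double local = 0.0, global = 0.0;

        /* Threads carve up this rank's share of the work. */
        #pragma omp parallel for reduction(+:local)
        for (long i = 0; i < 1000000; i++)
            local += 1.0;               /* stand-in for real per-element work */

        /* Ranks combine their partial results across the machine. */
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("%d ranks x up to %d threads each, total = %g\n",
                   nranks, omp_get_max_threads(), global);

        MPI_Finalize();
        return 0;
    }

Built with an MPI compiler wrapper and launched with a handful of ranks per node, the thread count per rank can then be dialed up on Intel Xeon Phi hardware without touching the MPI structure of the code.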

Gerber stressed that anyone who wishes to prepare for Cori should talk to the Intel Xeon Phi Users Group (IXPUG), an independent user group whose mission is to provide a forum for the free exchange of information to improve the efficiency and usability of HPC applications running on large Intel Xeon Phi processor-based HPC systems. Gerber pointed out that the IXPUG five-minute lightning talks, “Tales from the trenches,” are a great source of lessons learned about performance tricks, tools, and other useful Intel Xeon Phi processor performance information. IXPUG has proven to be wildly popular, with numerous events happening all around the world. For example, Gerber noted that over 150 people showed up at the Berkeley meeting, which filled the room beyond capacity and prevented some from attending.

Until operational Cori nodes are available, the NERSC panel suggests using the following hardware proxies, chosen according to the characteristics of the application. The key point, however, is simply to develop hybrid MPI/threaded application code now and then start running as soon as the hardware is available.

  • Lots of threads: run on an Intel Xeon Phi processor.
  • Lots of vectorization: use an Intel Xeon Phi processor, although groups can also do a lot of good vectorization work on Intel Xeon processors.
  • Investigate memory depth: there is no good proxy for memory depth until the Knights Landing processor is available. Groups can experiment with memkind (a user-extensible heap manager for heterogeneous and mixed memory platforms), as in the sketch after this list, but the speed differentials are not that great on the current generation of hardware.
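
For readers curious what explicit ‘flat mode’ placement looks like, here is a minimal sketch using memkind’s hbwmalloc interface; the array names and sizes are invented for illustration, and the PREFERRED policy lets the same code fall back to DDR on machines without high-bandwidth memory:

    /* Illustrative flat-mode data placement with memkind's hbwmalloc API
     * (link with -lmemkind). The bandwidth-critical array is steered toward
     * MCDRAM, while the capacity-bound array stays in ordinary DDR. */
    #include <hbwmalloc.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N 10000000L

    int main(void)
    {
        /* Prefer high-bandwidth memory but fall back to DDR if none exists. */
        hbw_set_policy(HBW_POLICY_PREFERRED);

        double *hot  = hbw_malloc(N * sizeof(double));  /* bandwidth-critical */
        double *cold = malloc(N * sizeof(double));      /* capacity-bound     */

        if (!hot || !cold) {
            fprintf(stderr, "allocation failed\n");
            return 1;
        }

        for (long i = 0; i < N; i++)
            hot[i] = cold[i] = (double)i;

        printf("high-bandwidth memory available: %s\n",
               hbw_check_available() == 0 ? "yes" : "no");

        hbw_free(hot);
        free(cold);
        return 0;
    }

In cache mode, by contrast, the same program runs unmodified and the 16 GB of MCDRAM simply acts as a large cache in front of DDR, which is exactly the trade-off the NESAP teams will be quantifying.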

In conclusion, the NERSC team pointed out that a number of detailed application case studies are available on the NERSC website, part of a broader effort to educate the community.

View the video of the entire panel discussion.

For more information:

IXPUG: https://www.ixpug.org

Cori application porting: http://www.nersc.gov/users/computational-systems/cori/application-porting-and-performance/

About the Author

Rob Farber is a global technology consultant and author with an extensive background in HPC and a long history of working with national labs and corporations engaged in both HPC and enterprise computing. He can be reached at [email protected] DOT com.
