NERSC Cori Shows the World How Many-Cores for the Masses Works

By Rob Farber

April 21, 2017

As its mission, NERSC (the National Energy Research Scientific Computing Center), the high performance computing center for the U.S. Department of Energy Office of Science, supports a broad spectrum of forefront scientific research across diverse areas that include climate, materials science, chemistry, fusion energy, high-energy physics and many others.

“We have about 6,000 users – with 700 different codes – who are doing research across all fields of interest to the Office of Science and we support them all,” said Richard Gerber, NERSC HPC department head and senior science advisor. “That means that all our users and all their codes have to run, and run well, on our systems. One of our challenges is to get our entire workload to run efficiently and effectively on next-generation supercomputers. This goal has become known as ‘Many core for the masses,’ and that’s what we will be spending a lot of time working on in the upcoming year.”

By definition, then, many-core for the masses at NERSC means getting all the Office of Science applications running on NERSC’s new Cori supercomputer, with its 9,300 Intel Xeon Phi processor nodes (formerly code-named Knights Landing, or KNL) and 1,900 Intel Xeon processor compute nodes.

“Cori is NERSC’s first manycore system and is on the path to exascale,” Gerber continued. “In particular it’s the first system where single-thread performance may be lower than single-thread performance on the previous system. This presents a real challenge for some users.” The Cori supercomputer also presents a deeper memory/storage hierarchy, from the Intel Xeon Phi processor’s on-package MCDRAM, to DDR memory, to a burst-buffer flash storage layer, and all the way through to the Lustre file system.
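To give a concrete flavor of what targeting that hierarchy can look like, the following is a minimal sketch, in C, of placing a bandwidth-critical array in the on-package MCDRAM through the memkind library’s hbwmalloc interface, assuming a flat-mode MCDRAM configuration. It is illustrative only, not code from any NESAP application; the array size, fallback behavior and build line are arbitrary choices.

```c
/* Minimal sketch: explicitly placing a hot array in KNL on-package MCDRAM
 * via the memkind library's hbwmalloc interface (flat-mode assumption).
 * Build line is assumed: cc -O2 mcdram_demo.c -lmemkind                  */
#include <stdio.h>
#include <stdlib.h>
#include <hbwmalloc.h>

int main(void)
{
    const size_t n = 1 << 24;              /* ~16M doubles, ~128 MB           */
    int have_hbw = (hbw_check_available() == 0);  /* 0 means MCDRAM reachable */
    double *a = have_hbw ? hbw_malloc(n * sizeof *a)   /* high-bandwidth mem  */
                         : malloc(n * sizeof *a);      /* fall back to DDR    */

    if (!a) { perror("alloc"); return 1; }

    for (size_t i = 0; i < n; ++i)         /* bandwidth-bound initialization  */
        a[i] = (double)i;

    printf("a[last] = %g\n", a[n - 1]);

    if (have_hbw) hbw_free(a); else free(a);
    return 0;
}
```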

In preparing for Cori over the past two years, the NERSC team launched NESAP (the NERSC Exascale Science Applications Program), a collaborative effort in which NERSC partners with code teams, library and tool developers, Intel, Cray and the broader HPC community to ready applications for the Cori many-core architecture. Twenty projects were selected for NESAP based on computational and scientific reviews by NERSC and other DOE staff. Together, these projects represent about half of the runtime hours used on the NERSC supercomputers.

Figure 1: NESAP activities

The idea is to provide training for staff and postdocs and to apply the lessons learned to the broad NERSC user community. These lessons are also widely applicable to the general Intel Xeon and Intel Xeon Phi processor user community. “As we learn things, a big part of our strategy is to take that knowledge and spread it out to the community – the community of our 6,000 users but also the worldwide community,” Gerber pointed out in the NERSC talk at the recent Intel HPC Developer Conference, Many Cores for the Masses: Lessons Learned from Application Readiness Efforts at NERSC for the Knights Landing Based Cori System.

Jack Deslippe, who leads the NESAP effort and the NERSC Application Performance Group, reiterated the point: “Cori represents the first machine that NERSC has procured where doing nothing means that a user’s code can actually run slower on the new system node-per-node.” That is why the NESAP program is an “all hands on deck” effort to work at a much deeper level with user code than NERSC has done before. “This effort has touched every group at NERSC,” he said, “and has created a level of collaboration with Cray and Intel engineers on apps that has never occurred at the center before.”

Optimization for the Masses

When talking to scientists and users, the NERSC team likens the optimization process to an ant farm — an analogy that has become popular, no doubt, due to its silliness. “This is the sort of out-of-the-box thinking that gives you a promotion at Berkeley,” Deslippe noted in an SC16 talk, which garnered a hearty laugh from the audience. The truth reflected by the ant farm model (shown below) is that optimizing code is not always a straightforward process. In particular, Deslippe observed that “it is easy to get lost in the weeds” – especially with Intel Xeon Phi processors, given the wealth of new architectural features on these devices that a programmer might want to target.

Figure 2: How to talk to the masses about optimizing codes for Cori

Profiling your code is “like a lawnmower that constantly finds and knocks down the next tallest blade of grass,” he said, an analogy to repeatedly optimizing the section of code that consumes the greatest amount of runtime. The programmers then take that code section away for investigation. To bring order to the ant farm, NERSC has employed the roofline model, which tells programmers not only how much they are improving the code against an absolute measure of performance (shown on the y-axis below), but also which architectural features might help. The position of a code’s performance relative to the ceilings in the model shows where potential gains can be achieved, be it via vectorization (AVX), code restructuring for greater instruction-level parallelism (ILP), or more efficient use of the high-bandwidth memory (HBM).

Figure 3: The roofline model is a valuable optimization tool

The ability to easily collect accurate roofline performance data is the result of a collaboration between Intel and NERSC staff. (See http://www.nersc.gov/users/application-performance/measuring-arithmetic-intensity for more information.) The NERSC team is also actively working with Intel on the co-design of performance tools in the Intel Advisor utility, which now includes the roofline model.
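For readers unfamiliar with the model, the roofline bound itself is simple arithmetic: attainable performance is the lesser of the machine’s compute peak and the product of a kernel’s arithmetic intensity (FLOPs per byte moved) and the available memory bandwidth. The short C sketch below illustrates the calculation; the peak and bandwidth numbers are placeholder values chosen for illustration, not measured Cori figures.

```c
/* Illustrative sketch of the roofline bound: attainable GFLOP/s is the
 * minimum of the compute peak and (arithmetic intensity x bandwidth).
 * All machine numbers below are placeholders, not measured Cori values. */
#include <stdio.h>

static double roofline(double ai, double peak_gflops, double bw_gbytes)
{
    double mem_bound = ai * bw_gbytes;     /* GFLOP/s limited by bandwidth */
    return mem_bound < peak_gflops ? mem_bound : peak_gflops;
}

int main(void)
{
    const double peak      = 2000.0;       /* hypothetical node peak, GFLOP/s     */
    const double bw_mcdram =  400.0;       /* hypothetical MCDRAM stream BW, GB/s */
    const double bw_ddr    =   90.0;       /* hypothetical DDR stream BW, GB/s    */

    for (double ai = 0.25; ai <= 32.0; ai *= 2.0)   /* FLOPs per byte moved */
        printf("AI %5.2f  DDR roof %7.1f  MCDRAM roof %7.1f GFLOP/s\n",
               ai, roofline(ai, peak, bw_ddr), roofline(ai, peak, bw_mcdram));
    return 0;
}
```

The crossover point where the bandwidth line meets the compute peak is what tells a programmer whether to chase memory optimizations (such as MCDRAM placement) or instruction-level ones (such as vectorization).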

Early Cori Intel Xeon Phi processor single-node results

Early single Intel Xeon Phi processor node results show excellent speedups on the NESAP codes, with a maximum speedup of 13x for the BerkeleyGW package, a set of computer codes written at Berkeley that calculate the quasiparticle properties and optical responses of a large variety of materials.

The optimization of one of its kernels (Kernel-C) was guided by the roofline model; the performance impact of six optimization steps is shown below. Note that the optimization process delivered significant performance increases on the Intel Xeon processors as well.

Figure 4: Performance impact of the six Kernel-C optimization steps
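Optimization steps of this kind frequently center on exposing vectorization to the compiler. The fragment below is a hedged, generic illustration of that idea using restrict pointers and an OpenMP SIMD pragma; the loop is a stand-in kernel, not BerkeleyGW’s actual Kernel-C, and the build line is an assumption.

```c
/* Hedged, generic illustration of a vectorization step: restrict pointers
 * plus an OpenMP SIMD pragma let the compiler generate AVX/AVX-512 code
 * for the inner loop. Build line assumed: cc -O3 -fopenmp vec_demo.c     */
#include <stdio.h>
#include <stdlib.h>

static void axpy_scaled(size_t n, double alpha, const double *restrict x,
                        const double *restrict w, double *restrict y)
{
    #pragma omp simd                       /* assert no loop-carried dependence */
    for (size_t i = 0; i < n; ++i)
        y[i] += alpha * w[i] * x[i];
}

int main(void)
{
    const size_t n = 1000000;
    double *x = malloc(n * sizeof *x), *w = malloc(n * sizeof *w),
           *y = calloc(n, sizeof *y);
    if (!x || !w || !y) return 1;

    for (size_t i = 0; i < n; ++i) { x[i] = 1.0; w[i] = 2.0; }
    axpy_scaled(n, 0.5, x, w, y);
    printf("y[0] = %g\n", y[0]);           /* expect 1.0 */

    free(x); free(w); free(y);
    return 0;
}
```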

Overall, the NESAP optimization process delivered significant increases in performance on both the Intel Xeon and the Intel Xeon Phi processor Cori computational nodes. Intel Xeon processor results are shown in orange below and Intel Xeon Phi processor results in blue. In most cases, the speedup was greater on the Intel Xeon Phi processors than on the Intel Xeon processors. Doug Doerfler noted that “Haswell tends to be more forgiving of unoptimized code.” The BoxLib code is one exception because it started as a bandwidth-limited code that fit into the Intel Xeon Phi processor MCDRAM memory.

Figure 5: NESAP speedups on the Intel Xeon (orange) and Intel Xeon Phi (blue) processor nodes

In general, the MCDRAM memory system benefited most of the NESAP applications.

Figure 6: NESAP performance improvements attributed to the MCDRAM memory system

Early Cori Scaling Studies

Cori contains a large number of computational nodes, so scaling is a key factor in utilizing the machine efficiently. Of concern is the observation that an Intel Xeon Phi processor core delivers roughly one-third the sequential performance of a Haswell/Broadwell Xeon core. Yet this lower-performance core must support both the application and much of the communication stack, including the processing of MPI communication calls.
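One common way to ease that pressure is a hybrid MPI + OpenMP decomposition that places only a few MPI ranks on each many-core node and lets OpenMP threads fill the cores, so fewer slow cores spend time in the MPI stack. The sketch below shows the basic pattern; the build and launch commands in the comments are assumptions for illustration, not NERSC’s recommended settings for any particular code.

```c
/* Hedged sketch of a hybrid MPI + OpenMP layout for many-core nodes:
 * a handful of ranks per node, each driving many OpenMP threads.
 * Build (assumed): mpicc -O2 -fopenmp hybrid.c
 * Run (assumed):   srun -N 2 -n 8 -c 16 ./a.out   with OMP_NUM_THREADS=16 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* MPI_THREAD_FUNNELED: only the master thread makes MPI calls,
     * keeping MPI processing off most of the (slower) cores.         */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    #pragma omp parallel
    {
        #pragma omp master
        printf("rank %d of %d: %d OpenMP threads\n",
               rank, nranks, omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```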

As summarized in the following graphic, NERSC has found that Cori shows performance improvements at all scales and decompositions.

Figure 7: The Cori supercomputer shows scaling speedups at all scales

Summary

In a majority of the NESAP codes and kernels reporting results, single-node runs on the Intel Xeon Phi nodes outperformed single-node runs on the Intel Xeon processor (Haswell) nodes. That superior Intel Xeon Phi processor performance, however, came only after optimization guided by the roofline model.

About the Author

Rob Farber is a global technology consultant and author with an extensive background in HPC and in developing machine learning technology that he applies at national labs and commercial organizations. He was also the editor of Parallel Programming with OpenACC. Rob can be reached at [email protected]
