PEARC21 Panel: Wafer-Scale-Engine Technology Accelerates Machine Learning, HPC

By Ken Chiacchia, Pittsburgh Supercomputing Center/XSEDE

July 21, 2021

Early use of Cerebras’ CS-1 server and wafer-scale engine (WSE) has demonstrated promising acceleration of machine-learning algorithms, according to participants in the Scientific Research Enabled by CS-1 Systems panel, presented at the PEARC21 conference. The panel, which for the first time brought together leading teams employing the CS-1 at the Pittsburgh Supercomputing Center (PSC), Argonne National Laboratory (ANL) and Lawrence Livermore National Laboratory (LLNL), charted the promise of the technology as well as the next steps in applying it both to artificial intelligence and to HPC projects that do not use AI.

The PEARC conference series provides a forum for discussing challenges, opportunities and solutions among the broad range of participants in the research computing community. This community-driven effort builds on past successes and aims to grow and become more inclusive by involving additional local, regional, national and international cyberinfrastructure and research computing partners spanning academia, government and industry. PEARC21, themed “Evolution Across All Dimensions,” was offered this year as a virtual event (July 19-22).

Cerebras Systems Technology Summary and Outlook

Moderated by co-organizer Sergiu Sanielevici, director of user support at PSC, the panel kicked off with a presentation by co-organizer Natalia Vassilieva, director of product at Cerebras.

While the field has progressed phenomenally over the past decade, these advances have come at a computational cost, Vassilieva explained. Ballooning memory and compute requirements for training have driven proportional increases in required petaflops-days, with OpenAI’s GPT-3 language model requiring about 116 days to train on 1,024 Nvidia V100 GPUs.

“Modern models need much more compute than can be [fit] on a single processor,” she said, and scaling for distributed training is far from ideal. “As you scale out to multiple devices, at some point you start to observe communication bottlenecks” and other limitations. “We need more compute per device, and the ability to rely less on data parallel training.”

“I think everybody understands that the current approach…is not sustainable,” she said.
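To make the communication bottleneck concrete, the sketch below shows the gradient-synchronization step at the heart of data-parallel training. It is a minimal illustration using PyTorch’s torch.distributed API with a toy stand-in model, not Cerebras or panelist code; the “gloo” backend and torchrun launch are illustrative assumptions. Every step must move the full set of gradients over the network, so traffic grows with model size no matter how the batch is split.

import torch
import torch.distributed as dist

def allreduce_gradients(model: torch.nn.Module) -> None:
    # Average gradients across all workers -- the communication whose cost
    # grows with parameter count, independent of how the batch is divided.
    world = dist.get_world_size()
    for p in model.parameters():
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad /= world

if __name__ == "__main__":
    # Launch with, e.g., `torchrun --nproc_per_node=4 this_script.py`.
    dist.init_process_group(backend="gloo")
    model = torch.nn.Linear(1024, 1024)  # trivial stand-in for a large network
    loss = model(torch.randn(8, 1024)).sum()
    loss.backward()
    allreduce_gradients(model)
    dist.destroy_process_group()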

The CS-1, and the newly introduced CS-2 built on the second-generation WSE, contain 400,000 and 850,000 cores per WSE, with 18 or 40 GB of on-chip memory, respectively, one cycle away from the compute elements – a memory bandwidth of 9 or 20 PByte/s. The extreme bandwidth within the chip (100 or 220 Pbit/s) also avoids a bottleneck of conventional systems. By harnessing unprecedented local memory and compute cores, the WSEs offer computational scaling with vastly reduced bottlenecks. The system represents a flexible and dynamic solution to the challenges of fine-grained sparsity and conditional and dynamic machine-learning techniques, Vassilieva said.

The Promise of CS-1/WSE for Research in Science and Engineering

Paola Buitrago, co-organizer and director of AI & Big Data at PSC and principal investigator of the center’s CS-1-based Neocortex, surveyed the state of the art in machine learning – and the challenges the field currently faces. Neocortex explores a slightly different approach to leveraging the CS-1 than other deployments, using an HPE Superdome Flex server as a single high-memory CPU front end between users and the WSEs, and federating the system with PSC’s larger Bridges-2. The unique high-memory configuration is intended to offer advantages in combined Big Data/AI applications.

Improvements in neural language models’ performance have required ever more compute and memory, with the parameter counts of recent transformer-type networks reaching hundreds of billions, or even trillions. Generative adversarial networks, domain adaptation and reinforcement learning approaches have also added complexity, Buitrago said.

Nor is the expense of improved ML models limited to computation, she explained. An analysis based on information released by Google estimated the monetary cost of training the 175-billion-parameter GPT-3 at about $10 million. At that scale of spending, reducing the ImageNet error rate from 11.5 percent to 1 percent would cost roughly $100 billion billion – $10^20 – she added.

“As models increase in size and the compute requirements increase, [we] also find that to further improve the models’ performance … becomes prohibitive with existing approaches,” Buitrago said. “The field is calling for a change in paradigm…CS-1, certainly as it was conceived, proposes a different approach to machine-learning training” by offering ways around the compute and memory limitations of current systems when used for AI training.

Scientific ML on Disaggregated Cognitive Simulation HPC Platforms

On behalf of Brian Van Essen, informatics group leader at LLNL, Vassilieva presented the Livermore group’s federation of the Lassen massively parallel compute cluster with a CS-1 WSE. The goal, she said, is to introduce machine-learning steps into traditional simulations in an intimate and iterative way that speeds attainment of accuracy. The team is using inertial confinement fusion (ICF) at LLNL’s National Ignition Facility as a testbed for the approach, called “cognitive simulation.”

Simulations must simplify natural phenomena to bring the computational burden down to a manageable level. Often, this means that their predictions don’t match experimental findings. Cognitive simulation improves a simulation’s accuracy using machine learning at different levels of a simulation job.

At the “in the loop” level, ML inferences are made at every time step of the computation. The “on the loop” level consists of ML training or inference every ~1,000 time steps. “Around the loop” training or inference happens with each simulation. Finally, with the addition of experimental data, “outside the loop” transfer learning occurs every ~10,000 simulations. The combination allows frequent training and potentially very high-frequency inference to accelerate the simulation. The approach leverages vast quantities of data generated by the simulations, couples simulations with experimental results and provides more accurate predictions for the complex multiphysics nature of ICF than are possible with traditional simulation-only modeling.
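A schematic of how those four levels nest is sketched below. The simulator and surrogate model are trivial stand-ins with hypothetical names, not LLNL’s cognitive-simulation code; the sketch shows only where each coupling level fires.

import random

class Surrogate:
    # Stand-in ML model: infer() corrects a state; train_on() and
    # transfer_learn() stand in for weight updates and fine-tuning.
    def infer(self, state): return state * 0.999
    def train_on(self, data): pass
    def transfer_learn(self, experiments): pass

def cognitive_simulation(n_sims=3, steps=3000):
    surrogate = Surrogate()
    for sim in range(1, n_sims + 1):
        state = random.random()
        for t in range(steps):
            state += 0.001                    # stand-in physics time step
            state = surrogate.infer(state)    # "in the loop": every step
            if t % 1000 == 0:
                surrogate.train_on(state)     # "on the loop": every ~1,000 steps
        surrogate.train_on(state)             # "around the loop": each simulation
        if sim % 10000 == 0:
            surrogate.transfer_learn([])      # "outside the loop": every ~10,000 sims

cognitive_simulation()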

Stream-AI-MD: Streaming AI-Driven Adaptive Molecular Simulations for Heterogeneous Computing Platforms

Arvind Ramanathan of ANL and the University of Chicago presented an application of machine learning to another traditional HPC domain, that of molecular simulations.

“The general idea is we want to implement…machine learning training on the fly, as simulations are running,” he said.

Pursued by traditional HPC means, multiscale simulations – for example, those of spike-protein dynamics in the SARS-CoV-2 viral particle – can generate hundreds of terabytes of data. The visualization task is huge, he said: “It’s humanly impossible to peek into biologically interesting events.”

Inserting an iterative, ML-driven loop between successive simulations and analytics has proved a promising means of refining model results, predicting folded, unfolded and misfolded states without human supervision. The method has to date improved the resolution and accuracy of atomic contacts within the protein structure, with a 50X speedup in sampling folded states. Using the approach, the team achieved a 10,000-fold improvement in sampling effectiveness compared with traditional molecular dynamics simulations running on specialized hardware.
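The pattern can be sketched as an outer loop that alternates short simulations with model-guided selection of restart points. The version below is a toy illustration: a random-walk “MD” and a distance-based novelty score stand in for the real simulations and the deep model that Stream-AI-MD trains online, so none of its components are the team’s actual workflow.

import numpy as np

rng = np.random.default_rng(0)

def short_md(start: np.ndarray, steps: int = 100) -> np.ndarray:
    # Stand-in MD engine: a random walk from a starting "conformation".
    return start + rng.normal(scale=0.01, size=(steps, start.size)).cumsum(axis=0)

def novelty_scores(frames: np.ndarray) -> np.ndarray:
    # Stand-in for the learned model: distance from the mean conformation.
    # Stream-AI-MD instead trains a deep model on the fly to flag outliers.
    return np.linalg.norm(frames - frames.mean(axis=0), axis=1)

start = np.zeros(30)                      # toy 30-dimensional conformation
for iteration in range(5):
    frames = short_md(start)              # run a short simulation segment
    scores = novelty_scores(frames)       # retrained each round in the real workflow
    start = frames[scores.argmax()]       # restart from the most novel state
    print(f"iter {iteration}: restart score {scores.max():.3f}")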

Atomistic Machine Learning Potentials on Neocortex

“We want to model what we call a potential energy surface, and we want to do it at quantum accuracy,” said Keith Phuthi, a PhD student working with Matthew Guttenberg in Venkat Viswanathan’s group at Carnegie Mellon University. Traditional “density functional theory cannot scale to [the] hundreds to thousands of atoms” needed in many problems in physics, chemistry and materials science. PSC’s Neocortex offered a route beyond that limitation, he added.

The empirical potentials method provides a much simpler analytic form that reduces the cost of the computation in terms of steps per atom. But it is much more approximate and often doesn’t capture details in a molecule’s photoelectronic properties that are required for certain applications. Machine learning potentials offer a bridge between the two methods, offering a better balance of accuracy with computational cost. But current GPU systems limit the data that can be used to train a model, with poor scaling to boot.

By computing invariant atomic features on its Superdome Flex server and feeding the data into a CS-1 for prediction, Neocortex enabled Phuthi to model the potential energy of each atom in a given compound with a neural network, summing the per-atom energies to obtain the potential for the molecule.
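That summation scheme follows the general shape of per-atom neural network potentials, sketched below with toy numpy weights; the feature dimensions and network sizes are illustrative assumptions, not the group’s actual model.

import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)   # toy network weights
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def atomic_energy(features: np.ndarray) -> float:
    # One small network maps an atom's invariant features to its energy.
    hidden = np.tanh(features @ W1 + b1)
    return (hidden @ W2 + b2).item()

def molecular_energy(per_atom_features: np.ndarray) -> float:
    # Total potential = sum of per-atom energies, so cost scales with atom count.
    return sum(atomic_energy(f) for f in per_atom_features)

features = rng.normal(size=(50, 8))   # 50 atoms, 8 invariant features each
print(molecular_energy(features))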

“Our goal with the early programming [was] to get to this target where we work with much bigger datasets and much bigger molecules than are typically trained on,” and to determine how that scaleup affects accuracy, Phuthi said. The group has to date run batch sizes larger than 32,000 samples, compared with a memory-driven limit of about 200 on a GPU system. While the team hasn’t yet optimized parameters, the prediction accuracies are similar.

Physics-Informed Neural Networks (PINNs) for Navier-Stokes Equation

Khemraj Shukla, assistant professor in the CRUNCH group led by George Em Karniadakis of Brown University, described his use of the WSE technology in solving the Navier-Stokes equations for the motion of viscous fluids using neural networks.

“Most of these typical systems have very few high-dimensional data points, but very sparse selection,” he said. “In a conventional approach, it requires forward modeling to run many times to do the system identification (described by partial differential equations), whereas by using PINNs we can solve the forward and inverse problem with few data points in one shot.”

To date, Shukla has executed his code for lid-driven cavity flow at a Reynolds number (Re) of 100 on Neocortex, taking a total of 150 seconds. This low Re, signifying a fluid with high relative viscosity and a tendency toward laminar flow, represented a modest starting point for the computations. A similar computation on a V100 GPU took about 10 minutes. Future efforts will include larger Re, which represent lower-viscosity fluids with more complex, turbulent flow, and creating an application programming interface for integrating automatic differentiation into the computations.
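To give a flavor of the method, the sketch below trains a tiny PINN in PyTorch: a network u(x, t) fitted simultaneously to sparse data and to a PDE residual obtained by automatic differentiation. For brevity it uses the 1D viscous Burgers equation, a scalar cousin of Navier-Stokes, with placeholder data; it is not Shukla’s code or problem setup.

import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
nu = 0.01  # viscosity

def pde_residual(xt: torch.Tensor) -> torch.Tensor:
    # Burgers residual u_t + u*u_x - nu*u_xx, with all derivatives of the
    # network output taken by automatic differentiation. Columns of xt: (x, t).
    xt = xt.requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, :1], grads[:, 1:]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, :1]
    return u_t + u * u_x - nu * u_xx

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
xt_data = torch.rand(64, 2); u_data = torch.zeros(64, 1)  # placeholder measurements
xt_col = torch.rand(512, 2)                               # collocation points
for step in range(500):
    opt.zero_grad()
    # Total loss = data mismatch + physics (PDE residual) penalty.
    loss = ((net(xt_data) - u_data) ** 2).mean() + (pde_residual(xt_col) ** 2).mean()
    loss.backward()
    opt.step()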

Wafer-Scale Engines for More than AI

A very different application of WSE technology formed the basis of the final presentation in the panel. Instead of AI, Dirk Van Essendelft, PI of the AI/ML Enhanced CFD group at NETL, described using a CS-1, in cooperation with Cerebras, for direct physical simulation of phenomena such as astrophysical events.

“The principle of locality is important” for describing the physics of interacting objects, Van Essendelft said. “This principle holds true for almost all physical systems, outside of quantum entanglement.”

The lack of “spooky action at a distance” outside the quantum realm enables simulators to approximate answers by dividing a problem into a grid of discrete cells. Since each cell in the grid only interacts with its immediate neighbors, at one level the computation is simple. The problem arises when a scientist wants to achieve fine resolution, and the cells become numerous.

Ideally, Van Essendelft said, computing hardware would reproduce the 3D grid as a 3D matrix of processors, with each processor holding the description of the cell it represents and interacting with its immediate neighbors. Conventional distributed computing reproduces this ideal very poorly, with a limited number of processors, slow internal communication, non-localized memory and access to neighboring memory taking thousands of cycles.

The 2D grid of the CS-1’s processors offers a better mirror of the model’s physical grid. With each processor directly interacting with its four nearest neighbors, the local memory can hold field values for a column of cells in the 3D grid, offering access to local and neighbor memory in only a single cycle. Van Essendelft’s BiCGStab solver achieved near-linear scaling in calculating flow parameters in both a 370-cubed-cell and a 600-cubed-cell mesh.
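A rough picture of that mapping, using a toy 7-point Jacobi stencil in numpy in place of the actual BiCGStab kernels: each (i, j) position below corresponds to one WSE core holding a full z-column, so the x/y neighbor terms are the values that would arrive from the four adjacent cores, while the z terms stay in local memory. This illustrates the data layout only, not NETL’s solver.

import numpy as np

nx = ny = nz = 32
field = np.random.default_rng(2).random((nx, ny, nz))

def jacobi_step(u: np.ndarray) -> np.ndarray:
    # Each interior cell averages its six face neighbors. On the WSE, the
    # x/y terms come from the four adjacent cores in one cycle; the z terms
    # are local, because each core stores a whole column of the 3D grid.
    out = u.copy()
    out[1:-1, 1:-1, 1:-1] = (
        u[:-2, 1:-1, 1:-1] + u[2:, 1:-1, 1:-1] +   # +/- x: east/west cores
        u[1:-1, :-2, 1:-1] + u[1:-1, 2:, 1:-1] +   # +/- y: north/south cores
        u[1:-1, 1:-1, :-2] + u[1:-1, 1:-1, 2:]     # +/- z: local column
    ) / 6.0
    return out

for _ in range(10):
    field = jacobi_step(field)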

Footnote: The panelists plan to organize a Cerebras technologies user group to foster information exchange on this promising technology. If you are interested, send an email to [email protected].
