Cerebras Doubles AI Performance with Second-Gen 7nm Wafer Scale Engine

By Tiffany Trader

April 20, 2021

Nearly two years since its massive 1.2 trillion transistor Wafer Scale Engine chip debuted at Hot Chips, Cerebras Systems is announcing its second-generation technology (WSE-2), which it says packs twice the performance into the same 8″x8″ silicon footprint.

“We’re going bigger, faster and better in a more power efficient footprint,” Cerebras Founder and CTO Andrew Feldman told HPCwire ahead of today’s launch.

Cerebras datacenter

With 2.6 trillion transistors and 850,000 cores, the WSE-2 more than doubles the elements on the first-gen chip (1.2 trillion transistors, 400,000 cores). The new chip, made by TSMC on its 7nm node, delivers 40 GB of on-chip SRAM memory, 20 petabytes per second of memory bandwidth and 220 petabits per second of aggregate fabric bandwidth. Gen over gen, the WSE-2 provides about 2.3X gains on all major performance metrics, said Feldman.

Compared to the largest GPU, which has ~54 billion transistors, the WSE-2 is 2.55 trillion transistors larger. Further, Cerebras claims its new platform has 123 times more cores, 1,000 times more on-chip memory, more than 12,000 times the memory bandwidth and more than 45,000 times the fabric bandwidth of the leading GPU.
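As a quick sanity check on those figures, the gen-over-gen and GPU-gap numbers can be reproduced from the quantities quoted in the article (a back-of-the-envelope calculation, not vendor data):

```python
# Back-of-the-envelope comparison using only the figures quoted above.
wse1_transistors = 1.2e12   # first-gen Wafer Scale Engine
wse2_transistors = 2.6e12   # WSE-2
gpu_transistors = 54e9      # "largest GPU" cited in the article

wse1_cores = 400_000
wse2_cores = 850_000

transistor_gain = wse2_transistors / wse1_transistors  # ~2.17x
core_gain = wse2_cores / wse1_cores                    # 2.125x
gpu_gap_trillions = (wse2_transistors - gpu_transistors) / 1e12

print(f"Transistor gain, gen over gen: {transistor_gain:.2f}x")
print(f"Core gain, gen over gen:       {core_gain:.3f}x")
print(f"Transistor gap vs. GPU:        {gpu_gap_trillions:.2f} trillion")  # ~2.55
```

The raw ratios land a little above 2X, consistent with the roughly 2.3X figure Feldman cites across the major metrics.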

Both Cerebras’ first and second generation chips are created by removing the largest possible square from a 300 mm wafer to create 46,000 square millimeter chips roughly the size of a dinner plate. An array of repeated identical tiles (84 of them) is built into the wafer, enabling redundancy.

To drive its new engine, Cerebras designed and built its next-generation system, the CS-2, which it bills as the industry’s fastest AI supercomputer. Like the original CS-1, the CS-2 employs internal water cooling in a 15U rack enclosure with 12 lanes of 100 Gigabit Ethernet.

The new system has a max power draw of 23 kW, up from 20 kW max for the original chassis. “We tried to stay in the original power envelope, and made some changes in the system to take full advantage of the power envelope,” said Feldman.

Feldman said that the last two years have taught him how valuable it is for a system to be physically easy to deploy. The “CS” systems weigh approximately 500 lbs, about the same as fifteen 1U servers, but deploy in just 15 minutes, he said, adding that cabling projects on a typical cluster can take weeks.

Looking further back to when Cerebras was still designing its first-generation product and charting its go-to-market strategy, Feldman said that he originally underestimated the size of the market due to how quickly the space is moving.

“In my career, I’ve always misestimated on the too-big side. I’ve always assumed the market was going to be bigger than it is,” he shared. “In 2015, I estimated the market would be smaller than it is, underestimating the demand for AI and the rate of innovation.

Cerebras CS-2

“We’re selling a lot of systems to do BERT. BERT didn’t exist in the first half of 2018, right? That’s a quick-moving market when something that didn’t exist until Q3 2018 is [now] the bulk of the business. And things that are brand new, like graph neural networks, are piquing people’s interest and are top of everybody’s mind. The market is moving unbelievably quickly.”

Making the leap to wafer-scale

As Feldman describes it, codes that have been optimized for CS-1’s 400,000 cores will scale to leverage CS-2’s 850,000 cores without any modification. Further, he attests that GPU codes are easy to port to the Cerebras platform. “We can take as input any TensorFlow or PyTorch model designed for a GPU. You define your model and write it in TensorFlow, that’s your model function. You define your parameters, that’s your input function. All you have to do is take your TensorFlow code and type one thing: ‘est = CerebrasEstimator.’”

CerebrasEstimator

“That’s how you take a model that was written for a GPU and run it on our machine,” he said. “Each layer of your neural network is converted into a region of compute. Then we configure a circuit through it, begin streaming your data, and we send out the answers.” (See figure below.)
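The workflow Feldman describes maps onto TensorFlow’s Estimator pattern: a model function, an input function, and an estimator object that gets swapped out. A minimal sketch of the idea, using a hypothetical stub in place of the real `CerebrasEstimator` (which ships with Cerebras’ software stack; its actual signature and behavior may differ):

```python
# Illustrative sketch of the one-line estimator swap Feldman describes.
# This CerebrasEstimator is a hypothetical stand-in, not the real class.

def input_fn():
    """Input function: yields (features, labels) pairs, as in tf.estimator."""
    return [([0.1, 0.2], 0), ([0.3, 0.4], 1)]

def model_fn(features, labels):
    """Model function: would define the network; placeholder here."""
    return {"loss": 0.0, "predictions": [0] * len(features)}

class CerebrasEstimator:
    """Stand-in exposing the same surface as tf.estimator.Estimator,
    but (conceptually) compiling each layer to a region of the wafer."""

    def __init__(self, model_fn):
        self.model_fn = model_fn

    def train(self, input_fn, steps=1):
        # In the real system, data would stream through the configured
        # circuit on the wafer; here we just invoke the model function.
        for _ in range(steps):
            for features, labels in input_fn():
                self.model_fn([features], [labels])
        return self

# The swap: where GPU code would build tf.estimator.Estimator(model_fn),
# the Cerebras port builds the Cerebras estimator instead.
est = CerebrasEstimator(model_fn=model_fn)
est.train(input_fn, steps=1)
```

The point of the pattern is that the model and input functions are untouched; only the estimator construction changes.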

The CTO further claimed it is easier to go from a model written for one GPU to a Cerebras CS-1, than it is to go from a model written for one GPU to 20 GPUs.

Primarily dedicated to AI computing, the Cerebras engine is also being applied to HPC workloads. At last year’s Supercomputing Conference, researchers from Cerebras and NETL revealed how the CS-1 was being used to perform fast stencil-code computation (a CFD workload), demonstrating a speedup of 200X over the largest existing supercomputers.

How big is too big?

Data Streaming on WSE. Source: Cerebras.

With a chip this size, the question arises: can users fill up all that silicon real estate? Are today’s models able to take advantage of an 850,000-core machine? Feldman said there’s no question the demand is there for the larger Cerebras machine, and he sees smaller platforms (including GPUs) as being primarily for entry-level use cases.

“If you’re happy on one GPU, then we’re not the machine for you,” he said. “But we think that is a small part of the market. As soon as you do work on one GPU, you want more, you want to write bigger models. When you use AI to further the mission of your organization – those are our customers, not the hobbyist.”

Users who need a smaller slice of the Cerebras engine may still be in luck, however. Cerebras plans to announce a cloud offering down the road, according to Feldman.

Some big name wins

Cerebras has racked up a number of key deployments over the last two years, including cornerstone wins with the U.S. Department of Energy, which has CS-1 installations at Argonne National Laboratory and Lawrence Livermore National Laboratory. CS-1 systems are also in place at Pittsburgh Supercomputer Center, EPCC and GlaxoSmithKline, and Cerebras says it has customers in the heavy manufacturing, pharma, biotech, and the military and intelligence sectors.

CS-2 systems will begin shipping in the third quarter of this year, according to Cerebras, and current customers GlaxoSmithKline and Argonne National Lab are expected to be among the first to take delivery of the upgraded machines.

“At GSK, we are pioneering the use of AI in drug discovery and design,” said Kim Branson, executive vice president of AI, GlaxoSmithKline. “We have been early adopters of the Cerebras technology and have found extraordinary speedups over our legacy infrastructure. We are excited to receive delivery of our CS-2.”

“As an early customer of Cerebras solutions, we have experienced performance gains that have greatly accelerated our scientific and medical AI research,” said Rick Stevens, Argonne National Laboratory associate laboratory director for computing, environment and life sciences, in a statement. “The CS-1 allowed us to reduce the experiment turnaround time on our cancer prediction models by 300X over initial estimates, ultimately enabling us to explore questions that previously would have taken years, in mere months. We look forward to seeing what the CS-2 will be able to do with more than double that performance.”
