Cerebras Systems, maker of the world’s largest computer chip, has entered into a multi-year partnership with Argonne National Laboratory and Lawrence Livermore National Laboratory as part of a collaboration with the U.S. Department of Energy to advance deep learning for basic and applied science and medicine.
The DOE is Cerebras’ first announced customer win since the company emerged from stealth to reveal details of its Wafer-Scale Engine (WSE) AI chip at Hot Chips last month. The largest chip ever built, the WSE measures 46,225 square millimeters and contains more than 1.2 trillion transistors. Built on TSMC’s 16-nm process, the chip delivers 400,000 Sparse Linear Algebra (SLA) cores, optimized for the sparse tensor operations fundamental to neural network computation. The company’s stated goal is to build a deep learning supercomputer that provides 1,000x the performance of a GPU and supports all major deep learning frameworks without software changes.
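To see why sparsity matters for this workload, consider that neural-network layers (especially after ReLU activations) produce tensors full of zeros, and hardware that skips zero operands avoids wasted multiplies. The sketch below is a toy sparse matrix-vector product in plain Python — purely illustrative, and not Cerebras’ implementation or software stack:

```python
# Toy illustration of sparse linear algebra: store only the nonzero
# weights of a matrix and skip the zeros entirely during a
# matrix-vector product. Hypothetical example, not Cerebras code.

def sparse_matvec(rows, x):
    """rows: one list per matrix row, each holding (col_index, value)
    pairs for the nonzero entries only."""
    return [sum((v * x[j] for j, v in row), 0.0) for row in rows]

# A 3x4 weight matrix with 9 of 12 entries zero, kept as nonzeros only.
rows = [
    [(0, 2.0)],              # row 0: one nonzero
    [(1, -1.0), (3, 0.5)],   # row 1: two nonzeros
    [],                      # row 2: all zeros -- no work at all
]
x = [1.0, 2.0, 3.0, 4.0]

y = sparse_matvec(rows, x)  # 3 multiplies instead of the dense 12
```

A dense matrix-vector product over this matrix would perform 12 multiply-accumulates; the sparse version does 3. Scaled up to billion-parameter networks, skipping zeros is the kind of saving a sparsity-aware core design targets.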
“The opportunity to incorporate the largest and fastest AI chip ever—the Cerebras WSE—into our advanced computing infrastructure will enable us to dramatically accelerate our deep learning research in science, engineering and health,” said Rick Stevens, head of computing at Argonne National Laboratory, in a statement. “It will allow us to invent and test more algorithms, to more rapidly explore ideas, and to more quickly identify opportunities for scientific progress.”
“Integrating Cerebras technology into the Lawrence Livermore National Laboratory supercompute infrastructure will enable us to build a truly unique compute pipeline with massive computation, storage, and, thanks to the Wafer Scale Engine, dedicated AI processing,” said Bronis R. de Supinski, CTO of Livermore Computing at LLNL. “This unique opportunity for public-private partnership with a cutting-edge AI partner will help us meet our mission and push the boundaries of managing the increasingly complex and large data sets from which we have to make decisions.”
AI is a key part of the U.S. exascale strategy and factors prominently into exascale design plans. Argonne National Lab is working to field one of the United States’ first exascale-class systems in 2021, and Lawrence Livermore National Lab is slated to take delivery of an exascale machine (~1.5 exaflops peak) in 2023.
Cerebras indicated that ANL and LLNL are the first two DOE labs it is working with under a multi-year, multi-lab contract and that other labs will follow. We’re told that further details on the partnership, as well as on the full Cerebras AI system, will be announced at the Supercomputing conference (SC19).