EPCC, the supercomputing center at the University of Edinburgh, today announced plans to deploy a combination HPE/Cerebras system in its growing Edinburgh International Data Facility (EIDF) for academic researchers and data scientists in the public and private sectors. The latest win is yet another sign of traction for Cerebras, one of a number of young AI-specialized chip and systems makers with products in the market.
Last summer EPCC had announced plans for its HPE system (HPE Apollo Systems and HPE Superdome Flex Servers). Incorporating a Cerebras CS-1 system will help EPCC fulfill its mission to explore new computational technologies and deliver data-intensive computing capabilities, EPCC director Mark Parsons told HPCwire.
“I took the decision to invest in a CS-1 because I needed to invest in technology for large-scale AI challenges. Over the past 2 years we’ve been building a large data infrastructure for the EIDF. It’s a large private cloud, and at times I know the users will want to access large numbers of GPUs. I already have a reasonably sized resource in one of my HPC systems (150 NVIDIA V100s). I therefore decided to do something different. Last year HPE won the procurement competition to provide the IT hardware we are using to build the EIDF and part of that procurement allowed me to explore new and emerging technologies – the Cerebras CS-1 fitted the bill perfectly.”

Many are watching how the new AI-systems makers fare. Cerebras, whose wafer-scale chip is enormous – 400,000 cores, 1.2 trillion transistors – has several operating and scheduled deployments worth tracking. Argonne National Laboratory, a very early Cerebras user (2019), has been aggressively testing a variety of new AI chips and put its CS-1 system to work on Covid-19 research this year. Last June, the Pittsburgh Supercomputing Center announced plans to build a new system called Neocortex, which, like EPCC’s, pairs an HPE Superdome Flex server with two Cerebras CS-1 systems.
Talking about the PSC deployment, PSC chief scientist Nick Nystrom said he saw the opportunity to bring together the best of two worlds – “the extreme deep learning capability of the Cerebras CS-1, and the extreme shared memory of the HPE Superdome Flex. With shared memory, you don’t have to break your problem across many nodes. You don’t have to write MPI, and you don’t have to distribute your data structures. It’s just all there at high speed.”
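To make Nystrom’s point concrete, here is a minimal sketch (in Python/NumPy, with a hypothetical file name and array shape) of what “it’s just all there” means in practice: on a sufficiently large shared-memory host the whole dataset is addressable at once, whereas a distributed-memory version of the same step would need explicit decomposition and MPI communication.

```python
# A minimal sketch of the contrast Nystrom describes. Illustrative only:
# the file name and array shape are hypothetical.
import numpy as np

# On a large shared-memory host, a dataset that would overflow any single
# cluster node can live in one address space, so a whole-dataset operation
# is a single array expression -- no domain decomposition required.
features = np.memmap("features.npy", dtype=np.float32,
                     mode="r", shape=(500_000_000, 64))

# On a distributed-memory cluster, this normalization would instead require
# scattering chunks across ranks, computing partial statistics, and reducing
# them with MPI collectives before any rank could normalize its slice.
normalized = (features - features.mean(axis=0)) / features.std(axis=0)
```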
In industry, big pharma GlaxoSmithKline (GSK) is also working with a CS-1. (For a glimpse into early CS-1 deployments, see HPCwire article, LLNL, ANL and GSK Provide Early Glimpse into Cerebras AI System Performance.)
At EPCC, the CS-1 will be hosted by an HPE Superdome Flex server and connected to it via 12 x 100 GbE links. Data storage on the Superdome Flex is provided by a 20 PB E1000 Lustre filesystem.
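The link count lines up with the aggregate bandwidth figure EPCC quotes elsewhere in its announcement; a quick back-of-the-envelope check (raw line rates only, ignoring protocol overhead):

```python
# Aggregate raw bandwidth of the 12 x 100 GbE host-to-CS-1 connection.
# Line rate only; delivered throughput will be lower after protocol overhead.
links = 12
link_gbps = 100                  # each link is 100 GbE
total_gbps = links * link_gbps   # 1,200 Gb/s = 1.2 Tb/s
total_gbytes = total_gbps / 8    # 150 GB/s
print(f"{total_gbps / 1000:.1f} Tb/s ≈ {total_gbytes:.0f} GB/s")
```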
Parsons said, “We are also adding an NVMe storage layer to provide high-performance storage to support the streaming of data to the CS-1. How all of this will work will no doubt need some exploration! In terms of applications, an immediate target is support for natural language processing research, in which the School of Informatics at the University of Edinburgh is recognized as a global leader. However, we also have planned projects to support some of our Covid-19 research with our College of Medicine and Veterinary Medicine, and work on text mining of ancient texts with researchers from our College of Arts, Humanities and Social Sciences.”
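How that streaming layer will be driven is, as Parsons says, still to be explored. As a rough illustration of the general pattern, here is a minimal sketch assuming training batches pre-staged as NumPy files on an NVMe mount; the paths, file naming, and hand-off step are hypothetical placeholders, not the Cerebras software stack.

```python
import pathlib
import numpy as np

STAGING = pathlib.Path("/nvme/staging")   # hypothetical NVMe staging area

def stream_batches(pattern: str = "batch_*.npy"):
    """Yield pre-staged training batches using sequential NVMe reads."""
    for path in sorted(STAGING.glob(pattern)):
        yield np.load(path)

for batch in stream_batches():
    # Hand each batch to whatever host-side send path feeds the accelerator;
    # the real mechanism is the vendor's training stack, omitted here.
    pass
```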
The current preference seems to be to use CS-1 systems as separate accelerators for data-intensive AI tasks, with an associated host system handling more traditional tasks. Parsons added, “We will also be exploring the CS-1 as a more general computing device but it’s very early days in that discussion. I do think this will be a really exciting adventure – we’re expecting delivery and installation in March 2021.”
As noted in the EPCC official announcement, the CS-1 is built around “the world’s largest processor, the WSE, which is 56 times larger, has 54 times more cores, 450 times more on-chip memory, 5,788 times more memory bandwidth and 20,833 times more fabric bandwidth than the leading graphics processing unit (GPU) competitor.” Marketing bravado aside, the WSE and system are impressive, hence the interest from early users.
Talking about Cerebras’s single (albeit giant) chip approach, Steve Conway, senior advisor, HPC market dynamics, Hyperion Research, said, “There’s a lot of interest in Cerebras by HPC buyers. Having so much computing power on a single, tightly architected chip will boost performance, because performance often drops off drastically when a job has to run across multiple chips. On a smaller scale, that’s why designing a CPU and GPU on a single die promises better performance than designing them as separate devices.
“The Cerebras approach should also use less energy. Moving the results of a calculation can expend as much as 100 times as much energy as performing the calculation, so you don’t want to move data any more or farther than you have to. The Cerebras design reduces the distance data has to travel and should make necessary data movement more efficient. The performance and data movement issues are exacerbated by AI workflows, which tend to be very data intensive.”
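Conway’s 100x figure translates into a stark energy budget. The per-operation costs below are illustrative assumptions for the sake of arithmetic, not measurements of any particular chip:

```python
# Back-of-the-envelope comparison: energy to compute vs. energy to move data.
FLOP_ENERGY_PJ = 1.0      # assumed cost of one floating-point op (picojoules)
OFFCHIP_MOVE_PJ = 100.0   # assumed cost of moving one operand off-chip

ops = 1e12                # a trillion operations
compute_joules = ops * FLOP_ENERGY_PJ / 1e12     # 1.0 J
movement_joules = ops * OFFCHIP_MOVE_PJ / 1e12   # 100.0 J if every operand moves

print(f"compute: {compute_joules:.1f} J, movement: {movement_joules:.1f} J "
      f"({movement_joules / compute_joules:.0f}x)")
```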
EPCC has been building out its EIDF with ambitious plans to serve academic and regional commercial users. Here’s a brief description excerpted from the EIDF site:
“Most users of the EIDF work in the Data Service Cloud, which offers a rich set of data science and analytics tools: from browser-based notebooks to full desktop environments.
“The Data Service Cloud sits on top of an Analytics-Ready Data Layer (ARD Layer), where EIDF data can be shared and re-used for science and innovation. This ARD Layer will grow over time as more and more data are collected in the EIDF. Innovators and researchers looking for data can search and browse through the Data Catalogue to discover just what analytics-ready data EIDF has, and how they can get access.
“EIDF data managers work with data depositors at the Data Ingest Gateway, ensuring that incoming data are safely stored in the Data Lake Archive Layer, and well-described in the Data Catalogue. Data in the Data Lake are stored for the long term using best practices in digital preservation.
“EIDF data wranglers work in the Data Preparation Layer, often in collaboration with data depositors and others, to turn archived data from the Data Lake into analytics-ready data products in the ARD Layer. They are then ready for data innovators to create new, exciting datasets that can be stored and shared all over again.”
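The flow those paragraphs describe (ingest gateway → Data Lake archive → Data Preparation → Analytics-Ready Data, indexed throughout by the Data Catalogue) can be summarized schematically. The classes and methods below are hypothetical illustrations of that flow, not an EIDF API.

```python
from dataclasses import dataclass, field

@dataclass
class Dataset:
    name: str
    description: str
    analytics_ready: bool = False

@dataclass
class EIDF:
    data_lake: list = field(default_factory=list)   # long-term archive
    ard_layer: list = field(default_factory=list)   # analytics-ready data
    catalogue: dict = field(default_factory=dict)   # discovery metadata

    def ingest(self, ds: Dataset) -> None:
        """Data Ingest Gateway: archive incoming data and describe it."""
        self.data_lake.append(ds)
        self.catalogue[ds.name] = ds.description

    def prepare(self, name: str) -> None:
        """Data Preparation Layer: promote archived data to the ARD Layer."""
        for ds in self.data_lake:
            if ds.name == name and not ds.analytics_ready:
                ds.analytics_ready = True
                self.ard_layer.append(ds)
```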
EPCC says it will use the HPE Superdome Flex Server as a high-performance front-end storage and pre-processing solution for the CS-1 AI supercomputer. This will enable users to employ large datasets and apply application-specific pre- and post-processing of data for AI model training and inference on the CS-1, allowing the CS-1’s WSE to operate at full bandwidth, reported EPCC. The HPE Superdome Flex Server will be robustly provisioned with 18 terabytes of memory, 102 terabytes of high-performance flash storage, 24 Intel Xeon CPUs, and 12 network interface cards to deliver 1.2 terabits per second of data bandwidth to the Cerebras CS-1.
“HPE has a long-standing collaboration with EPCC to develop solutions to some of the most challenging computational problems, and we are excited to be working at this time to provide a highly productive AI platform,” said Mike Woodacre, CTO of HPC & MCS at HPE. “By tightly coupling a Cerebras Wafer Scale Engine with an HPE Superdome Flex Server in-memory host, we are aiming to enable researchers to tackle complex AI workloads at unprecedented rates.”