San Diego Supercomputer Center Opens ‘Expanse’ to Industry Users

By Tiffany Trader

April 15, 2021

When San Diego Supercomputer Center (SDSC) at the University of California San Diego was getting ready to deploy its flagship Expanse supercomputer for the large research community it supports, it also sought to optimize access for its industrial end user program. Although the new Dell system is funded by the National Science Foundation to primarily serve academic researchers, SDSC came up with an innovative solution to provide cycles to its industry user community through the deployment of a purpose-built, dedicated Expanse rack, delivered as a service via Core Scientific’s Plexus software stack.

“Exposing SDSC’s Expanse supercomputer platform via Core Scientific’s Plexus software stack provides customers with a consumption-based HPC model that not only solves for on-premise infrastructure, but also has the ability to run HPC workloads in supercomputer centers as well as in any of the four major public cloud providers — all from a single pane of glass,” according to Bellevue, Wash.-based Core Scientific, which builds software solutions for HPC, artificial intelligence and blockchain applications.

SDSC’s Expanse supercomputer entered full production service in December 2020. Built by Dell, it consists of ~800 AMD 64-core Epyc Rome-based compute nodes with a 12-petabyte parallel file system and HDR InfiniBand. The system is organized into 13 SDSC Scalable Compute Units (SSCUs) — one SSCU per rack — with each comprising 56 standard nodes and four Intel-based GPU nodes with Nvidia V100 GPUs, connected with 100 Gb/s HDR InfiniBand. (Additional system spec details at end.)

Expanse is the successor to Comet, which will be decommissioned this year. And like Comet, Expanse serves the so-called long tail of science users within the NSF community that have wide-ranging and diverse workload requirements.

“The new system brings a number of innovations over Comet, including composable systems and portal-based access for scientific workflow support. One of the key features of Expanse is that it’s built on the scalable unit concept,” Ron Hawkins, director of industry relations at SDSC, told HPCwire.

The new Expanse supercomputer at the San Diego Supercomputer Center on the University of California San Diego campus. Image: Owen Stanley, SDSC/UC San Diego

The scalable unit design of Expanse lent itself naturally to SDSC’s industry program, Hawkins said.

Implementing this design at the rack-level makes it simple to bring in additional units as needed, Hawkins explained. With funding from UCSD, the supercomputer center added a dedicated, purpose-built SSCU to serve its industrial program. Because the additional scalable unit is financed by the university, the center can operate it 100 percent on behalf of industrial collaborators with the option to allocate idle capacity to SDSC users, UC San Diego campus researchers, or other science users or collaborators.

To transform this traditional on-prem supercomputer into a private cloud resource, SDSC turned to Core Scientific and the company’s Plexus software stack, which allows SDSC’s industry customers to take advantage of the infrastructure. As a portal to the SDSC resource, Plexus serves much the same function for industry users that the NSF XSEDE interface serves for academic users.

Core Scientific’s Plexus portal showing HPC applications. Source: Core Scientific.

SDSC’s implementation of Core Scientific’s Plexus portal supports multi-tenancy, as well as on-demand / consumption-based pricing. “We can allocate any size job from a single core up to the full capacity of the SSCU,” said Ian Ferreira, Core Scientific’s chief product officer of artificial intelligence.

Hawkins, who coordinates SDSC’s Industry Partners Program, said he expects a wide variety of user workloads. “With the high core count per node (128 cores), we expect that many users will have jobs that fit within a single node,” he said. “In some applications, such as genome analysis, users may run multiple independent analyses on multiple nodes (or via ‘packing’ jobs on a single node) for high throughput computing. We will have to gain some operational experience to understand what the typical job profile will be.”
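The job-packing pattern Hawkins describes can be sketched as a Slurm batch script that subdivides one 128-core Expanse node among several independent analyses. This is a hypothetical illustration, not an SDSC-published script: the partition name, time limit, and the `analyze_sample.sh` helper and its input files are all assumptions for the example.

```shell
#!/bin/bash
# Hypothetical sketch: pack four independent 32-core analyses onto a
# single 128-core Expanse standard node (4 x 32 = 128 cores).
# Partition name, walltime, script name and inputs are illustrative only.
#SBATCH --job-name=packed-analyses
#SBATCH --partition=compute
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --cpus-per-task=32
#SBATCH --time=02:00:00

# Launch the four analyses concurrently on the same node, each pinned
# to its own 32-core slice, then wait for all of them to complete.
for i in 1 2 3 4; do
    srun --ntasks=1 --cpus-per-task=32 --exact \
         ./analyze_sample.sh "sample_${i}.fastq" &
done
wait
```

Running several single-node jobs this way trades scheduler overhead for throughput, which suits the many-small-jobs profile Hawkins anticipates.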

The Expanse system is well-suited to both traditional HPC and data science workloads, Hawkins told HPCwire, and the Plexus portal supports both HPC stacks (Singularity, Slurm, LSF, PBS) and AI stacks (Docker, Kubernetes). “Scientists get a no-compromise environment to run their models as the lines between traditional HPC and data science/AI continue to blur,” Hawkins added.

Ron Hawkins

SDSC runs a long-standing industrial program that has strong ties to San Diego’s biosciences community, from large pharmaceutical companies to genomics startups. While the majority of program partners come from life sciences, SDSC also works with aerospace, automotive, oil and gas and engineering groups, as well as other companies doing commercial research. “They need the HPC resources, but the industrial program is really aimed at establishing collaborations where we can leverage each other’s expertise,” said Hawkins.

“Core Scientific is our primary partner for helping us both attract new industrial users and serve the resource to those users via the Plexus platform with that single pane of glass,” he said. “We’ve been tracking that kind of core technology that’s in the Plexus stack now for a few years and we’re eager to put it into practice.”

As for wider potential for the Plexus portal to support scientific users, Hawkins said: “As we get this up and running and provide exposure to our user base, they’ll have the opportunity to take a look and see if it’s a fit for them. The additional scalable unit is focused primarily on our industrial users, but it’s open as well to higher ed and science users that would be outside the NSF sphere, so we can work with other nonprofit research institutes, universities and foundations as well, so it could benefit the science community in that regard.”

For its part, Core Scientific sees potential in the academic research computing sector. “We’ve reached out to the NSF to say, what would the world look like if we could create a reserve of high performance computing, and aggregate all of that in the U.S., for educational reasons, not necessarily commercial,” said Ferreira. “We welcome the opportunity to create a reserve that is free for NSF researchers.”

“[It’s] like a strategic oil reserve, but an HPC reserve that can be deployed when we have the next COVID-type situation,” said Ferreira, describing what sounds a lot like a plan that’s already in motion: the National Strategic Computing Reserve (see our recent coverage).

Of course, HPC resources are too precious to be literally reserved (as in waiting idle), but they are subject to reprioritization. That is exactly what happened in response to the COVID-19 pandemic on a grand scale, and it is what happens on a lesser scale (usually satisfied by “discretionary allocations”) for the usual disasters: seasonal storm, flood and flu modeling, for example. Cloud/HPC cycle brokering itself is not new. RStor, R-Systems, Parallel Works, Rescale, Nimbis Services and UberCloud all play in this space. Cycle Computing briefly offered such a service in its early days, before getting acquired by Microsoft.

Core Scientific says its Plexus AI and HPC platform is used by a number of major companies in industries including healthcare, manufacturing and telecommunications. The company is led by Kevin Turner, former COO of Microsoft (and previously CEO of Sam’s Club and CIO of Walmart). Core Scientific recently achieved AWS High Performance Computing Competency status. The company is also working with Hewlett Packard Enterprise (HPE) to deliver its software solutions in the new HPE GreenLake cloud services for HPC.

The SDSC Expanse Plexus portal is open and ready for use for industrial research and engineering users from across the U.S.


The Plexus dashboard. Source: Core Scientific

Expanse architecture: The Dell system is organized into 13 SDSC Scalable Compute Units (SSCUs), each comprising 56 standard CPU nodes and four GPU nodes, connected with 100 Gb/s HDR InfiniBand. Each standard CPU node has dual 64-core AMD Epyc Rome processors, 256GB of main memory, a 1.6TB NVMe drive, and an HDR 100 Gb/s interconnect. Each GPU node has four Nvidia V100 GPUs with 32GB GPU memory and NVLink, dual 20-core Intel Xeon 6248 host processors, 384GB host memory, a 1.6TB NVMe drive, and an HDR 100 interconnect. There is a total of 7,168 compute cores (not including GPU node host cores) and 16 V100 GPUs per SSCU. There is a 12PB “Performance Storage” system based on the Lustre parallel filesystem and a 7PB “Object Storage” system based on the Ceph storage platform.
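The per-SSCU totals follow directly from the node counts in the spec above; a quick arithmetic check:

```python
# Sanity-check the per-rack (SSCU) totals from the Expanse spec.
standard_nodes = 56
cores_per_standard_node = 2 * 64   # dual 64-core AMD Epyc Rome CPUs
gpu_nodes = 4
gpus_per_gpu_node = 4              # Nvidia V100s per GPU node

cpu_cores = standard_nodes * cores_per_standard_node
gpus = gpu_nodes * gpus_per_gpu_node

print(cpu_cores)  # 7168 compute cores per SSCU (GPU-node host cores excluded)
print(gpus)       # 16 V100 GPUs per SSCU
```

Across the 13 SSCUs, that works out to roughly 93,000 CPU cores and 208 V100 GPUs in the base system.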

