San Diego Supercomputer Center Opens ‘Expanse’ to Industry Users

By Tiffany Trader

April 15, 2021

When the San Diego Supercomputer Center (SDSC) at the University of California San Diego was getting ready to deploy its flagship Expanse supercomputer for the large research community it supports, it also sought to optimize access for its industrial end-user program. Although the new Dell system is funded by the National Science Foundation primarily to serve academic researchers, SDSC came up with an innovative solution to provide cycles to its industry user community: a purpose-built, dedicated Expanse rack, delivered as a service via Core Scientific’s Plexus software stack.

“Exposing SDSC’s Expanse supercomputer platform via Core Scientific’s Plexus software stack provides customers with a consumption-based HPC model that not only solves for on-premise infrastructure, but also has the ability to run HPC workloads in supercomputer centers as well as in any of the four major public cloud providers — all from a single pane of glass,” according to Bellevue, Wash.-based Core Scientific, which builds software solutions for HPC, artificial intelligence and blockchain applications.

SDSC’s Expanse supercomputer entered full production service in December 2020. Built by Dell, it consists of ~800 compute nodes based on AMD’s 64-core Epyc Rome processors, with a 12-petabyte parallel file system and HDR InfiniBand. The system is organized into 13 SDSC Scalable Compute Units (SSCUs), one SSCU per rack, each comprising 56 standard nodes and four Intel-based, Nvidia V100-powered GPU nodes, connected with 100 Gb/s HDR InfiniBand. (Additional system spec details at end.)

Expanse is the successor to Comet, which will be decommissioned this year. And like Comet, Expanse serves the so-called long tail of science users within the NSF community that have wide-ranging and diverse workload requirements.

“The new system brings a number of innovations over Comet, including composable systems and portal-based access for scientific workflow support; one of the key features of Expanse is that it’s built on the scalable unit concept,” Ron Hawkins, director of industry relations at SDSC, told HPCwire.

The new Expanse supercomputer at the San Diego Supercomputer Center on the University of California San Diego campus. Image: Owen Stanley, SDSC/UC San Diego

The scalable unit design of Expanse naturally lent itself to SDSC’s industry program, Hawkins said.

Implementing this design at the rack level makes it simple to bring in additional units as needed, Hawkins explained. With funding from UCSD, the supercomputer center added a dedicated, purpose-built SSCU to serve its industrial program. Because this additional scalable unit is financed by the university rather than the NSF, the center can operate it 100 percent on behalf of industrial collaborators, with the option to allocate idle capacity to SDSC users, UC San Diego campus researchers, or other science users and collaborators.

To transform this traditional on-prem supercomputer into a private cloud resource, SDSC turned to Core Scientific, whose Plexus software stack gives SDSC’s industry customers access to the infrastructure. As a portal to the SDSC resource, Plexus serves a similar function and purpose for industry users as the NSF XSEDE interface does for academic researchers.

Core Scientific’s Plexus portal showing HPC applications. Source: Core Scientific.

SDSC’s implementation of Core Scientific’s Plexus portal supports multi-tenancy as well as on-demand, consumption-based pricing. “We can allocate any size job from a single core up to the full capacity of the SSCU,” said Ian Ferreira, Core Scientific’s chief product officer of artificial intelligence.
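In scheduler terms, that range is simply a resource request. As a minimal sketch, assuming Slurm (one of the stacks the portal supports, noted below) and a hypothetical partition name rather than SDSC’s actual configuration, the difference between the smallest and largest allocation comes down to a few batch directives:

    #!/bin/bash
    # Sketch of a consumption-based request under Slurm; the partition
    # name and application binary are hypothetical placeholders, not
    # SDSC's actual configuration.
    #SBATCH --partition=industry   # hypothetical industry-SSCU partition
    #SBATCH --nodes=1
    #SBATCH --ntasks=1             # a single core, the smallest allocation
    #SBATCH --time=01:00:00
    #
    # Scaling the same request up to the full SSCU would instead use:
    #   #SBATCH --nodes=56 --ntasks-per-node=128   (7,168 cores)

    srun ./my_model --input case.dat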

Hawkins, who coordinates SDSC’s Industry Partners Program, said he expects a wide variety of user workloads. “With the high core count per node (128 cores), we expect that many users will have jobs that fit within a single node,” he said. “In some applications, such as genome analysis, users may run multiple independent analyses on multiple nodes (or via ‘packing’ jobs on a single node) for high throughput computing. We will have to gain some operational experience to understand what the typical job profile will be.”
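One common way to realize the “packing” Hawkins describes is a job array whose tasks each claim a slice of a node: on a shared 128-core node, sixteen 8-core analyses fill the machine. A sketch, again with hypothetical partition, tool and file names:

    #!/bin/bash
    # High-throughput "packing" sketch: 16 independent analyses of
    # 8 cores each (16 x 8 = 128 cores, one full Expanse-class node).
    # Whether array tasks actually share a node depends on the
    # partition permitting node sharing.
    #SBATCH --partition=industry   # hypothetical shared partition
    #SBATCH --array=1-16           # 16 independent analyses
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=8
    #SBATCH --time=04:00:00

    srun ./analyze_genome --sample sample_${SLURM_ARRAY_TASK_ID}.fastq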

The Expanse system is well suited to both traditional HPC and data science workloads, Hawkins told HPCwire, and the Plexus portal supports both HPC stacks (Singularity, Slurm, LSF, PBS) and AI stacks (Docker, Kubernetes). “Scientists get a no-compromise environment to run their models as the lines between traditional HPC and data science/AI continue to blur,” Hawkins added.
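In practice, that blurring often means a Docker-built AI image running on the HPC side through Singularity inside a Slurm job. A minimal sketch; the image and training script are illustrative placeholders, not anything SDSC has published:

    # Convert a public Docker image to a Singularity SIF file once;
    # then run it inside a Slurm allocation. train.py is a placeholder.
    singularity pull pytorch.sif docker://pytorch/pytorch:latest
    srun singularity exec --nv pytorch.sif python train.py
    # --nv exposes the node's Nvidia GPUs to the container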

Ron Hawkins

SDSC runs a long-standing industrial program with strong ties to San Diego’s biosciences community, from large pharmaceutical companies to genomics startups. While the majority of program partners come from the life sciences, SDSC also works with aerospace, automotive, oil and gas, and engineering groups, as well as other companies doing commercial research. “They need the HPC resources, but the industrial program is really aimed at establishing collaborations where we can leverage each other’s expertise,” said Hawkins.

“Core Scientific is our primary partner for helping us both attract new industrial users and serving the resource to those users via the Plexus platform with that single pane of glass,” he said. “We’ve been tracking that kind of core technology that’s in the Plexus stack now for a few years and we’re eager to put it into practice.”

As for wider potential for the Plexus portal to support scientific users, Hawkins said: “As we get this up and running and provide exposure to our user base, they’ll have the opportunity to take a look and see if it’s a fit for them. The additional scalable unit is focused primarily on our industrial users, but it’s open as well to higher-ed and science users that would be outside the NSF sphere, so we can work with other nonprofit research institutes, with other universities and foundations as well, so it could benefit the science community in that regard.”

For its part, Core Scientific sees potential in the academic research computing sector. “We’ve reached out to the NSF to say, what would the world look like if we could create a reserve of high performance computing, and aggregate all of that in the U.S., for educational reasons, not necessarily commercial,” said Ferreira. “We welcome the opportunity to create a plexus.org that is free for NSF researchers.”

“[It’s] like a strategic oil reserve, but an HPC reserve that can be deployed when we have the next COVID-type situation,” said Ferreira, describing what sounds a lot like a plan that’s already in motion: the National Strategic Computing Reserve (see our recent coverage).

Of course, HPC resources are too precious to be literally reserved (as in sitting idle), but they are subject to reprioritization; that’s exactly what happened on a grand scale in response to the COVID-19 pandemic, and it’s what happens on a smaller scale (usually satisfied by “discretionary allocations”) for all the usual disasters: seasonal storm, flood and flu modeling, for example. Cloud/HPC cycle brokering itself is not new. RStor, R-Systems, Parallel Works, Rescale, Nimbis Services and UberCloud all play in this space, and Cycle Computing briefly offered such a service in its early days, before being acquired by Microsoft.

Core Scientific says its Plexus AI and HPC platform is used by a number of major companies in industries including healthcare, manufacturing and telecommunications. The company is led by Kevin Turner, former COO of Microsoft (and previously CEO of Sam’s Club and CIO of Walmart). Core Scientific recently achieved AWS High Performance Computing Competency status. The company is also working with Hewlett Packard Enterprise (HPE) to deliver its software solutions in the new HPE GreenLake cloud services for HPC.

The SDSC Expanse Plexus portal is open and ready for use by industrial research and engineering users from across the U.S.

 

The Plexus dashboard. Source: Core Scientific


Expanse architecture: The Dell system is organized into 13 SDSC Scalable Compute Units (SSCUs), each comprising 56 standard CPU nodes and four GPU nodes, connected with 100 Gb/s HDR InfiniBand. Each standard CPU node has dual AMD Epyc Rome processors (64 cores each), 256GB of main memory, a 1.6TB NVMe drive, and an HDR 100 Gb/s interconnect. Each GPU node has four Nvidia V100 GPUs (32GB of GPU memory each) with NVLink, dual Intel Xeon 6248 host processors (20 cores each), 384GB of host memory, a 1.6TB NVMe drive, and an HDR 100 interconnect. Each SSCU thus totals 7,168 compute cores (not counting GPU-node host cores) and 16 V100 GPUs. Storage comprises a 12PB “Performance Storage” system based on the Lustre parallel filesystem and a 7PB “Object Storage” system based on the Ceph storage platform.
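Those per-SSCU totals follow directly from the node counts, and (assuming, as stated, that all 13 SSCUs are identical) so do the system-wide figures; a quick cross-check:

    # Cross-check of the published totals from the per-node figures.
    sscus=13; std_nodes=56; gpu_nodes=4; gpus_per_node=4
    cores_per_node=128                              # dual 64-core Epycs
    echo $(( std_nodes * cores_per_node ))          # 7168 CPU cores per SSCU
    echo $(( gpu_nodes * gpus_per_node ))           # 16 V100 GPUs per SSCU
    echo $(( sscus * std_nodes * cores_per_node ))  # 93184 cores system-wide
    echo $(( sscus * (std_nodes + gpu_nodes) ))     # 780 nodes, the "~800" above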

 
