San Diego Supercomputer Center Opens ‘Expanse’ to Industry Users

By Tiffany Trader

April 15, 2021

When the San Diego Supercomputer Center (SDSC) at the University of California San Diego was getting ready to deploy its flagship Expanse supercomputer for the large research community it supports, it also sought to optimize access for its industrial end user program. Although the new Dell system is funded by the National Science Foundation primarily to serve academic researchers, SDSC came up with an innovative solution to provide cycles to its industry user community: a purpose-built, dedicated Expanse rack, delivered as a service via Core Scientific’s Plexus software stack.

“Exposing SDSC’s Expanse supercomputer platform via Core Scientific’s Plexus software stack provides customers with a consumption-based HPC model that not only solves for on-premise infrastructure, but also has the ability to run HPC workloads in supercomputer centers as well as in any of the four major public cloud providers — all from a single pane of glass,” according to Bellevue, Wash.-based Core Scientific, which builds software solutions for HPC, artificial intelligence and blockchain applications.

SDSC’s Expanse supercomputer entered full production service in December 2020. Built by Dell, it consists of ~800 compute nodes based on 64-core AMD Epyc Rome processors, with a 12-petabyte parallel file system and HDR InfiniBand. The system is organized into 13 SDSC Scalable Compute Units (SSCUs), one SSCU per rack, each comprising 56 standard nodes and four Nvidia V100-powered GPU nodes with Intel host processors, connected with 100 Gb/s HDR InfiniBand. (Additional system spec details at end.)

Expanse is the successor to Comet, which will be decommissioned this year. Like Comet, Expanse serves the so-called long tail of science: users within the NSF community with wide-ranging and diverse workload requirements.

“The new system brings a number of innovations over Comet, including composable systems and portal-based access for scientific workflow support; one of the key features of Expanse is that it’s built on the scalable unit concept,” Ron Hawkins, director of industry relations at SDSC, told HPCwire.

The new Expanse supercomputer at the San Diego Supercomputer Center on the University of California San Diego campus. Image: Owen Stanley, SDSC/UC San Diego

The scalable unit design of Expanse lent itself naturally to SDSC’s industry program, Hawkins said.

Implementing this design at the rack level makes it simple to bring in additional units as needed, Hawkins explained. With funding from UCSD, the supercomputer center added a dedicated, purpose-built SSCU to serve its industrial program. Because the additional scalable unit is financed by the university, the center can operate it 100 percent on behalf of industrial collaborators, with the option to allocate idle capacity to SDSC users, UC San Diego campus researchers, or other science users or collaborators.

To transform this traditional on-prem supercomputer into a private cloud resource, SDSC turned to Core Scientific and the company’s Plexus software stack, which allows SDSC’s industry customers to take advantage of the infrastructure. As a portal to the SDSC resource, Plexus serves much the same function and purpose for industry users as the NSF XSEDE interface does for academic ones.

Core Scientific’s Plexus portal showing HPC applications. Source: Core Scientific.

SDSC’s implementation of Core Scientific’s Plexus portal supports multi-tenancy as well as on-demand, consumption-based pricing. “We can allocate any size job from a single core up to the full capacity of the SSCU,” said Ian Ferreira, Core Scientific’s chief product officer of artificial intelligence.

Hawkins, who coordinates SDSC’s Industry Partners Program, said he expects a wide variety of user workloads. “With the high core count per node (128 cores), we expect that many users will have jobs that fit within a single node,” he said. “In some applications, such as genome analysis, users may run multiple independent analyses on multiple nodes (or via ‘packing’ jobs on a single node) for high throughput computing. We will have to gain some operational experience to understand what the typical job profile will be.”
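To make the “packing” pattern concrete, here is a minimal sketch of running many independent analyses concurrently on one high-core-count node using only the Python standard library. It is a generic illustration rather than Expanse- or Plexus-specific code; the `analyze` function and the sample list are hypothetical stand-ins for a real per-sample workload such as a genome analysis.

```python
# Minimal sketch of "packing" independent analyses onto one high-core-count
# node (e.g., a 128-core Expanse standard node) for high-throughput computing.
# `analyze` is a hypothetical stand-in for a real per-sample analysis.
import os
from concurrent.futures import ProcessPoolExecutor

def analyze(sample: str) -> str:
    # Placeholder for a CPU-bound analysis of one input (e.g., one genome).
    result = sum(ord(c) for c in sample) % 97  # dummy work
    return f"{sample}: {result}"

def main() -> None:
    samples = [f"sample_{i:04d}" for i in range(512)]  # hypothetical inputs
    workers = os.cpu_count() or 1  # would report 128 on an Expanse standard node
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for line in pool.map(analyze, samples):
            print(line)

if __name__ == "__main__":
    main()
```

Under a batch scheduler such as Slurm, the same effect comes from requesting a full node and launching the tasks within a single job, rather than submitting hundreds of one-core jobs.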

The Expanse system is well-suited to both traditional HPC and data science workloads, Hawkins told HPCwire, and the Plexus portal supports both HPC stacks (Singularity, Slurm, LSF, PBS) and AI stacks (Docker, Kubernetes). “Scientists get a no-compromise environment to run their models as the lines between traditional HPC and data science/AI continue to blur,” Hawkins added.
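The practical upshot of supporting both stacks is that a single container image can serve either world. The sketch below assumes both runtimes are installed on the host; the image and command are generic placeholders, not Plexus specifics. It runs the same image under Singularity, as a batch system would, and under Docker, as Kubernetes would:

```python
# Illustration: one container image, two runtimes. Assumes Singularity and
# Docker are installed; image and command are generic placeholders.
import subprocess

IMAGE = "ubuntu:22.04"                 # placeholder image
COMMAND = ["cat", "/etc/os-release"]   # stand-in workload

# HPC stack: Singularity pulls the same OCI image and runs it without a
# root daemon, which is why it is favored on shared batch systems.
subprocess.run(["singularity", "exec", f"docker://{IMAGE}", *COMMAND], check=True)

# AI stack: the identical image under Docker, as Kubernetes would launch it.
subprocess.run(["docker", "run", "--rm", IMAGE, *COMMAND], check=True)
```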

Ron Hawkins

SDSC runs a long-standing industrial program with strong ties to San Diego’s biosciences community, from large pharmaceutical companies to genomics startups. While the majority of program partners come from the life sciences, SDSC also works with aerospace, automotive, oil and gas and engineering groups, as well as other companies doing commercial research. “They need the HPC resources, but the industrial program is really aimed at establishing collaborations where we can leverage each other’s expertise,” said Hawkins.

“Core Scientific is our primary partner for helping us both attract new industrial users and serving the resource to those users via the Plexus platform with that single pane of glass,” he said. “We’ve been tracking that kind of core technology that’s in the Plexus stack now for a few years and we’re eager to put it into practice.”

As for the wider potential of the Plexus portal to support scientific users, Hawkins said: “As we get this up and running and provide exposure to our user base, they’ll have the opportunity to take a look and see if it’s a fit for them. The additional scalable unit is focused primarily on our industrial users, but it’s open as well to higher ed and science users that would be outside the NSF sphere, so we can work with other nonprofit research institutes, with other universities and foundations as well, so it could benefit the science community in that regard.”

For its part, Core Scientific sees potential in the academic research computing sector. “We’ve reached out to the NSF to say, what would the world look like if we could create a reserve of high performance computing, and aggregate all of that in the U.S., for educational reasons, not necessarily commercial,” said Ferreira. “We welcome the opportunity to create a plexus.org that is free for NSF researchers.”

“[It’s] like a strategic oil reserve, but an HPC reserve that can be deployed when we have the next COVID-type situation,” said Ferreira, describing what sounds a lot like a plan that’s already in motion: the National Strategic Computing Reserve (see our recent coverage).

Of course, HPC resources are too precious to be literally held in reserve (as in waiting idle), but they are subject to reprioritization; that’s exactly what happened on a grand scale in response to the COVID-19 pandemic, and it’s what happens on a lesser scale (usually satisfied by “discretionary allocations”) for all the usual disasters: seasonal storm, flood and flu modeling, for example. Cloud/HPC cycle brokering itself is not new; RStor, R-Systems, Parallel Works, Rescale, Nimbis Services and UberCloud all play in this space, and Cycle Computing briefly offered such a service in its early days, before it was acquired by Microsoft.

Core Scientific says its Plexus AI and HPC platform is used by a number of major companies in industries including healthcare, manufacturing and telecommunications. The company is led by Kevin Turner, former COO of Microsoft (and previously CEO of Sam’s Club and CIO of Walmart). Core Scientific recently achieved AWS High Performance Computing Competency status. The company is also working with Hewlett Packard Enterprise (HPE) to deliver its software solutions in the new HPE GreenLake cloud services for HPC.

The SDSC Expanse Plexus portal is open and ready for use by industrial research and engineering users from across the U.S.

 

The Plexus dashboard. Source: Core Scientific


Expanse architecture: The Dell system is organized into 13 SDSC Scalable Compute Units (SSCUs), each comprising 56 standard CPU nodes and four GPU nodes, connected with 100 Gb/s HDR InfiniBand. Each standard CPU node has dual 64-core AMD Epyc Rome processors, 256GB of main memory, a 1.6TB NVMe drive, and an HDR 100 Gb/s interconnect. Each GPU node has four Nvidia V100 GPUs with 32GB of GPU memory and NVLink, dual 20-core Intel Xeon 6248 host processors, 384GB of host memory, a 1.6TB NVMe drive, and an HDR100 interconnect. Each SSCU totals 7,168 compute cores (not including GPU node host cores) and 16 V100 GPUs. There is a 12PB “Performance Storage” system based on the Lustre parallel filesystem and a 7PB “Object Storage” system based on the Ceph storage platform.
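As a quick sanity check, the per-SSCU and system-wide totals follow directly from the figures above. A minimal sketch, assuming exactly 13 identical SSCUs as described (all inputs are taken from the published spec):

```python
# Derive per-SSCU and system-wide totals from the published Expanse spec.
# Figures come from the architecture description above; the script simply
# makes the arithmetic explicit.

SSCUS = 13                        # one SSCU per rack
STD_NODES_PER_SSCU = 56           # standard CPU nodes per SSCU
GPU_NODES_PER_SSCU = 4            # GPU nodes per SSCU
CORES_PER_STD_NODE = 2 * 64       # dual 64-core AMD Epyc Rome
GPUS_PER_GPU_NODE = 4             # Nvidia V100s with NVLink

cores_per_sscu = STD_NODES_PER_SSCU * CORES_PER_STD_NODE
gpus_per_sscu = GPU_NODES_PER_SSCU * GPUS_PER_GPU_NODE
nodes_total = SSCUS * (STD_NODES_PER_SSCU + GPU_NODES_PER_SSCU)

print(f"Compute cores per SSCU (excl. GPU hosts): {cores_per_sscu}")  # 7,168
print(f"V100 GPUs per SSCU: {gpus_per_sscu}")                         # 16
print(f"Nodes across 13 SSCUs: {nodes_total}")                        # 780, the ~800 cited
print(f"Total standard-node cores: {SSCUS * cores_per_sscu}")         # 93,184
print(f"Total V100 GPUs: {SSCUS * gpus_per_sscu}")                    # 208
```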

 
