PEARC20 Plenary Introduces Five Upcoming NSF-Funded HPC Systems

By Ken Chiacchia, Pittsburgh Supercomputing Center/XSEDE

July 30, 2020

Five new HPC systems—three National Science Foundation-funded “Capacity” systems and two “Innovative Prototype/Testbed” systems—will be coming online through the end of 2021. John Towns, principal investigator (PI) for XSEDE, introduced panelists who described their upcoming systems at the PEARC20 virtual conference on July 29, 2020.

The systems are part of NSF’s “Advanced Computing Systems & Services: Adapting to the Rapid Evolution of Science and Engineering Research” solicitation. The “Capacity” systems, which will support a range of computation and data analytics in science and engineering, are expected to be available for allocation via XSEDE’s process for projects starting Oct. 1, 2021. The “Innovative” platforms, which will deploy specialized hardware tailored for artificial intelligence, will each offer an early user access period, beginning as soon as fall 2020 for Neocortex, followed by a production period as the platforms mature.

The Practice and Experience in Advanced Research Computing (PEARC) Conference Series is a community-driven effort built on the successes of the past, with the aim to grow and be more inclusive by involving additional local, regional, national, and international cyberinfrastructure and research computing partners spanning academia, government and industry. Sponsored by the ACM, the world’s largest educational and scientific computing society, PEARC20 is now taking place online through July 31.

This year’s theme, “Catch the Wave,” embodies the spirit of the community’s drive to stay on pace and in front of all the new waves in technology, analytics, and a globally connected and diverse workforce. Scientific discovery and innovation require a robust, innovative and resilient cyberinfrastructure to support the critical research required to address world challenges in climate change, population, health, energy and environment.

Anvil: Composable, Interactive, User-Focused

Anvil, the first of the three NSF Category I “Capacity Systems,” was introduced by principal investigator Carol Song, senior research scientist and director of Scientific Solutions with Research Computing at Purdue University. Song stressed the capabilities of the $9.9-million system in providing composability and interactivity to meet the increasing demand for computational resources, enable new computational paradigms, expand HPC to non-traditional research domains, and train the next generation of researchers and HPC workforce.

“It’s not just the CPU nodes or the GPU nodes,” Song said. “It’s the entire ecosystem that focuses on getting more users onto the significant resources.”

Built by Purdue in partnership with Dell, DDN, and Nvidia, Anvil will feature:

  • 1,000 liquid-cooled nodes based on AMD’s upcoming Milan CPU architecture
  • A 100 Gbps HDR InfiniBand interconnect
  • 10 PB of disk scratch and 3 PB of flash burst buffer
  • 16 GPU nodes, each with four Nvidia “Volta Next” GPUs
  • 32 large-memory nodes with 1 TB of RAM apiece
  • A composable cloud subsystem
  • Archival and persistent storage
  • A production science gateway

The system, which will have a peak performance of 5.3 petaflops, will become operational by Sept. 30, 2021, with early user access beginning in the summer of 2021. It will be 90% allocated through XSEDE’s XRAC allocations process, with the remainder allocated at Purdue’s discretion.

Delta: The Mark of Change

Bill Gropp, director of the National Center for Supercomputing Applications, University of Illinois Urbana-Champaign, introduced the Category I Delta system. With more than 800 late-model Nvidia GPUs, the $10-million resource will be the largest GPU system by FLOPS in NSF’s portfolio at launch.

The system is named for the Greek letter: “the name was chosen to indicate change,” said Gropp, PI of the new resource. “There’s a lot of change in the hardware and software and the way we make use of the systems.” Delta is intended to “help drive a broader adoption of GPU technology past the end of Dennard scaling.”

Delta will feature:

  • A mix of GPU configurations of late-model Nvidia GPUs to enable varied applications, surveying new and emerging research domains that can benefit from the technology
  • A non-POSIX file system that presents a POSIX-like interface but drops the requirement of strict adherence to POSIX semantics, improving system uptime and performance while allowing most applications to run without modification (see the sketch after this list)
  • A rich variety of interfaces, from command-line to science gateways, partnering with the Science Gateway Community Institute to develop practices for blending interactive and batch computing with visualization
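
The relaxed-semantics approach works because most applications touch storage only through the ordinary open/write/rename interface. As a generic illustration (plain Python, not Delta’s actual file system API), the common checkpoint-then-rename pattern below depends on that POSIX-like interface but not on strict POSIX guarantees, such as a write being instantly visible to every process on every node:

```python
import os

# Illustrative sketch only, not Delta's actual file system API: a common
# checkpoint-then-rename pattern that relies solely on the ordinary
# open/write/rename interface, not on strict POSIX consistency semantics.
def write_checkpoint(path: str, data: bytes) -> None:
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())   # push data to storage before publishing
    os.replace(tmp, path)      # rename publishes the finished checkpoint

write_checkpoint("model.ckpt", b"\x00" * 1024)
```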

Delta, like Anvil, will be 90% allocated through XSEDE and is slated to start operations on Oct. 1, 2021.

Jetstream2: An Approaching Front in Cloud HPC

Jetstream2, the final new NSF Category I system, was introduced by PI David Hancock, director for advanced cyberinfrastructure at Indiana University. Building on the success of the Jetstream system, the new $10-million supercomputer will serve a similar role in interactive, configurable computing for research and education, thanks in part to agreements with Amazon, Google, and Microsoft to support cloud compatibility.

The configuration process for Jetstream2 is in its final phases, Hancock said, but the new system will feature:

  • An enhanced IaaS model with improved orchestration support, elastic virtual clusters, and federated JupyterHubs (see the sketch after this list)
  • A commitment to over 99% uptime
  • A revamped user interface with unified instance management and multi-instance launch
  • Over 57,000 next-gen AMD Epyc cores
  • 360 Nvidia A100 GPUs, providing vGPUs via the Multi-Instance GPU (MIG) feature
  • Over 18 PB of storage
  • A 100 GbE Mellanox network
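
The original Jetstream exposed its IaaS model through OpenStack; assuming Jetstream2 carries that over, a minimal sketch of programmatic instance provisioning with the openstacksdk library could look like the following (the cloud profile, image, and flavor names are hypothetical):

```python
import openstack

# Assumes a "jetstream2" entry in clouds.yaml holding valid credentials
# (hypothetical profile name).
conn = openstack.connect(cloud="jetstream2")

# Launch a single virtual machine; the image and flavor names below are
# placeholders, not Jetstream2's actual catalog.
server = conn.create_server(
    name="demo-instance",
    image="ubuntu-20.04",
    flavor="m1.small",
    auto_ip=True,
    wait=True,
)
print(server.status)  # ACTIVE once the instance is up
```

Elastic virtual clusters and federated JupyterHubs would sit a layer above calls like this, growing or shrinking the pool of instances as demand changes.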

The system, which will combine cyberinfrastructure from Indiana University, Arizona State University, Cornell University, the Texas Advanced Computing Center, and the University of Hawaii, is planned to begin early operations in August 2021 and enter production by October 2021. Additional partners include the University of Arizona, Johns Hopkins University (Galaxy team), and UCAR (Unidata team). The system vendor partner for the project will be Dell Inc. Jetstream2 will be XSEDE-allocated.

Neocortex: The Next Leap Forward in Deep Learning

Paola Buitrago, director of Artificial Intelligence and Deep Learning at the Pittsburgh Supercomputing Center (PSC) at Carnegie Mellon University and the University of Pittsburgh, presented on the center’s new NSF Category II system, Neocortex. Named for the brain’s center for higher functions, the new machine will serve as an experimental testbed of new technology to accelerate deep learning by orders of magnitude, similar to the sea change introduced by GPU technology in 2012.

“It’s innovative and it’s meant to be exploratory,” PI Buitrago said. “In particular we have one goal that we would like to scale this technology … we aim to engage a wide audience and foster adoption of innovative technologies” in deep learning.

The $5-million system will pair Cerebras’s CS-1 and Hewlett Packard Enterprise (HPE) Superdome Flex technology to provide 800,000 AI-optimized cores with an exceptionally fast interconnect. Neocortex will feature:

  • Two Cerebras CS-1 servers, each with a Wafer Scale Engine processor and its high-performance on-chip memory and interconnect, integrated with an HPE Superdome Flex server via twelve 100 Gb/s Ethernet links apiece
  • One HPE Superdome Flex large-memory system, featuring 24 TB of coherent shared memory, 32 Intel Xeon Platinum CPUs, and 205 TB of high-performance NVMe SSD storage
  • High usability, through support of the popular TensorFlow and PyTorch frameworks, as well as other means (see the sketch after this list)
  • Federation with the upcoming Bridges-2 supercomputing platform via 8 HDR-200 links, enabling complete machine learning workflows and high-speed access to Bridges-2’s 15 PB Lustre file system and 8+ PB of tape archive, which will be jointly managed by the HPE Data Management Framework (DMF).
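
Support for the stock frameworks means users write ordinary TensorFlow or PyTorch code and leave wafer-scale mapping to the Cerebras software stack. As a framework-level sketch only (the Cerebras-specific compilation and placement step is assumed, not shown), a standard PyTorch training step looks like this:

```python
import torch
import torch.nn as nn

# A small classifier; on Neocortex the same model definition would be
# handed to the Cerebras toolchain for wafer-scale execution (that step
# is assumed here, not shown).
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 784)           # stand-in input batch
y = torch.randint(0, 10, (64,))    # stand-in labels

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()                    # backpropagate
optimizer.step()                   # update weights
print(float(loss))
```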

Neocortex will enter its early user program in the fall of 2020.

Voyager: Specialized Processors, Optimized Software for AI

Voyager, another $5-million NSF Category II system, was introduced by PI Amit Majumdar of the San Diego Supercomputer Center. Beginning with focused select projects in October 2021, the supercomputer will stress specialized processors for training and inference linked with a high-performance interconnect, x86 compute nodes, and a rich storage hierarchy.

“We are most interested to see this as an experimental machine and see its impact and engagement of the … user community,” Majumdar said. “So we will reach out to AI researchers from a wide variety of science, engineering and social sciences [fields], and there will be deep engagement with users.”

Supermicro Inc. and SDSC will jointly deploy Voyager, featuring:

  • Supermicro-integrated, AI-focused hardware to be determined, including specialized training and inference nodes attached to x86 compute nodes (see the sketch after this list)
  • Additional x86 nodes
  • Storage with the potential to experiment with different parallel file systems
  • DL frameworks such as TensorFlow and PyTorch
  • Software tools and libraries built for Voyager’s innovative architecture, enabling users to develop new AI techniques
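
Because Voyager separates specialized training nodes from inference nodes, a typical workflow would train a model on one class of node and serve predictions on the other. Below is a minimal, generic TensorFlow sketch of that split; it uses stand-in random data and no Voyager-specific tooling:

```python
import tensorflow as tf

# Training side: fit a small classifier on stand-in random data; on
# Voyager this step would target the specialized training nodes.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(10),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
x = tf.random.normal((64, 784))
y = tf.random.uniform((64,), maxval=10, dtype=tf.int32)
model.fit(x, y, epochs=1, verbose=0)

# Inference side: run the trained model on new inputs; on Voyager this
# step would target the inference nodes.
logits = model.predict(x[:1], verbose=0)
print(logits.shape)  # (1, 10)
```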

Specific early user applications intended for Voyager include using machine learning to improve trigger, event reconstruction, and signal-to-background discrimination in high-energy physics; achieving quantum-modeling-level accuracy in molecular simulations in chemistry, biophysics, and materials science; and analyzing satellite imagery.

Voyager’s three-year testbed phase, focused on deep engagement with select users, will be followed by a minimum of two years of XSEDE-allocated operations.
