NERSC Awards 250K GPU Node Hours on Perlmutter to 14 QIS Projects

January 25, 2022

Jan. 25, 2022 — Following a request for proposals issued in November 2021, NERSC has awarded a total of 250,000 Perlmutter GPU node hours to 14 quantum information science (QIS) projects. The awards were made through the NERSC QIS@Perlmutter program, with time allocated from the NERSC Director’s Reserve. The goal of the program is to allow researchers to use Perlmutter to help develop QIS devices and techniques for the advancement of science.

Researchers from all areas of quantum information science were encouraged to apply, including quantum simulation of materials and chemical systems, algorithms for the compilation of quantum circuits, error mitigation for quantum computing, and the development and testing of hybrid quantum-classical algorithms.

Awards were made to researchers in fields ranging from materials science, chemistry, and computer science to high energy physics, machine learning, applied mathematics, and condensed matter physics. Nine of the lead investigators are from national laboratories, four from industry, and one from academia.

Here are the 14 QIS projects being awarded GPU node hours on Perlmutter:

Quantum Computing for Materials Science: Simulation of Defects in Materials for Quantum Information Science

  • Principal Investigator: Marco Govoni, Argonne National Laboratory
  • Science Area: Materials Science
  • GPU Node Hours: 25K
  • Point defects in semiconductors are promising candidates for qubits and quantum sensors. In order to realize this technology, large-scale quantum simulations of correlated electronic states are needed to understand the optoelectronic properties of these materials.

Scalable Noisy Quantum Circuit Simulation through NWQSim

  • Principal Investigator: Ang Li, Pacific Northwest National Laboratory
  • Co-Investigators: NVIDIA’s cuQuantum and NVSHMEM teams
  • Science Areas: Computer Science, Chemistry
  • GPU Node Hours: 20K
  • Simulations of quantum programs in classical HPC systems are essential for validating quantum results, understanding the effects of noise, and designing robust quantum algorithms. The goal of this project is to use Perlmutter to analyze the fidelity of quantum circuit execution for algorithms for quantum chemistry and quantum error correction.

‘Divide and Conquer’ Approach to Machine Learning-Based Decoders for the Surface Code

  • Principal Investigator: Ritajit Majumdar, Indian Statistical Institute
  • Science Area: Computer Science
  • GPU Node Hours: 25K
  • Surface codes are a family of quantum error-correcting codes expected to underpin error correction in fault-tolerant quantum computers. Better decoders are needed before surface codes can be applied at scale. Machine learning-based decoders, in particular, learn the error probabilities of the circuit and decode in linear time, allowing them to outperform many existing decoders.

Maximum Likelihood Estimation of Parameterized Quantum Noise Models

  • Principal Investigator: Vincent R. Pascuzzi, Brookhaven National Laboratory
  • Science Area: Computer Science (error correction)
  • GPU Node Hours: 12.5K
  • Models for quantum noise are necessary for NISQ devices (which don’t have full error correction) to achieve quantum advantage. This project aims to use quantum data and maximum likelihood estimation to enhance these models, supporting hardware and software co-design.
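As a toy illustration of the statistical machinery involved (not the project's actual noise models), the snippet below fits a single depolarizing-error rate to simulated single-qubit measurement counts by maximum likelihood; the channel model, shot counts, and grid search are all illustrative assumptions:

```python
import math
import random

def simulate_shots(p_true, shots, rng):
    """Measure |0> after a depolarizing channel: P(read 1) = p/2."""
    return sum(1 for _ in range(shots) if rng.random() < p_true / 2)

def log_likelihood(p, ones, shots):
    """Binomial log-likelihood of the observed counts under noise rate p."""
    q = p / 2  # probability of a flipped readout
    return ones * math.log(q) + (shots - ones) * math.log(1 - q)

def mle_noise_rate(ones, shots, grid=10_000):
    """Grid-search maximum likelihood estimate of the depolarizing rate."""
    best_p, best_ll = None, -math.inf
    for i in range(1, grid):
        p = i / grid
        ll = log_likelihood(p, ones, shots)
        if ll > best_ll:
            best_p, best_ll = p, ll
    return best_p

rng = random.Random(0)
ones = simulate_shots(0.1, 100_000, rng)
print(round(mle_noise_rate(ones, 100_000), 3))  # close to the true rate 0.1
```

The same principle, a likelihood maximized over noise-model parameters, extends to the multi-parameter models this project targets, where GPU acceleration becomes essential.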

Surrogate Models for Variational Quantum Algorithms

  • Principal Investigator: Wim Lavrijsen, Lawrence Berkeley National Laboratory
  • Co-Investigators: Juliane Müller, Ed Younis, and Costin Iancu, Lawrence Berkeley National Laboratory
  • Science Area: Computer Science, Optimization
  • GPU Node Hours: 25K
  • Variational quantum algorithms (VQAs) combine classical optimizers with quantum hardware to achieve an overall optimization goal. In a VQA, the QPU evaluates a costly objective function, while the classical side updates the optimization parameters. Surrogate methods, computationally cheap approximations of the costly objective function, can be used to guide the selection of points in the optimization search.
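The surrogate idea can be sketched classically; in this toy example the objective is a hypothetical stand-in for a QPU expectation value (not the project's actual method), and a quadratic surrogate fitted through the three best samples proposes each new query point:

```python
import math

def qpu_cost(theta):
    """Hypothetical stand-in for a costly QPU expectation value <H>(theta)."""
    return 1.0 - math.cos(theta - 0.8)  # true minimum at theta = 0.8

def surrogate_minimum(p0, p1, p2):
    """Vertex of the quadratic surrogate through three (theta, cost) samples."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    num = (x1 - x0) ** 2 * (y1 - y2) - (x1 - x2) ** 2 * (y1 - y0)
    den = (x1 - x0) * (y1 - y2) - (x1 - x2) * (y1 - y0)
    return x1 - 0.5 * num / den if den else x1

def surrogate_optimize(cost, thetas, iters=20):
    """Each step asks the cheap surrogate where to spend one expensive
    objective evaluation, keeping only the three best samples."""
    points = sorted(((t, cost(t)) for t in thetas), key=lambda p: p[1])[:3]
    for _ in range(iters):
        t_new = surrogate_minimum(*points)
        if any(abs(t_new - t) < 1e-12 for t, _ in points):
            break                            # surrogate has converged
        points.append((t_new, cost(t_new)))  # one "QPU" evaluation
        points = sorted(points, key=lambda p: p[1])[:3]
    return points[0]

theta_best, _ = surrogate_optimize(qpu_cost, [0.0, 1.0, 2.0])
print(theta_best)  # converges near 0.8
```

Because each surrogate query is essentially free while each real objective evaluation consumes QPU time, this style of optimizer can sharply reduce the number of quantum circuit executions needed.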

Quantum Circuit Synthesis via Large-Scale Randomized Optimizations

  • Principal Investigator: Yu-Hang Tang, Lawrence Berkeley National Laboratory
  • Science Area: Computer Science
  • GPU Node Hours: 5K
  • Robust, scalable, and approximate circuit synthesis is an indispensable step in the quantum workflow, requiring large-scale classical computations.

Large-scale Hybrid Quantum Tasking and Simulation with PennyLane

  • Principal Investigator: Lee J. O’Riordan, Xanadu Quantum Technologies Inc.
  • Co-Investigators: Sean Oh, Xanadu Quantum Technologies Inc.
  • Science Area: Computer Science, Machine Learning
  • GPU Node Hours: 25K
  • Workflows that incorporate quantum and classical components are becoming increasingly common. These workflows can be supported by Xanadu’s open-source PennyLane software, which, with Perlmutter’s resources, will allow users to run large-scale hybrid quantum computations efficiently, benchmark performance on quantum hardware and simulators, and investigate the possibility of demonstrating quantum advantage in machine learning applications.

Quantum Deep Learning for High Energy Physics Data Analysis

  • Principal Investigator: Shinjae Yoo, Brookhaven National Laboratory
  • Co-Investigators: Prof. Sau Lan Wu, University of Wisconsin-Madison
  • Science Area: High Energy Physics, Machine Learning
  • GPU Node Hours: 2.5K
  • There is hope that quantum machine learning could outperform classical machine learning in classification power by exploiting a large number of qubits. One way to work towards this goal is by uniting high-energy physics analysis techniques with quantum computing advances.

Implementation of Large Qubitization Iterates on a Tensor Network Quantum Simulator

  • Principal Investigator: Nathan Fitzpatrick, Cambridge Quantum Computing
  • Science Area: Computer Science, Chemistry
  • GPU Node Hours: 25K
  • Qubitization is one of the leading approaches for quantum advantage in Hamiltonian simulation, but it has so far been difficult to test computationally because of the large number of qubits required. Perlmutter’s GPUs combined with NVIDIA’s cuTensorNet SDK will enable preparing and testing these fault-tolerant circuit primitives.

The Entanglement Barrier in the Quantum Approximate Optimization Algorithm

  • Principal Investigator: Matthew Reagor, Rigetti Computing
  • Co-Investigators: Maxime Dupont, Lawrence Berkeley National Laboratory
  • Science Area: Optimization
  • GPU Node Hours: 25K
  • The Quantum Approximate Optimization Algorithm (QAOA) is a hybrid quantum-classical algorithm that seeks to solve combinatorial optimization problems. Investigating the role of entanglement in QAOA on fixed lattices of physical qubits, such as solid-state qubit fabrics, will shed light on the limits of classical simulation and pave the way toward quantum advantage.
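To make the hybrid structure concrete, here is a minimal pure-Python p = 1 QAOA statevector simulation for MaxCut on a toy two-qubit graph (a single edge); the graph and parameter values are illustrative assumptions, not drawn from the project:

```python
import cmath
import math

EDGES = [(0, 1)]  # toy graph: a single edge; MaxCut optimum = 1
N = 2             # number of qubits

def cut_value(z, edges):
    """Number of edges cut by the bitstring z (encoded as an int)."""
    return sum(1 for u, v in edges if (z >> u) & 1 != (z >> v) & 1)

def apply_rx(state, q, beta):
    """Apply the mixer e^{-i beta X} to qubit q of the statevector."""
    c, s = math.cos(beta), -1j * math.sin(beta)
    new = state[:]
    for z in range(len(state)):
        if not (z >> q) & 1:
            z1 = z | (1 << q)
            new[z] = c * state[z] + s * state[z1]
            new[z1] = s * state[z] + c * state[z1]
    return new

def qaoa_expectation(gamma, beta):
    """p = 1 QAOA: phase separator, then mixer, then measure <C>."""
    dim = 1 << N
    state = [1 / math.sqrt(dim)] * dim            # uniform superposition
    state = [a * cmath.exp(-1j * gamma * cut_value(z, EDGES))
             for z, a in enumerate(state)]        # e^{-i gamma C}
    for q in range(N):
        state = apply_rx(state, q, beta)
    return sum(abs(a) ** 2 * cut_value(z, EDGES) for z, a in enumerate(state))

print(round(qaoa_expectation(math.pi / 2, math.pi / 8), 6))  # optimal cut: 1.0
```

For this single-edge instance, (γ, β) = (π/2, π/8) reaches the optimal cut value exactly; for larger graphs the classical half of the hybrid loop must search over the angles, and the statevector's exponential growth is what makes GPU-scale classical simulation necessary.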

Benchmarking QCLAB++ on Perlmutter GPUs

  • Principal Investigator: Roel van Beeumen, Lawrence Berkeley National Laboratory
  • Science Area: Applied Mathematics, Computer Science
  • GPU Node Hours: 12.5K
  • GPU-enabled, high-performance quantum linear algebra computations are necessary for efficient compilation of quantum circuits.

Simulating Boson Localization with Quantum Computers

  • Principal Investigator: Lindsay Bassman, Lawrence Berkeley National Laboratory
  • Science Area: Condensed Matter Physics, Materials Science
  • GPU Node Hours: 5K
  • Understanding disorder-induced phase transitions is important for utilizing cold atoms and superconductors in quantum hardware. Computationally studying quantum phases of matter requires large-scale quantum simulators and HPC resources.

Large-Scale Model-Based Optimization by Quantum Monte Carlo Integration

  • Principal Investigator: Kwangmin Yu, Brookhaven National Laboratory
  • Science Area: Optimization
  • GPU Node Hours: 25K
  • Quantum computing promises to fundamentally alter how optimization techniques are applied to large-scale problems of practical interest, problems considered intractable even on the world’s fastest classical supercomputers. Developing distributed, hybrid quantum-classical algorithms for optimal decision making that scale to thousands of decision variables requires distributed, GPU-accelerated computing nodes.
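For context, classical Monte Carlo integration needs on the order of 1/ε² samples to reach error ε, the scaling that quantum amplitude-estimation-based integration aims to improve quadratically. A minimal classical baseline (the integrand here is an arbitrary example, not the project's workload):

```python
import math
import random

def mc_integrate(f, n, rng):
    """Classical Monte Carlo estimate of the integral of f over [0, 1]."""
    return sum(f(rng.random()) for _ in range(n)) / n

rng = random.Random(42)
exact = 1 - math.cos(1)            # integral of sin(x) over [0, 1]
for n in (100, 10_000, 1_000_000):
    est = mc_integrate(math.sin, n, rng)
    print(n, abs(est - exact))     # error shrinks roughly as 1/sqrt(n)
```

A quantum Monte Carlo integrator would replace the 1/√n convergence with roughly 1/n, which is where the hoped-for advantage on large decision problems comes from.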

Quantum-Inspired Approaches for Full Configuration Interaction

  • Principal Investigator: Robert M. Parrish, QC Ware Corporation
  • Co-Investigators: Sam Stanwyk, NVIDIA
  • Science Area: Computer Science, Chemistry
  • GPU Node Hours: 25K
  • New quantum algorithms are needed for large-scale quantum chemistry calculations. Perlmutter will enable computer scientists to develop and simulate how these algorithms will work on quantum computers.

About NERSC and Berkeley Lab

The National Energy Research Scientific Computing Center (NERSC) is a U.S. Department of Energy Office of Science User Facility that serves as the primary high-performance computing center for scientific research sponsored by the Office of Science. Located at Lawrence Berkeley National Laboratory, the NERSC Center serves more than 7,000 scientists at national laboratories and universities researching a wide range of problems in combustion, climate modeling, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a DOE national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. Department of Energy. Learn more about computing sciences at Berkeley Lab.


Source: NERSC
