The Blueprint for the National Strategic Computing Reserve

By Oliver Peckham

October 12, 2021

Over the last year, the HPC community has been buzzing with the possibility of a National Strategic Computing Reserve (NSCR). An in-utero brainchild of the COVID-19 High-Performance Computing Consortium, an NSCR would serve as a Merchant Marine for urgent computing, marshaling the United States’ HPC resources to respond swiftly and powerfully to national and global crises. Now, the National Science and Technology Council (NSTC) has helped the idea take another step toward reality by releasing a “blueprint” for an NSCR.

The report was prepared by NSTC’s Subcommittee on Networking and Information Technology Research and Development and its Subcommittee on Future Advanced Computing Ecosystems. It incorporates responses to a request for information (RFI) issued last December by the National Science Foundation (NSF) and the White House Office of Science and Technology Policy (OSTP), which solicited “potential concepts and approaches for a National Strategic Computing Reserve.”


Learning from the COVID-19 HPC Consortium

The idea of the NSCR draws heavily on the experience of the COVID-19 HPC Consortium, which has supported over 100 projects through 43 members with an aggregate computing power exceeding 600 petaflops. “The positive outcomes and lessons learned from the COVID-19 HPC Consortium led to the conceptualization of the National Strategic Computing Reserve, envisioned as a coalition of experts and resource providers that can be mobilized quickly to provide critical computational resources—including compute resources, software, data, and technical expertise—in times of national or international urgent need,” the blueprint reads.

[Image: The current numbers of the COVID-19 HPC Consortium.]

The COVID-19 HPC Consortium, the authors say, conveyed some key lessons:

  • Leveraging existing processes speeds the collaborative process;
  • Early engagement with the stakeholder community is critical;
  • A flexible intellectual property framework is important to ensuring impactful research;
  • Breaking from the COVID-19 HPC Consortium model, it is valuable to address more than fundamental research; and
  • Perhaps most crucially, “substantial time and effort are required to make resources and services available to researchers” and, as a result, “it is critical to have a standing capability to support the proposal submission and review process, as well as coordination with service providers to provide the necessary access to resources and services[.]”

“While the Consortium has been successful and effective,” the authors conclude, “earlier coordination of priorities and reviews with NIH, FEMA, and CDC could have improved its effectiveness, particularly in the area of patient-level projects. … The ad hoc creation and operation of the Consortium had significant impacts on the workloads of the personnel involved as well as on the communities that are typically served by the resources that were diverted to address the pandemic. Moreover, the shift in focus of the resources to pandemic-related research delayed other [science and engineering] projects, putting on hold advances in the broader research ecosystem. As a result, there have been some potentially undesirable implications—for example, for long-term competitiveness—of diverting human and computing resources to emergency response.”


The blueprint for the NSCR

The authors combined this experience from the COVID-19 HPC Consortium with the responses to the RFI, which asked respondents about everything from how and when an NSCR should be activated to how an NSCR should engage in community outreach and communications. The RFI, they revealed, received responses from seven organizations and individuals, including the executive committee of the COVID-19 HPC Consortium, the Cybersecurity and Infrastructure Security Agency, HPE, Lawrence Livermore National Laboratory, and Rensselaer Polytechnic Institute.

“The NSCR is envisioned as a coalition of experts and resource providers … spanning government, academia, nonprofits/foundations, and industry,” the authors write, “supported by appropriate coordination structures and mechanisms that can be mobilized quickly … The NSCR blueprint comprises volunteer subject-matter experts working with computing resource providers to make advanced computing and data resources and services available to respond to crises.”

The authors detail eight principal functions for an NSCR:

  • Establishing clear policies, processes, and procedures for activating and operating the NSCR in times of crisis;
  • Recruiting and sustaining a group of advanced computing and data resource and service provider members in government, industry, and academia;
  • Developing relevant agreements with members, including provisions for augmented capacity and/or cost reimbursement for deployable resources, for the urgent deployment of computing and supporting resources and services, and for provision of incentives for non-emergency participation;
  • Developing methods and tools for making critical proprietary datasets securely available to compute platforms and researchers when needed;
  • Developing a set of agreements to enable the NSCR to collaborate with Federal agencies and industries in preparation for and execution of NSCR deployments;
  • Executing a series of preparedness exercises with some recurring frequency to test and maintain the NSCR;
  • During a crisis,
    • Executing procedures to receive project proposals, review and prioritize them, and allocate computing resources to approved projects (a hypothetical version of this workflow is sketched just after this list);
    • Tracking project progress and disseminating products (including software and data) and outputs to ensure effective use and impact; and
    • Participating in the broader national response as an active partner; and
  • Following a crisis,
    • Managing the return to normal operations of the involved resources;
    • Implementing changes from post-crisis lessons learned; and
    • Documenting experiences and outcomes.
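
To make the crisis-time functions above more concrete, here is a minimal sketch, in Python, of how a proposal intake, review, prioritization, and allocation pipeline might look. This is purely illustrative: the blueprint prescribes no data structures, scoring rules, or algorithms, and every name, field, and threshold below (Proposal, ResourceProvider, the 3.0 score cutoff, the greedy matching) is an invented assumption.

    # Hypothetical sketch only -- the blueprint specifies no data structures,
    # scoring rules, or algorithms. Names, fields, and thresholds are invented.
    from dataclasses import dataclass, field

    @dataclass
    class Proposal:
        title: str
        node_hours_requested: int
        review_score: float = 0.0   # assigned by volunteer subject-matter experts
        approved: bool = False

    @dataclass
    class ResourceProvider:
        name: str
        node_hours_available: int
        allocations: dict = field(default_factory=dict)

    def triage(proposals, reviewers):
        """Score each proposal and approve those above an (invented) cutoff."""
        for p in proposals:
            p.review_score = sum(score(p) for score in reviewers) / len(reviewers)
            p.approved = p.review_score >= 3.0
        # Highest-priority approved projects are matched to resources first.
        return sorted((p for p in proposals if p.approved),
                      key=lambda p: p.review_score, reverse=True)

    def allocate(approved, providers):
        """Greedily match approved projects to providers with spare capacity."""
        for p in approved:
            for site in providers:
                if site.node_hours_available >= p.node_hours_requested:
                    site.node_hours_available -= p.node_hours_requested
                    site.allocations[p.title] = p.node_hours_requested
                    break

    # Example activation: a stand-in review panel and two stand-in providers.
    providers = [ResourceProvider("Site-A", 50_000), ResourceProvider("Site-B", 20_000)]
    panel = [lambda p: 4.0]   # placeholder for expert reviewers
    approved = triage([Proposal("Epidemic forecasting", 30_000),
                       Proposal("Therapeutic docking", 15_000)], panel)
    allocate(approved, providers)

A real reserve would presumably layer security reviews, agency coordination, and reporting on top of anything like this; the point is only that the blueprint’s receive-review-prioritize-allocate sequence maps naturally onto a standing, repeatable pipeline rather than an ad hoc effort.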

The NSCR, of course, would include a variety of resource providers supplying everything from access to supercomputers or cloud resources to access to software stacks and datasets. The authors say it would be “important to work … to establish necessary data-sharing processes and policies to ensure that relevant datasets are available for exercise and training purposes as well as during crisis activations.” Further, they write, a “range of incentive and/or compensation mechanisms may be considered for resource providers,” such as funding additional capacity for a system in exchange for access to a much larger portion of that system during a crisis.
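
As a purely illustrative reading of that capacity-for-access idea, the short sketch below works through the arithmetic under invented numbers: the reserve funds a fraction of extra capacity at a site in exchange for the right to claim a much larger share of the whole system during an activation. None of the figures come from the blueprint.

    # Illustrative arithmetic only; the blueprint proposes the mechanism
    # but gives no numbers. All figures below are invented.
    def crisis_capacity(base_nodes: int, funded_fraction: float, crisis_share: float) -> dict:
        """Nodes the reserve may claim under a capacity-for-access agreement.

        funded_fraction: extra capacity the reserve pays to add (0.10 = +10%).
        crisis_share:    share of the whole expanded system promised in a crisis.
        """
        expanded = int(base_nodes * (1 + funded_fraction))
        reserve_claim = int(expanded * crisis_share)
        displaced = max(0, reserve_claim - (expanded - base_nodes))
        return {"expanded_system": expanded,
                "reserve_claim_in_crisis": reserve_claim,
                "displaced_normal_workload": displaced}

    # Fund 10% more nodes; gain rights to 50% of the system during a crisis.
    print(crisis_capacity(base_nodes=1000, funded_fraction=0.10, crisis_share=0.50))
    # {'expanded_system': 1100, 'reserve_claim_in_crisis': 550,
    #  'displaced_normal_workload': 450}

The example also illustrates the trade-off the authors flag elsewhere: even with funded headroom, a large crisis claim still displaces normal workloads, which is precisely the competitiveness concern raised about the Consortium’s pandemic-era diversions.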


Next steps for the NSCR

To coordinate both the resource providers and the users, the authors write, “standing up an NSCR Program Office is recommended, to be the overarching entity for operating the NSCR. The Program Office will implement the principal functions [of the NSCR].” For instance, they say, the office would develop policies and criteria for activating the reserve, onboarding users, and coordinating among agencies.

The report estimates that this program office would cost around two million dollars a year. Developing and deploying an integrated cyberinfrastructure platform to support the dynamic federation and distribution of resources across the NSCR’s stakeholders is estimated at another two million dollars per year, for a combined baseline of roughly four million dollars annually. These estimates exclude the cost of resource acquisition itself, which would depend entirely on the quantity of resources procured.

“Increasingly, the Nation’s computing infrastructure—and ready access by experts to this infrastructure, along with critical scientific and technical support in times of crisis—is critical to the Nation’s safety, security, and resiliency,” the authors conclude. “The Federal Government’s next steps to building on the blueprint include establishing an interagency group to conduct deeper dives into the various structural and operational components of the NSCR outlined in this document; organizing community events to explore the NSCR’s role in specific emergency scenarios; and establishing the requisite relationships with other reserves as well as other entities responsible for coordinating and responding to emergencies.”

Read More

National Strategic Computing Reserve: A Blueprint (full report)

The COVID-19 HPC Consortium Looks Ahead to a ‘National Strategic Computing Reserve’

Prepare to Pivot HPC Faster Before the Next Crisis

Between ‘COVID Cabinets’ and Consortia, Summit Isn’t Done with Pandemic Research
