Fluid HPC: How Extreme-Scale Computing Should Respond to Meltdown and Spectre

By Pete Beckman

February 15, 2018

Editor’s note: The Meltdown and Spectre vulnerabilities have spawned community-wide discussion about how best to satisfy the twin, but often competing, mandates for performance and security. In this position paper, Pete Beckman presents a high-level architectural view of how supercomputers and their infrastructure could be redesigned to address these kinds of vulnerabilities.

The Meltdown and Spectre vulnerabilities are proving difficult to fix, and initial experiments suggest security patches will impose significant performance penalties on HPC applications. Even as these patches are rolled out to current HPC platforms, it is worth exploring how future HPC systems could be better insulated from CPU or operating system security flaws that could cause massive disruptions. Surprisingly, most of the core concepts needed to build supercomputers resistant to a wide range of threats have already been invented and deployed in HPC systems over the past 20 years. Combining these technologies, concepts, and approaches would not only improve cybersecurity but also yield broader benefits: improved HPC performance, easier scientific software development, faster adoption of advanced hardware such as neuromorphic chips, and easy-to-deploy data and analysis services. This new form of “Fluid HPC” would do more than solve current vulnerabilities. As an enabling technology, Fluid HPC would be transformative, dramatically improving extreme-scale code development in the same way that virtual machine and container technologies made cloud computing possible and built a new industry.

In today’s extreme-scale platforms, compute nodes are essentially embedded computing devices that are given to a specific user during a job and then cleaned up and provided to the next user and job. This “space-sharing” model, where the supercomputer is divided up and shared by doling out whole nodes to users, has been common for decades. Several non-HPC research projects over the years have explored providing whole nodes, as raw hardware, to applications. In fact, the cloud computing industry uses software stacks to support this “bare-metal provisioning” model, and Ethernet switch vendors have also embraced the functionality required to support this model. Several classic supercomputers, such as the Cray T3D and the IBM Blue Gene/P, provided nodes to users in a lightweight and fluid manner. By carefully separating the management of compute node hardware from the software executed on those nodes, an out-of-band control system can provide many benefits, from improved cybersecurity to shorter Exascale Computing Project (ECP) software development cycles.

Updating HPC architectures and system software to provide Fluid HPC must be done carefully. In some places, changes to the core management infrastructure are needed. Many of the component technologies, however, were invented more than a decade ago and need only modest updating. Three key architectural modifications are required.

  1. HPC storage services and parallel I/O systems must be updated to use modern, token-based authentication. For many years, web-based services have used standardized technologies like OAuth to provide safe access to sensitive data, such as medical and financial records. Such technologies are at the core of many single-sign-on services that we use for official business processes. These token-based methods allow clients to connect to storage services and read and write data by presenting the appropriate token, rather than, for example, relying on client-side credentials and access from restricted network ports. Some data services, such as Globus, MongoDB, and Spark, already allow token-based authentication. As a side effect, this update to HPC infrastructure would permit DOE research teams to fluidly and easily configure new storage and data services, both locally and remotely, without needing special administration privileges. In the same way that a website such as OpenTable.com can accept Facebook or Google user credentials, an ECP data team could create a new service that easily accepted NERSC or ALCF credentials. Moving to modern token-based authentication will improve cybersecurity, too: compromised compute nodes would not be able to read another user’s data. Rather, they would have access only to the areas for which an authentication token had been provided by the out-of-band system management layer. (A sketch of this token-based access pattern follows this list.)
  2. HPC interconnects must be updated to integrate technology from software-defined networking (SDN). OpenFlow, an SDN standard, is already implemented in many commercial Ethernet switches. SDN allows massive cloud computing providers such as Google, Facebook, Amazon, and Microsoft to manage and separate traffic within a data center, preventing proprietary data from flowing past nodes that could be maliciously snooping. A compromised node must be prevented from snooping on other traffic or spoofing other nodes. Essentially, SDN decouples the network’s control plane from its data plane, so traffic routing can be programmed independently of the physical and logical configuration. Updating HPC interconnects to use SDN technologies would improve cybersecurity and also prevent errant HPC programs from interfering with other jobs. With SDN technology, a confused MPI process would not be able to send data to another user’s node, because the software-defined network for the user, configured by the external system management layer, would not route the traffic to unauthorized destinations. (A sketch of per-job flow rules follows this list.)
  3. Compute nodes must be efficiently reinitialized, clearing local state between user jobs. Many HPC platforms were designed to support rebooting and recycling compute nodes between jobs; decades ago, netbooting Beowulf clusters was common. By quickly reinitializing a node and carefully clearing previous memory state, data from one job cannot be leaked to another. Without this technique, a privilege-escalation vulnerability would let a user read data left on the node by the previous job and leave behind malware to watch future jobs. Restarting nodes before each job improves system reliability, too. Rebooting sounds simple; however, guaranteeing that RAM and even NVRAM are clean between reboots might require advanced techniques. Fortunately, several CPU companies have been adding memory encryption engines, and NVRAM producers have added similar features; purging the ephemeral encryption key is equivalent to clearing memory. This feature is used to instantly wipe modern smartphones, such as Apple’s iPhone. Wiping state between users can provide significant improvements to security and productivity. (A sketch of this cryptographic-erase idea follows this list.)
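To make item 1 concrete, here is a minimal Python sketch of a compute node presenting a job-scoped bearer token to a storage gateway. The gateway URL, path scheme, and token flow are illustrative assumptions rather than an existing facility API; the point is that authorization travels with each request instead of being inferred from the client’s identity or network location.

```python
# Hypothetical storage gateway; a real facility would publish its own
# endpoint and grant tokens through the out-of-band management layer.
import requests

STORAGE_GATEWAY = "https://storage.example-facility.gov"

def read_dataset(token: str, path: str) -> bytes:
    """Read a file using only the job-scoped bearer token; no client-side
    UID/GID or network location is trusted."""
    resp = requests.get(
        f"{STORAGE_GATEWAY}/v1/files{path}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()  # a compromised node without a token gets 401
    return resp.content
```

Because the token is scoped to the job’s directories and expires with the job, even a fully compromised compute node can reach only the data the management layer explicitly granted.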
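Item 2 can be pictured as the management layer compiling each job’s node list into OpenFlow-style match/action rules. The rule schema below is illustrative only; a real deployment would emit equivalent rules through whatever controller API the interconnect exposes.

```python
# Sketch: build per-job isolation rules. The dict schema mimics the shape
# of OpenFlow match/action entries but is not a real controller API.
from itertools import permutations

def job_flow_rules(job_id: int, node_macs: list[str]) -> list[dict]:
    """Permit traffic only between nodes belonging to the same job."""
    rules = [
        {"cookie": job_id,
         "match": {"dl_src": src, "dl_dst": dst},
         "actions": ["forward"]}
        for src, dst in permutations(node_macs, 2)
    ]
    # Catch-all: a confused MPI process (or a compromised node) sending
    # to a destination outside the job matches nothing above, so its
    # packets are simply dropped.
    rules.append({"cookie": job_id, "match": {}, "actions": ["drop"]})
    return rules

for rule in job_flow_rules(42, ["02:00:00:00:00:01", "02:00:00:00:00:02"]):
    print(rule)
```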
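Item 3’s key-purging trick is easy to illustrate. The sketch below uses the Python cryptography package to stand in for a hardware memory-encryption engine: everything written to memory is encrypted under an ephemeral per-job key, so discarding the key is equivalent to wiping the memory.

```python
# Cryptographic erase in miniature, using the 'cryptography' package as a
# stand-in for a CPU or NVRAM memory-encryption engine.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # ephemeral key, one per job
nonce = os.urandom(12)
resident_data = AESGCM(key).encrypt(nonce, b"job A's data in NVRAM", None)

# End of job A: purge the key instead of scrubbing every byte of memory.
del key

# The next job can still read 'resident_data', but without the key the
# bytes are computationally unrecoverable, so the memory is effectively
# clean.
```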

These three foundational architectural improvements must be tied together by an improved external system management layer. That layer would “wire up” the software-defined network for the user’s job, hand out storage system authentication tokens, and push a customized operating system or software stack onto the bare-metal provisioned hardware. Modern cloud-based data centers and their software communities have engineered a wide range of technologies to fluidly manage and deploy platforms and applications. The concepts and technologies in projects such as OpenStack, Kubernetes, Mesos, and Docker Swarm can be leveraged for extreme-scale computing without hindering performance. In fact, experimental testbeds such as the Chameleon cluster at the University of Chicago and the Texas Advanced Computing Center have already put some of these concepts into practice and would be ideal locations to test and develop a prototype of Fluid HPC.
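A minimal sketch can show how these pieces compose into a job lifecycle. Every function below is a hypothetical stub standing in for a facility-specific service; the substance is the ordering, and the fact that wiring the network, granting tokens, provisioning the OS, and clearing state all happen out-of-band, around the user’s job.

```python
from dataclasses import dataclass

@dataclass
class Job:
    id: int
    user: str
    node_count: int
    os_image: str   # user-chosen software stack, netbooted onto bare metal
    datasets: list  # storage areas the job is allowed to touch

# Stubs for facility-specific services (hypothetical).
def allocate_bare_metal(n):    return [f"node{i:04d}" for i in range(n)]
def configure_sdn(jid, nodes): print(f"job {jid}: isolate {len(nodes)} nodes")
def mint_token(user, ds):      return f"token({user}:{','.join(ds)})"
def netboot(nodes, image):     print(f"boot {image} on {len(nodes)} nodes")
def run_app(nodes, token):     print(f"launch app with STORAGE_TOKEN={token}")
def purge_memory_keys(nodes):  print("cryptographic erase of RAM/NVRAM")
def revoke(jid):               print(f"job {jid}: revoke token, remove rules")

def job_lifecycle(job: Job):
    nodes = allocate_bare_metal(job.node_count)  # whole nodes, no sharing
    configure_sdn(job.id, nodes)                 # item 2: traffic isolation
    token = mint_token(job.user, job.datasets)   # item 1: scoped data access
    netboot(nodes, job.os_image)                 # per-job software stack
    run_app(nodes, token)
    purge_memory_keys(nodes)                     # item 3: clear node state
    revoke(job.id)                               # tokens/rules die with the job

job_lifecycle(Job(42, "alice", 4, "alice-linux.img", ["/project/climate"]))
```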

These architectural changes make HPC platforms programmable again. The software-defined everything movement is fundamentally about programmable infrastructure. Retooling our systems to enable Fluid HPC with what is essentially a collection of previously discovered concepts, rebuilt with today’s technology, will make our supercomputers programmable in new ways and have a dramatic impact on HPC software development.

  1. Meltdown and Spectre would cause no performance degradation on Fluid HPC systems. In Fluid HPC, compute nodes are managed as embedded systems. Nodes are given completely to users, exactly as many hero programmers have been requesting for years. The security perimeter around an embedded system leverages different cybersecurity techniques. The CPU flaws that gave us Meltdown and Spectre can be isolated by the surrounding control system, rather than by adding performance-squandering patches to the node. Overall cybersecurity will improve by discarding the weak protections in compute nodes and building security into the infrastructure instead.
  2. Extreme-scale platforms would immediately become the world’s largest software testbeds. Currently, testing new memory management techniques or advanced data and analysis services is nearly impossible on today’s large DOE platforms. Without the advanced controls and out-of-band management provided by Fluid HPC, system operators have no practical method to manage experimental software on production systems. Furthermore, without token-based authentication to storage systems and careful network management to guard against accidentally or maliciously malformed network data, new low-level components can cause system instability. By addressing these issues with Fluid HPC, the world’s largest platforms could immediately be used to test and develop novel computer science research and completely new software stacks on a per-job basis.
  3. Extreme-scale software development would be easier and faster. For the same reason that the broader software development world is clamoring to use container technologies such as Docker to make software easier to write and deploy, giving HPC code developers Fluid HPC systems would be a disruptive improvement to software development. Coders could quickly test-deploy any change to the software stack on a per-job basis. They could even use machine learning to automatically explore and tune software stacks and parameters. They could ship those software stack modifications across the ocean in an instant, to be tried by collaborators running code on other Fluid HPC systems. Easy performance regression testing would be possible. The ECP community could package software simply. We can even imagine running Amazon-style lambda functions on HPC infrastructure. In short, the HPC community would develop software just as the rest of the world does.
  4. The HPC community could easily develop and deploy new experimental data and analysis services. Deploying an experimental data service or file system is extremely difficult today. There are no common, practical methods for developers to submit a job to a set of file servers with attached storage in order to create a new parallel I/O system, and then give compute jobs permission to connect and use the service. Likewise, HPC operators cannot easily test-deploy new versions of storage services against particular user applications. With the Fluid HPC model, however, a user could instantly create a memcached-based storage service, MongoDB, or Spark cluster on a few thousand compute nodes. Fluid HPC would make the infrastructure programmable; the impediments users now face deploying big data applications on big iron would be eliminated. (A sketch of such a per-job request follows this list.)
  5. Fluid HPC would enable novel, improved HPC architectures. With intelligent and programmable system management layers, modern authentication, software-defined networks, and dynamic software stacks provided by the basic platform, new types of accelerators, from neuromorphic chips to FPGAs, could be quickly added to Fluid HPC platforms. These new devices could be integrated as disaggregated network-attached resources or attached directly to CPUs, without needing to support multiuser and kernel protections. A neuromorphic accelerator handed to exactly one job at a time, for example, needs neither memory protection nor a multiuser interface. Furthermore, the low-level software stack could jettison the unneeded protection layers, permission checks, and security policies in the node operating system.
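To make items 3 and 4 concrete, a per-job request on a Fluid HPC system might bundle a user-built software stack with the services it publishes or consumes. The field names below are entirely hypothetical; the sketch simply shows how standing up a transient data service and connecting an application to it could become two ordinary job submissions.

```python
# Hypothetical job-request schema (not a real scheduler API): one job
# stands up a transient MongoDB service on compute nodes, a second job
# is granted a token and network routes to reach it.
data_service_job = {
    "name": "mongo-scratch",
    "nodes": 2048,
    "os_image": "mongodb-cluster.img",          # user-built, netbooted stack
    "publish": {"service": "mongo-scratch",     # registered with the
                "grant_to": ["sim-analysis"]},  # management layer
}

analysis_job = {
    "name": "sim-analysis",
    "nodes": 8192,
    "os_image": "mpi-analysis.img",
    "connect": ["mongo-scratch"],  # management layer injects the service
}                                  # token and the matching SDN routes
```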

It is time for the HPC community to redesign how we manage and deploy software and operate extreme-scale platforms. Computer science concepts are often rediscovered or modernized years after being initially prototyped. Many classic concepts can be recombined and improved with technologies already deployed in the world’s largest data centers to enable Fluid HPC. In exchange, users would receive improved flexibility and faster software development—a supercomputer that not only runs programs but is programmable. Users would have choices and could adapt their code to any software stack or big data service that meets their needs. System operators would be able to improve security, isolation, and the rollout of new software components. Fluid HPC would enable the convergence of HPC and big data infrastructures and radically improve the environments for HPC software development. Furthermore, if Moore’s law is indeed slowing and a technology to replace CMOS is not ready, the extreme flexibility of Fluid HPC would speed the integration of novel architectures while also improving cybersecurity.

It’s hard to thank Meltdown and Spectre for kicking the HPC community into action, but we should nevertheless take the opportunity to aggressively pursue Fluid HPC and reshape our software tools and management strategies.

Acknowledgments: I thank Micah Beck, Andrew Chien, Ian Foster, Bill Gropp, Kamil Iskra, Kate Keahey, Arthur Barney Maccabe, Marc Snir, Swann Perarnau, Dan Reed, and Rob Ross for providing feedback and brainstorming on this topic.

About the Author

Pete Beckman

Pete Beckman is the co-director of the Northwestern University / Argonne Institute for Science and Engineering and designs, builds, and deploys software and hardware for advanced computing systems. When Pete was the director of the Argonne Leadership Computing Facility, he led the team that deployed the world’s largest supercomputer for open science research. He has also designed and built massive distributed computing systems: as chief architect for the TeraGrid, he oversaw the team that built the world’s most powerful Grid computing system for linking production HPC centers for the National Science Foundation. He coordinates the collaborative research activities in extreme-scale computing between the US Department of Energy (DOE) and Japan’s Ministry of Education, Culture, Sports, Science and Technology, and he leads the operating system and run-time software research project for Argo, a DOE Exascale Computing Project effort. As founder and leader of the Waggle project for smart sensors and edge computing, he is designing the hardware platform and software architecture used by the Chicago Array of Things project to deploy hundreds of sensors in cities including Chicago, Portland, Seattle, Syracuse, and Detroit. Dr. Beckman has a Ph.D. in computer science from Indiana University (1993) and a BA in Computer Science, Physics, and Math from Anderson University (1985).
