Time’s up: nearly everyone agrees it is time to get serious about security safeguards for high-performance computing, an area largely ignored in the need for speed. A working group at the National Institute of Standards and Technology (NIST) last month published a high-performance computing security model that serves as a blueprint for operators to protect supercomputers from hacks and malicious actors.
Security has played second fiddle to horsepower in HPC systems because implementing security layers can slow down supercomputers, and operators typically want to squeeze maximum performance out of their systems. HPC users have also complained that vendors do not prioritize security, being more interested in meeting the performance benchmarks stated in contracts.
The private and public sectors joined hands to create the HPC security blueprint, which covers hardware, software, storage and networking. “HPC is a large-scale, complex system with strict performance requirements. Security tools that are effective for individual devices may not work well in an HPC environment,” the document’s authors* stated.
The paper lays out a bare truth: performance is paramount in HPC, and operators will not adopt security measures if they impede system performance.
HPC systems operate differently than conventional server installations. Installing a forensic tool to preserve a hard drive may make sense on a PC or server, but not on high-performance computers, the document states. Similarly, installing antivirus and scanning every incoming file may make sense on PCs, but not on high-performance computers.
The document defines the HPC computing model and provides recommendations on how to secure systems. It also explains why HPC systems need security safeguards: they may be vulnerable because the unique hardware and software required for scientific experiments may not be as well maintained as traditional computing environments.
“HPC can store large amounts of sensitive research data, personally identifiable information, and intellectual property that need to be safeguarded,” the document says.
The reference model has been adapted from security techniques used at MIT’s Lincoln Laboratory, which is a Department of Defense funded center. The model breaks HPC systems into four functional zones that can be secured separately. One zone is system access, the other covers CPUs and GPUs, the third covers storage, and the fourth covers software stack and system management tools.
Each of these zones has unique security requirements and needs to be secured separately. While the zones are not functionally isolated, security controls are tailored to the unique needs of each zone rather than applied across all nodes systemwide.
The “access zone” covers outside users logging into the system, including authenticating users and authorizing their access. Beyond sanitizing connections, the zone includes shell- or web-based connections to access services and data transfers into the systems.
“The nodes and their software stacks in this zone are susceptible to external attacks, such as denial of service attacks, perimeter network scanning and sniffing, authentication attacks, user session hijacking, and machine-in-the-middle attacks,” the document states.
HPC operators, such as the University of Texas at Austin, use multifactor authentication to verify users. Attendees at a security workshop at the SC22 trade show last year said that while two-factor authentication is a start, more can be done to protect the access zone.
The “management zone” covers the software used to operate the system, including the provisioning, scheduling, virtualization, configuration and management of tasks.
“Only administrators with privileged access authorization are allowed to log into the management zone, where a privileged administrator logs into the access zone first and then logs into the management zone. A malicious user may attempt to log into the management zone,” the document said.
MIT has protected the management zone by getting rid of direct root access, which gives administrators unfettered access to system resources. Instead, system administrators obtain root privileges through the “sudo” shell command, which maintains an audit trail of administrator activities.
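The document does not publish MIT’s actual configuration, but the approach it describes can be sketched with standard sudoers options; the group name “hpcadmin” and log paths below are illustrative assumptions, not values from the report:

```shell
# /etc/sudoers.d/hpc-admins -- illustrative sketch, not MIT's real config.
# Edit only with `visudo -f` to catch syntax errors before they lock you out.

# Record every command run via sudo, plus full session I/O, for auditing.
Defaults    logfile=/var/log/sudo.log          # one-line command log (assumed path)
Defaults    log_input, log_output              # capture session keystrokes/output
Defaults    iolog_dir=/var/log/sudo-io/%{user} # per-admin session recordings

# Members of the (hypothetical) hpcadmin group get root privileges,
# but only through sudo -- no direct root login is needed.
%hpcadmin   ALL=(root) ALL
```

With a policy like this, an administrator runs commands as `sudo <command>` under their own account, and each invocation is attributable to that account in the logs, rather than to a shared, anonymous root login.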
The access and management zones connect to the two hardware zones, where the computing is carried out.
The “high-performance computing zone” includes the compute nodes that run parallel computations, and the “data storage zone” includes the parallel file systems such as GPFS and Lustre-based PFS that store petabytes or exabytes of data, which are accessed regularly for computations.
“Protecting the confidentiality and integrity of user data is essential for the data storage zone. Data integrity can be compromised by malicious data deletion, corruption, pollution, or false data injection so gaining unauthorized privileged access is a major threat,” the document noted.
The high-performance computing zone could be vulnerable to side-channel attacks or firmware exploits, which have affected chips from Intel and AMD lately. Such attacks allow hackers to steal critical information and make changes in the boot layer that grant persistent access to supercomputers.
An annual security report published by Intel last month revealed that it had issued alerts for 30 BIOS and 21 CPU vulnerabilities. The exploits may also harm system performance, the NIST document stated.
The draft document is open for comments through April 7. It was published ahead of the 3rd High-Performance Computing Security Workshop in Rockville, Maryland, on March 15th and 16th, where further discussions on the topic will take place.
* Authors: Yang Guo (NIST), Ramaswamy Chandramouli (NIST), Lowell Wofford (Amazon.com), Rickey Gregg (HPCMP), Gary Key (HPCMP), Antwan Clark (Laboratory for Physical Sciences), Catherine Hinton (Los Alamos National Laboratory), Andrew Prout (MIT Lincoln Laboratory), Albert Reuther (MIT Lincoln Laboratory), Ryan Adamson (Oak Ridge National Laboratory), Aron Warren (Sandia National Laboratories), Purushotham Bangalore (University of Alabama), Erik Deumens (University of Florida), Csilla Farkas (University of South Carolina)