Researchers at Oak Ridge National Laboratory (ORNL) and Intel Corporation have wrapped up a three-year project aimed at giving users of Lustre, the Department of Energy's preferred parallel file system, more flexibility. And the results are impressive.
Lustre is the preferred file system for the leadership scientific computing community for a simple reason: it has an unprecedented ability to store and retrieve the large-scale data inherent in complex scientific simulations such as those run at ORNL’s Leadership Computing Facility, home to Titan, the nation’s most powerful system for open science.
In the era of big data, however, there is no such thing as too much flexibility. To bridge the gap between high-performance computing and today's massive datasets, the ORNL/Intel team modified the underlying Lustre code so that DOE's trademark file system could better accommodate the data analytics workloads that play an increasingly important role in scientific discovery.
Their solution: Progressive File Layout (PFL), a novel storage scheme that relieves Lustre users of the responsibility of striping, the method by which a file's data is divided and stored across multiple servers. PFL gives users more opportunities to take advantage of Lustre's highly scalable input/output (I/O) performance, especially for big data workloads, and evolves Lustre to more easily accommodate large-scale datasets.
Specifically, PFL allows a file's striping to change dynamically with its size: as the file grows past successive size thresholds, a new striping scheme takes effect for the data beyond each threshold. "The scheme manages capacity based on layout and removes a significant responsibility from users," said OLCF File Systems Team Lead Sarp Oral.
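In practice, a progressive layout is expressed as a series of size-bounded components, each with its own stripe count. A minimal sketch using the `lfs setstripe` composite-layout syntax available since Lustre 2.10 (the path and the specific thresholds here are illustrative, not from the project):

```shell
# Create a file with a progressive layout:
#   first 4 MiB          -> 1 stripe (small files stay on one server)
#   4 MiB to 256 MiB     -> 4 stripes
#   beyond 256 MiB       -> stripe across all available OSTs (-c -1)
lfs setstripe -E 4M -c 1 -E 256M -c 4 -E -1 -c -1 /lustre/scratch/output.dat

# Inspect the composite layout that results
lfs getstripe /lustre/scratch/output.dat
```

Small files never pay the overhead of wide striping, while large files automatically gain the aggregate bandwidth of many storage targets, all without the user choosing a layout in advance.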
PFL is particularly relevant in the age of big data, as analytics and machine learning algorithms are increasingly read-heavy; i.e., they repeatedly read the same datasets, a process that creates "hot spots" in the form of requests repeatedly hitting the same resource. PFL paves the way for future Lustre enhancements that spread these requests across replicas of the data, a capability known as "file-level replication," thereby eliminating hot spots and improving overall application I/O performance.
It's a powerful modification: by giving users greater flexibility in how they lay out their files, PFL lets them take better advantage of Lustre's unique capabilities without becoming parallel file system experts. And while Lustre has historically relied on the underlying storage hardware for reliability, PFL also enables future development aimed at providing reliability within the file system itself, allowing more scalable, efficient Lustre systems to be built in less time and at lower cost.
“While PFL is valuable on its own, it also enables future technology development by allowing Lustre to become an enterprise file system, expanding its use cases and marketability,” said Oral.
Large-scale testing of PFL was performed on Titan in June and was “very successful, with the new code showing significant improvement in file I/O performance,” said Neena Imam, deputy director of Collaborations for ORNL’s Computing and Computational Sciences Directorate. She added that despite its relative youth, “a stable PFL version is now available as of Lustre version 2.10 and has received significant attention in the Lustre community, which will benefit greatly from this addition.”
PFL is the result of a three-year effort in which ORNL co-defined the architecture with Intel, oversaw the development efforts, and performed extensive testing.