Lustre Vendors Consider File System’s Future

By Nicole Hemsoth

November 22, 2011

After a near-death experience at the hands of Oracle, the Lustre file system’s place in high performance computing now seems assured. Vendors like Whamcloud, Xyratex, Terascala, Cray, DataDirect Networks, and others have created a critical mass of stakeholders, joined together under OpenSFS, a non-profit organization committed to maintaining Lustre as a viable, open technology for the entire HPC community.

So what’s next? We contacted three leading Lustre vendors about what may lie ahead for the open source file system, asking Xyratex Storage Software Director Peter Bojanic, Whamcloud CEO Brent Gorda, and Terascala Marketing and Product Management VP Rick Friedman for their perspectives on what Lustre needs for broader commercial use as well as how it can make its way into the world of exascale supercomputing.

HPCwire: What do you think is needed to make Lustre usable for commercial HPC customers rather than just something accessible to the big supercomputing labs?

Peter Bojanic: Lustre is renowned for its scalability and performance, but it is also known for the complexity of its cluster design, deployment, and management. There has been an impressive cross section of commercial HPC deployments of Lustre, ranging from oil companies to motion picture special effects companies, but these early adopters faced steep learning curves and relied on Linux experts on staff.

For Lustre to succeed in the broader commercial HPC market three things are required:

1. Engineered solution configurations – A reliable Lustre system that achieves maximum performance from the underlying storage infrastructure requires engineering from the hardware all the way up through the software stack. Commercial HPC customers should seek proven solution configurations with a correspondingly tuned software stack.

2. Reliable deployment methodology – From configuring the hardware to installing the software and formatting the file system, commercial HPC customers require a solution that is completely ready to run and delivers expected performance.

3. Management tools – Lustre needs to approach the ease of management of enterprise commercial storage systems for it to succeed in commercial HPC environments.

Brent Gorda: Lustre is already used widely in commercial HPC situations as well as many non-HPC situations. As an ex-big supercomputing lab guy, I am pleasantly surprised by the frequency of commercial contact we get about Lustre. The fact is that Lustre is just extremely competitive in a wide variety of commercial environments based on performance, efficiency, stability and, of course, price. Lustre is open source, which results in a large number of technically astute users with high-end I/O needs.
 
Broader adoption by commercial HPC is predicated on a thriving ecosystem around Lustre. At Whamcloud, we have just announced a Lustre product called Chroma that will allow and encourage choice in providers and competition in product. This is key. A burgeoning ecosystem with lower barriers to entry — making Lustre accessible to users that have found Lustre “too hard” up to this point — will spread the benefits of Lustre into, and beyond, commercial HPC quickly.
 
Ease of use is a big factor. The emergence of Lustre appliances with GUI management interfaces like Chroma is a welcome thing. But it takes more than just slapping a GUI on top of Lustre. A deep understanding of the technology, its modes of failure, and its performance degradation gives Whamcloud a leg up in providing a tool that meets those needs.

Features are also important. As Whamcloud’s current Lustre development contracts are fulfilled, enhancements such as multiple metadata servers and Hierarchical Storage Management (HSM) further address the needs of the enterprise market and increase the size of the community, benefiting us all.
 
All of these activities really underscore the fact that confidence and stability have returned to the community. No longer are we debating whether Lustre will survive. We’re debating how quickly it will spread into new markets.
 
Rick Friedman: Lustre is deployed in commercial environments today and its usage is growing. Terascala has customers in the financial services, engineering services and life sciences industries who are successfully using Lustre to provide the throughput they need to analyze their data.
 
While we see Lustre continuing to make significant inroads into commercial environments, improving ease of use, reliability and data integrity will accelerate its adoption. In the commercial sector, Lustre has a reputation for complex setup and maintenance within production environments. The commercial users we talk with are excited about the throughput capabilities of the Lustre file system, but are concerned about the ongoing challenges of maintaining the system. At Terascala, we’ve been successfully working with customers to address those issues and customers clearly see the benefits.

As Lustre evolves, tools that improve management, installation, validation and scalability will accelerate growth in the commercial side of the market.

HPCwire: Can Lustre be made suitable for customers who are even less HPC savvy, for example, the so-called “missing middle”?

Bojanic: Yes, improved integration of Lustre in the form of easy-to-deploy systems with robust management tools will help Lustre adoption in the ‘missing middle.’ The origins of Lustre were focused on solving problems of scale and performance. In the latter part of the past decade Sun invested primarily in quality, substantially improving Lustre’s reliability and robustness. The opportunity going forward is to continue to harness that power and make it accessible to a broader range of HPC environments and applications.

Gorda: Yes. Whamcloud is on the front lines here, actively working with our hardware partners to create Lustre appliances purpose-built to address this class of customer. These Lustre appliances provide a graphical management interface and empower typical Linux administrators to run the file system. At SC11 this year we publicly announced Chroma, the technology necessary to enable our partners to provide appliances.

Chroma has deep integration with Lustre and is able to direct the administrator’s attention to issues and help resolve them. Compared to the current state of administration, Chroma significantly lowers the barrier to entry and directly addresses the “Lustre is too hard” issue.

The second major issue, echoing my answer from above, is the existence of a thriving ecosystem around Lustre. A burgeoning ecosystem with lower barriers to entry and solid hardware and support options will spread the benefits of Lustre into commercial HPC quickly.

Companies looking to scale their storage needs really should take a second look at Lustre for their next step.

Friedman: Terascala is actively and successfully working with “missing middle” customers. But while Lustre can be an effective solution for the “missing middle,” developing solutions that target this segment, with the right tools and platform price points, will drive interest even higher.

Today, organizations in this market segment struggle with compute environments that are too large for existing non-parallel storage solutions and find that they can’t get the performance required to maximize server use. At Terascala, we have found that delivering a pre-configured, fully supported and easily managed appliance addresses the needs of this market. Most non-research customers are looking for complete solutions, not “build your own”; they simply don’t have the time or desire to build the expertise needed to get a fully functioning Lustre environment up and running themselves.

As a file system, Lustre is already more than suitable for this segment of the market. However, organizations are seeking Lustre solutions, not just a file system. Simply put, providing complete, supported solutions will accelerate the acceptance of Lustre for the “missing middle.”

HPCwire: At the other end of the spectrum, what needs to be done to move Lustre into the exascale realm?

Bojanic: One of the current limits for Lustre file systems is the single metadata server. There are opportunities, and plans, both for improving metadata performance on a single server, and for horizontally scaling to multiple metadata servers. In fact, the most significant new feature of Lustre 2.0 was an “under the hood” change in the metadata infrastructure to allow such horizontal scaling.
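To make the idea of horizontal metadata scaling concrete, here is a minimal sketch of one generic approach, offered as an illustration rather than Lustre’s actual implementation: directory entries are spread across several metadata servers by hashing the file name, so no single server owns the entire namespace. The server count and hash function are illustrative assumptions.

    #include <stdio.h>

    #define NUM_MDS 4   /* hypothetical number of metadata servers */

    /* FNV-1a string hash; any reasonably uniform hash would do. */
    static unsigned long fnv1a(const char *s)
    {
        unsigned long h = 2166136261UL;
        while (*s) {
            h ^= (unsigned char)*s++;
            h *= 16777619UL;
        }
        return h;
    }

    /* Pick the metadata server responsible for a directory entry.
     * Spreading entries this way lets metadata throughput grow with
     * the number of servers instead of bottlenecking on one. */
    static int mds_for_entry(const char *name)
    {
        return (int)(fnv1a(name) % NUM_MDS);
    }

    int main(void)
    {
        const char *files[] = { "input.dat", "output.dat", "checkpoint.0001" };
        for (int i = 0; i < 3; i++)
            printf("%-16s -> MDS %d\n", files[i], mds_for_entry(files[i]));
        return 0;
    }

In practice a real implementation must also handle rebalancing and cross-server operations such as rename, which is where much of the engineering effort lies.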

With increasing scale and component count come increasing failure rates. The ability to verify that application data correctly and safely lands on disk platters therefore assumes greater importance. Current end-to-end data integrity projects within Lustre aim to provide the reliability guarantees needed for the future.
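The end-to-end integrity idea can be sketched in a few lines, again as a generic illustration rather than Lustre’s implementation: the writer computes a checksum over a block before it enters the I/O stack, and the reader recomputes and compares it when the block returns, so corruption anywhere along the path is detected. zlib’s CRC32 stands in here for whatever integrity code a real system would use.

    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>   /* link with -lz */

    /* Compute a CRC32 guard over a data block. In a real end-to-end
     * scheme this value would travel with the block from the client,
     * across the network, down to the disk platters, and back. */
    static uLong block_checksum(const unsigned char *buf, size_t len)
    {
        uLong crc = crc32(0L, Z_NULL, 0);
        return crc32(crc, buf, (uInt)len);
    }

    int main(void)
    {
        unsigned char block[4096];
        memset(block, 0xAB, sizeof block);

        /* Writer side: checksum before the block enters the I/O stack. */
        uLong sent = block_checksum(block, sizeof block);

        /* ... the block traverses network, servers, and disks ... */

        /* Reader side: recompute and compare after the block returns.
         * A mismatch means corruption somewhere along the path. */
        uLong received = block_checksum(block, sizeof block);
        if (sent != received)
            fprintf(stderr, "end-to-end integrity check failed\n");
        else
            printf("block verified, checksum 0x%08lx\n", sent);
        return 0;
    }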

Several other initiatives are underway to improve Lustre scalability and performance, including support for larger I/O sizes, faster failover, continuous file system checking and repair, client QoS guarantees, and tiered storage, among many others. There’s no shortage of plans to help Lustre keep its title as the world’s scalability leader.

Gorda: As I wrote in a previous HPCwire article, “Why Lustre Is Set to Excel in Exascale,” Lustre is uniquely well suited for the exascale effort. File systems are a critical component of the modern supercomputing architectural model, and Lustre is open source, widely deployed, and has both a vibrant community and a wide range of committed developers available to contribute across government, academia, and enterprise.

Whamcloud is aiming to create a Lustre exascale “workbench,” effectively enabling interested academics around the world to experiment and contribute ideas for future file systems. By starting with proven, robust and mature technologies, it is possible to focus on the significant issues relating to exascale performance. What’s more, an open source solution already popular in the research community primes the research agenda to ensure the best talent is engaged and the best answers will emerge.

For those looking for more technical detail, it is clear that the POSIX interface will not be on the exascale path for the file system. In the Department of Energy Request for Proposal response, Whamcloud submitted an object container model we call the Distributed Application Object Store. We propose to expose the object store to the application schema, in the object-oriented sense, via lightweight, format-aware object layers.

What this means in simple terms is that an application that uses a data format such as HDF5 will access the file system via an HDF5 library interface. This provides scalable, direct access to the OSSs in your file system without the overhead and locking of the current POSIX layer. The resulting objects exist alongside your current POSIX files, since they live in the same namespace we are familiar with now. This preserves an obvious directory structure containing a variety of these special object storage items as developed by middleware interests.
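As a rough illustration of the application-side view described here (this uses the standard HDF5 C API; the file and dataset names are hypothetical, and DAOS itself is not shown), the application keeps calling familiar HDF5 routines, and it is the library beneath this interface that would be retargeted at an object store instead of POSIX files.

    #include <hdf5.h>   /* standard HDF5 C API; link with -lhdf5 */

    int main(void)
    {
        /* The application speaks HDF5; whether the library maps this
         * to POSIX files or to a distributed object store is hidden
         * below this interface. "results.h5" and "/temperature" are
         * made-up names for illustration. */
        hsize_t dims[1] = {1024};
        double data[1024];
        for (int i = 0; i < 1024; i++)
            data[i] = i * 0.5;

        hid_t file  = H5Fcreate("results.h5", H5F_ACC_TRUNC,
                                H5P_DEFAULT, H5P_DEFAULT);
        hid_t space = H5Screate_simple(1, dims, NULL);
        hid_t dset  = H5Dcreate2(file, "/temperature", H5T_NATIVE_DOUBLE,
                                 space, H5P_DEFAULT, H5P_DEFAULT,
                                 H5P_DEFAULT);

        H5Dwrite(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL,
                 H5P_DEFAULT, data);

        H5Dclose(dset);
        H5Sclose(space);
        H5Fclose(file);
        return 0;
    }

The design point is that middleware libraries, not applications, absorb the change: the same H5Dwrite call could stream objects directly to the object storage servers rather than funneling through a POSIX byte stream.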

Friedman: While there are details in Lustre’s functionality that might enable organizations to squeeze additional performance, the biggest issue remains management. An exascale parallel file system will have thousands of drives, tens to hundreds of servers and controllers, miles of networking infrastructure, and multiple pieces of software running across the whole environment. Tools that give visibility into the total solution, that allow users to have an overall view and quickly diagnose issues, will enable Lustre to be successful in an exascale production environment. In our experience, it’s easy to get something to run once, but getting it to run consistently and reliably over time is the real challenge.
