Lustre Vendors Consider File System’s Future

By Nicole Hemsoth

November 22, 2011

After a near-death experience at the hands of Oracle, the Lustre file system’s place in high performance computing now seems assured. Vendors like Whamcloud, Xyratex, Terascala, Cray, DataDirect Networks, and others have created a critical mass of stakeholders, joined together under OpenSFS, a non-profit organization committed to maintaining Lustre as a viable, open technology for the entire HPC community.

So what’s next? We contacted three leading Lustre vendors about what may lie ahead for the open source file system, asking Xyratex Storage Software Director Peter Bojanic, Whamcloud CEO Brent Gorda, and Terascala Marketing and Product Management VP Rick Friedman for their perspectives on what Lustre needs for broader commercial use, as well as how it can make its way into the world of exascale supercomputing.

HPCwire: What do you think is needed to make Lustre usable for commercial HPC customers rather than just something accessible to the big supercomputing labs?

Peter Bojanic: Lustre is renowned for its scalability and performance, but it is also known for the complexity of its cluster design, deployment, and management. There has been an impressive cross section of commercial HPC deployments of Lustre, ranging from oil companies to motion picture special effects houses, but these early adopters faced steep learning curves and relied on Linux experts on staff.

For Lustre to succeed in the broader commercial HPC market three things are required:

1. Engineered solution configurations – A reliable Lustre system that achieves maximum performance from the underlying storage infrastructure requires engineering from the hardware all the way up through the software stack. Commercial HPC customers should seek proven solution configurations with a correspondingly tuned software stack.

2. Reliable deployment methodology – From configuring the hardware to installing the software and formatting the file system, commercial HPC customers require a solution that is completely ready to run and delivers expected performance.

3. Management tools – Lustre needs to approach the ease of management of enterprise commercial storage systems if it is to succeed in commercial HPC environments.

Brent Gorda: Lustre is already used widely in commercial HPC situations as well as many non-HPC situations. As an ex-big supercomputing lab guy, I am pleasantly surprised by the frequency of commercial contact we get about Lustre. The fact is that Lustre is just extremely competitive in a wide variety of commercial environments based on performance, efficiency, stability and, of course, price. Lustre is open source, which results in a large number of technically astute users with high-end I/O needs.
 
Broader adoption by commercial HPC is predicated on a thriving ecosystem around Lustre. At Whamcloud, we have just announced a Lustre product called Chroma that will allow and encourage choice in providers and competition in product. This is key. A burgeoning ecosystem with lower barriers to entry — making Lustre accessible to users that have found Lustre “too hard” up to this point — will spread the benefits of Lustre into, and beyond, commercial HPC quickly.
 
Ease of use is a big factor. The emergence of Lustre appliances with GUI management interfaces like Chroma is a welcome thing. But it takes more than just slapping a GUI on top of Lustre. A deep understanding of the technology and its modes of failure and performance degradation gives Whamcloud a leg up in providing a tool that meets those needs.

Features are also important. As Whamcloud’s current Lustre development contracts are fulfilled, enhancements such as multiple metadata servers and Hierarchical Storage Management (HSM) will further address the needs of the enterprise market and increase the size of the community, benefiting us all.
 
All of these activities really underscore the fact that confidence and stability have returned to the community. No longer are we debating whether Lustre will survive. We’re debating how quickly it will spread into new markets.
 
Rick Friedman: Lustre is deployed in commercial environments today and its usage is growing. Terascala has customers in the financial services, engineering services and life sciences industries who are successfully using Lustre to provide the throughput they need to analyze their data.
 
While we see Lustre continuing to make significant inroads into commercial environments, improving ease of use, reliability and data integrity will accelerate its adoption. In the commercial sector, Lustre has a reputation for complex setup and maintenance within production environments. The commercial users we talk with are excited about the throughput capabilities of the Lustre file system, but are concerned about the ongoing challenges of maintaining the system. At Terascala, we’ve been successfully working with customers to address those issues, and customers clearly see the benefits.

As Lustre evolves, tools that improve management, installation, validation and scalability will accelerate growth in the commercial side of the market.

HPCwire: Can Lustre be made suitable for customers who are even less HPC savvy, for example, the so-called “missing middle”?

Bojanic: Yes, improved integration of Lustre in the form of easy-to-deploy systems with robust management tools will help Lustre adoption in the ‘missing middle.’ The origins of Lustre were focused on solving problems of scale and performance. In the latter part of the past decade Sun invested primarily in quality, substantially improving Lustre’s reliability and robustness. The opportunity going forward is to continue to harness that power and make it accessible to a broader range of HPC environments and applications.

Gorda: Yes. Whamcloud is on the front lines here, actively working with our hardware partners to create Lustre appliances purpose-built to address this class of customer. These Lustre appliances provide a graphical management interface and empower typical Linux administrators to run the file system. At SC11 this year we publicly announced Chroma, the technology necessary to enable our partners to provide appliances.

Chroma integrates deeply with Lustre and is able to direct attention to issues and solve them for the administrator. Compared to the current state of administration, Chroma significantly lowers the barrier to entry and directly addresses the “Lustre is too hard” issue.

The second major issue, echoing my answer from above, is the existence of a thriving ecosystem around Lustre. A burgeoning ecosystem with lower barriers to entry and solid hardware and support options will spread the benefits of Lustre into commercial HPC quickly.

Companies looking to scale their storage needs really should take a second look at Lustre for their next step.

Friedman: Terascala is actively and successfully working with “missing middle” customers. But while Lustre can be an effective solution for the “missing middle,” developing solutions that target the segment, with the right tools and platform price points, will drive interest even higher.

Today, organizations in this market segment struggle with compute environments that are too large for existing non-parallel storage solutions and find that they can’t get the performance required to maximize server use. At Terascala, we have found that delivering a pre-configured, fully supported and easily managed appliance addresses the needs of this market. Most non-research customers are looking for complete solutions, not “build your own”; they simply don’t have the time or desire to build the expertise needed to get a fully functioning Lustre environment up and running themselves.

As a file system, Lustre is already more than suitable for this segment of the market. However, organizations are seeking Lustre solutions, not just a file system. Simply put, providing complete, supported solutions will accelerate the acceptance of Lustre for the “missing middle.”

HPCwire: At the other end of the spectrum, what needs to be done to move Lustre into the exascale realm?

Bojanic: One of the current limits for Lustre file systems is the single metadata server. There are opportunities, and plans, both for improving metadata performance on a single server, and for horizontally scaling to multiple metadata servers. In fact, the most significant new feature of Lustre 2.0 was an “under the hood” change in the metadata infrastructure to allow such horizontal scaling.
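
To make horizontal metadata scaling concrete, here is a minimal Python sketch of the general idea, not Lustre’s actual distributed-metadata implementation: each directory entry is hashed to one of several metadata servers, so any client can locate an entry deterministically without a central coordinator. The server count and hashing scheme below are illustrative assumptions.

```python
import hashlib

NUM_MDS = 4  # hypothetical number of metadata servers

def mds_for_entry(parent_dir: str, name: str) -> int:
    """Pick a metadata server for a directory entry.

    Hashing the full entry path keeps placement deterministic,
    so every client computes the same answer independently.
    """
    key = f"{parent_dir}/{name}".encode()
    digest = hashlib.sha1(key).digest()
    return int.from_bytes(digest[:4], "big") % NUM_MDS

# Entries from the same directory can land on different servers,
# spreading metadata load horizontally.
for parent, name in [("/proj/climate", "run01"),
                     ("/proj/climate", "run02"),
                     ("/home/alice", "results.h5")]:
    print(f"{parent}/{name} -> MDS {mds_for_entry(parent, name)}")
```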

With increasing scale and component count also come increasing failure rates. The ability to verify that application data correctly and safely lands on disk platters therefore assumes greater importance. Current end-to-end data integrity projects within Lustre aim to provide the reliability guarantees needed for the future.
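
The end-to-end integrity idea can be sketched in a few lines, again purely for illustration rather than as Lustre’s actual protocol: the client attaches a checksum to each block before it leaves the application, and the storage side re-verifies that checksum before acknowledging the write, so corruption anywhere along the path is caught.

```python
import zlib

def client_prepare(block: bytes):
    """Checksum a data block on the client before transmission."""
    return block, zlib.crc32(block)

def server_store(block: bytes, expected_crc: int, disk: list) -> None:
    """Re-verify the checksum server-side before acknowledging the write."""
    if zlib.crc32(block) != expected_crc:
        raise IOError("checksum mismatch: corrupted in transit or on media")
    disk.append((block, expected_crc))  # keeping the checksum enables later scrubbing

disk = []
data, crc = client_prepare(b"application payload")
server_store(data, crc, disk)
```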

Several other initiatives are underway to improve Lustre scalability and performance, including, among many others, support for larger I/O sizes, faster failover, continuous file system checking and repair, client QoS guarantees, and tiered storage. There’s no shortage of plans to help Lustre keep its title as the world’s scalability leader.

Gorda: As I wrote in a previous HPCwire article, “Why Lustre Is Set to Excel in Exascale,” Lustre is uniquely well suited for the exascale effort. File systems are a critical component of the modern supercomputing architectural model, and Lustre is open source, widely deployed, and has both a vibrant community and a wide range of committed developers available to contribute in government, academia and enterprise.

Whamcloud is aiming to create a Lustre exascale “workbench,” effectively enabling interested academics around the world to experiment and contribute ideas for future file systems. By starting with proven, robust and mature technologies, it is possible to focus on the significant issues relating to exascale performance. What’s more, an open source solution already popular in the research community primes the research agenda to ensure the best talent is engaged and the best answers will emerge.

For those looking for more technical detail, it is clear that the POSIX interface will not be on the exascale path for the file system. In our response to the Department of Energy’s Request for Proposal, Whamcloud submitted an object container model we call the Distributed Application Object Store. We propose to expose the object store to the application schema, in the object-oriented sense, via lightweight, object-format-aware layers.

What this means in simple terms is that an application that uses a data format such as HDF5 will access the file system via an HDF5 library interface. This provides scalable and direct access to the OSSes (object storage servers) in your file system without the current overhead and locking of the POSIX layer. The resulting objects exist alongside your current POSIX files, since both live in the same namespace we are familiar with now. This provides a familiar directory structure containing a variety of these special object storage items, as developed by middleware providers.
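
A minimal sketch of the application-facing side of that model, using the standard h5py binding for HDF5 (the file and dataset names are hypothetical): the application speaks only the HDF5 library interface. Today h5py performs ordinary POSIX I/O underneath; in the object store design Gorda describes, the same calls would map datasets directly onto storage objects, bypassing POSIX overhead and locking.

```python
import h5py
import numpy as np

# Write: the application only sees HDF5 datasets and attributes,
# never raw POSIX file offsets.
with h5py.File("simulation.h5", "w") as f:
    grid = f.create_dataset("pressure", data=np.random.rand(128, 128))
    grid.attrs["units"] = "Pa"  # the schema travels with the data

# Read back through the same format-aware interface.
with h5py.File("simulation.h5", "r") as f:
    print(f["pressure"][0, :4])
```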

Friedman: While there are details in Lustre’s functionality that might enable organizations to squeeze additional performance, the biggest issue remains management. An exascale parallel file system will have thousands of drives, tens to hundreds of servers and controllers, miles of networking infrastructure, and multiple pieces of software running across the whole environment. Tools that give visibility into the total solution, that allow users to have an overall view and quickly diagnose issues, will enable Lustre to be successful in an exascale production environment. In our experience, it’s easy to get something to run once, but getting it to run consistently and reliably over time is the real challenge.
