This week the National Center for Supercomputing Applications (NCSA) confirmed that its anticipated 380-petabyte High Performance Storage System (HPSS) is up and running to support the data-laden needs of Blue Waters.
HPSS is the result of a long-term academic and vendor (IBM) collaboration to create a hierarchical storage management and services paradigm that addresses scalability limitations that some researchers and enterprise users face in terms of capacity, file size, data rates, total objects stored and sheer number of users.
While it’s often associated with supercomputing sites, HPSS is also aimed at addressing the needs of highly parallel systems and workstations in enterprise settings that wish to scale to the petabyte range (which it has proven) and beyond.
The center says its newly minted HPSS environment consists of multiple “automated tape libraries, dozens of high performance data movers, a large 40 GbE network, hundreds of high performance tape drives and about 100,000 tape cartridges.” They point to this as an expansion of their ability to digest large research data volumes while remaining scalable as future data needs grow, since it lets them keep active data closer to the compute.
Part of what made this a fit for the projects set to run on Blue Waters is the ability to move massive files between storage elements rapidly. NCSA says their implementation of the HPSS hierarchical file system software allows them to efficiently manage the access and storage of hundreds of petabytes while addressing the lifecycle of that information by moving inactive data to tape, where it rests until it’s needed again. While in theory this sounds rather simple, it has taken the teams who’ve developed it since 1992 to perfect it to the point of pushing petabytes.
The center’s pre-production acceptance testing revealed some notable successes. NCSA claims it demonstrated constant file ingest and retrieval performance for over 5 billion files within a single name space, independent of the number of files in the system.
They were also able to ingest 426 terabytes and retrieve 499 terabytes in one 24-hour period, yielding a rather impressive combined throughput of 38.5 terabytes per hour. Center officials claim that during its initial period the system maintained an average rate of 5.5 terabytes per hour.
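For readers who want to see where the 38.5 figure comes from, a quick back-of-the-envelope check (the variable names here are illustrative, not from NCSA) confirms that the reported combined ingest and retrieval volumes over the 24-hour test window work out to that hourly rate:

```python
# Sanity-check the reported 24-hour acceptance-test figures.
ingested_tb = 426   # terabytes written into HPSS over the test window
retrieved_tb = 499  # terabytes read back out over the same window
hours = 24

combined_tb_per_hour = (ingested_tb + retrieved_tb) / hours
print(round(combined_tb_per_hour, 1))  # → 38.5
```

Note this is an aggregate of both directions; the sustained one-way average cited for the initial production period (5.5 terabytes per hour) is a separate, much lower figure.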
Michelle Butler, NCSA’s senior technical program manager for storage and networking, recalls that during the more than 25 years the center has run archives and near-line systems “It took us 19 years to reach our first petabyte and an additional year to accumulate the second petabyte. For the Blue Waters system, we had our first petabyte in just two weeks.”
HPSS is a longstanding effort led by IBM and select Department of Energy labs (LBNL, LLNL, ORNL, Argonne and Sandia), with broader university and supercomputing center support. When the teams first set about looking for a highly scalable storage system, the impetus even then was addressing a proliferation of ever-mounting data volumes. Now, at a time when the challenges of “big data” are on everyone’s lips, the fruits of the HPSS Collaboration and resulting hierarchical storage management (HSM) archive efforts are readier than ever to pluck, especially as data management, storage and I/O challenges grow more pressing.
The team behind the collaborative HPSS effort says that now that it has proven itself in petabyte realms, moving into exaflop territory will require the ability to keep scaling storage in various dimensions by another factor of 1,000, especially to keep pace with the types of newer, real-time applications designed to run on massive systems. They note that they “believe the HPSS architecture and basic implementation, built around a scalable relational database management system (in this case IBM’s DB2) make it well suited to these challenges.”
The following map lends a sense of how widespread the sites are that have rolled out petabytes or more in a single HPSS file system.
According to Bill Kramer, the deputy project director on the Blue Waters project, this makes Blue Waters the “most data-focused, data-intensive system available to the U.S. science and engineering community.” The center notes that this addition makes it the world’s largest near-line data repository for open science.