Among the raft of announcements launched in tandem with SC last week, one stood out for being among the largest deployed Lustre installations, at 28 petabytes in total. San Diego-based Aeon Computing won the contract to provide two site-wide Lustre file systems to Los Alamos National Laboratory (LANL) in support of the lab’s national security mission. This is one of the monolithic storage systems that will support the Tri-Laboratory Commodity Technology System 1 (CTS-1), which Penguin Computing is providing (yes, the interconnect is still in a bake-off).
Aeon provided the lab with two 14 petabyte file systems based on Lustre and OpenZFS, one for each side of the security wall: facility-wide open research computing on one side and classified computing missions on the other. Each Lustre file system contains 40 Lustre OSS nodes, each capable of 4 gigabytes per second of sustained throughput, for a total of 160 gigabytes per second of parallel data access performance over single-rail FDR14 InfiniBand.
According to Aeon, “the file systems are powered by enterprise-grade technology, including LSI/Avago 12G SAS (serial attached SCSI), Mellanox FDR14 InfiniBand, HGST 12G Enterprise SAS disk drives, SanDisk 12G SAS SSDs, and Intel server technologies.”
Each file system employs 20 of Aeon’s Lustre Scalable Units, each comprising two Lustre OSS nodes and 120 six-terabyte 12G SAS disk drives running OpenZFS with raidz2 parity protection. Additional resiliency comes from multipath and high-availability failover connectivity, intended to eliminate single points of failure. The file systems plug into site-wide monitoring infrastructure, obviating the need for cumbersome or closed vendor APIs.
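For a rough sense of how those building blocks add up, here is a minimal back-of-the-envelope sketch in Python. The per-unit figures (20 scalable units, two OSS nodes and 120 six-terabyte drives per unit, roughly 4 gigabytes per second of sustained throughput per OSS) come from the article; the 10+2 raidz2 vdev width is purely an illustrative assumption, and real usable capacity also depends on ZFS overhead and the compression LANL cites later in this piece.

```python
# Back-of-the-envelope sizing for one of the two LANL file systems,
# using the building blocks described in the article. The 10+2 raidz2
# vdev width is an illustrative assumption, not a published detail.

SCALABLE_UNITS = 20        # Lustre Scalable Units per file system
OSS_PER_UNIT = 2           # Lustre OSS nodes per scalable unit
DRIVES_PER_UNIT = 120      # 12G SAS disk drives per scalable unit
DRIVE_TB = 6               # capacity per drive, terabytes
OSS_GBPS = 4               # sustained throughput per OSS, GB/s

VDEV_WIDTH = 12            # assumed raidz2 vdev: 10 data + 2 parity drives
PARITY_PER_VDEV = 2

oss_nodes = SCALABLE_UNITS * OSS_PER_UNIT                 # 40 OSS nodes
aggregate_gbps = oss_nodes * OSS_GBPS                     # 160 GB/s

raw_tb = SCALABLE_UNITS * DRIVES_PER_UNIT * DRIVE_TB      # 14,400 TB raw
vdevs = SCALABLE_UNITS * DRIVES_PER_UNIT // VDEV_WIDTH
data_tb = raw_tb - vdevs * PARITY_PER_VDEV * DRIVE_TB     # after raidz2 parity

print(f"OSS nodes: {oss_nodes}, aggregate bandwidth: {aggregate_gbps} GB/s")
print(f"Raw capacity: {raw_tb / 1000:.1f} PB, "
      f"post-parity (before ZFS overhead/compression): {data_tb / 1000:.1f} PB")
```

Under that assumed layout, the arithmetic gives 14.4 petabytes raw and roughly 12 petabytes after parity per file system; where the headline 14 petabyte figure falls between raw and usable depends on the actual vdev layout, ZFS overhead, and compression.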
Jeff Johnson, co-founder of Aeon Computing, said that Aeon embraces an open hardware approach that is “extremely high-performance, reliable, and absent of what I’d call the vendor proprietary obstruction-ware layer.” He’s referring to the tendency for big appliance vendors to “try to create some value-add by putting in some black box appliance layer that can impact ease-of-use.”
“Our system is completely open,” he went on. “It’s a very high-performance application, specifically designed to be easy to integrate and work with,” he added.
As for standing up such a large system, Johnson said the most significant challenge was the delivery timeline. “Between the time we got the order and time we had to deliver by the end of the year was a very short time span, requiring all hands on deck.”
“Various vendors have their good days and bad days, and we got to experience all of them, but it’s in production and paid for,” he said. “The system was not only delivered on time, it came in at 60 percent greater capacity than the RFP and double the performance for the winning cost.”
The list of other bidders hasn’t been released yet, but Aeon likely had to face off against a formidable array of Lustre vendors, such as DDN, Seagate, NetApp, and HP.
The systems will serve some of the lab’s existing compute infrastructure, but their primary role is to support the Penguin CTS-1 system coming online in the spring. Interestingly, Johnson explained that despite the push for the labs to coalesce around a unified compute architecture, there is somewhat more freedom on the storage side.
“We’re currently talking to the other CTS-1 labs,” he said. “The storage side is not as rigid as the compute side for CTS-1; there’s a little variance there, with each site allowed to do their own thing, although they are all kind of leaning toward this do-the-same-thing, have-the-same-environment, face-the-same-problems-as-they-arise strategy.”
This move toward uniformity was in many ways a consequence of budgetary realities that birthed a do-more-with-less ethos, but it hasn’t necessarily hurt innovation, at least not according to Johnson.
“In some ways it drove us to innovate this storage system because what [the labs] were looking for was removal of that ‘obstruction-ware layer.’ Under the hood, the different [vendor solutions] are similar: it’s got Xeon processors, it’s got motherboards, it’s got InfiniBand HCAs. There’s nothing in there that’s truly IP secret sauce, excepting some of the stuff DDN is doing with FPGA accelerators for their hardware RAID environment,” he said. “We just bypassed that obstruction-ware layer. The plumbing is designed from the ground up to do ZFS on Lustre with high-availability, with redundancy, and LANL was able to graft it into their existing environment without having to go through a bunch of hoops because there is nothing inside the box that they can’t touch locally or remotely. There’s nothing about it that’s a black box.”
Continuing to lay out the benefits to LANL, Johnson said: “It gave them end-to-end data protection all the way from the disk all the way through the Lustre file system back to the clients and gave them the performance they needed without having to provide a lot of extra resources for management administration of the resource. They were able to take all of their existing, custom-written LANL management procedures and everything they have and graft it into the box with zero effort.”
Aeon’s Lustre proposal was selected through a weighted review process in which the technical specifications are evaluated separately from the pricing: the technical review committee scores and weights all the responses before the pricing is unmasked. This price-blinding ensures that technological requirements are prioritized.
While cost is the primary decider for commodity-type systems, Johnson explained, a technology like Lustre involves a very complex software environment, where knowing how to deliver the I/O, the performance, and the reliability moves to the forefront of the procurement process.
“We were targeting an open solution that would utilize our Tri-Lab Operating System Stack (TOSS) with Lustre, and provide a great performance-to-cost ratio,” said Kyle Lamb, Infrastructure Team Lead in the High Performance Computing Division at Los Alamos National Laboratory. “Utilizing commodity hardware and OpenZFS for RAID provides a cost-effective, high-performance solution with the added benefit of compression to increase available usable capacity. This allows us to provide the high density performance required for our existing clusters as well as our future Commodity Technology Systems.”