An increasingly watched benchmark is the IO500, which measures storage system performance; an updated IO500 list is released twice a year in concert with ISC and SC, usually at BoFs at those events. At ISC21 last week, Pengcheng Laboratory took top honors in both the broad IO500 category and the 10-client-node category, using the MadFS file system. Intel’s DAOS (Distributed Asynchronous Object Storage) took second overall in both categories, demonstrated the highest bandwidth in the 10-node category, and placed several systems in the top ten.
WekaIO, which has battled Intel in recent years, was the third-best-performing file system on the IO500, and Lustre also performed well. (A link to a video on MadFS appears at the end of this article.)
Reading the IO500 results is best done with care. Bandwidth and metadata handling are the key metrics across a variety of tests. Moreover, submissions from prior years are included on the list in their rank order, so the list is a kind of composite. Also note that the IO500 list shows only the top score for a given system (one score per system), and the 10-node class does the same; the Full List includes every valid submission, so a single system can appear there multiple times. It can be a little confusing. (IO500’s description of its lists is at the end of this article.)
Here are snapshots of the IO500 top scorers and the 10-node results:
IO500 actually issues six awards: for bandwidth, metadata, and overall score in both the full-list and ten-node categories. Pengcheng swept all of them except the top bandwidth award in the 10-node category, which went to Intel. Pengcheng’s Cloudbrain-II system, based on Huawei’s Ascend AI technology, scored nearly 20x higher overall than the closest competitor. Intel’s top two systems, Endeavor and Wolf (from SC20), performed roughly on par with each other. It’s best to look closely at system scores and configurations.
During the BoF, organizers noted some of the challenges they face, such as whether it’s worth trying to distinguish between production systems and those ostensibly set up just for the test. It’s clear the benchmark is evolving, so it’s best to get the details of the tests and how the total scores are tallied directly from IO500.
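For a rough sense of how the tally works: per IO500’s published methodology, the overall score is the geometric mean of a bandwidth score (in GiB/s, itself a geometric mean over the ior test phases) and a metadata score (in kIOPS, a geometric mean over the mdtest and find phases). A minimal sketch in Python, with made-up phase values for illustration only:

```python
from math import prod

def geometric_mean(values):
    """Geometric mean: the nth root of the product of n values."""
    return prod(values) ** (1.0 / len(values))

# Made-up phase results for illustration; not actual IO500 submissions.
bandwidth_gib_s = [12.4, 3.1, 15.8, 4.6]             # ior easy/hard write/read phases
metadata_kiops = [310.0, 95.0, 480.0, 120.0, 260.0]  # mdtest/find phases

bw_score = geometric_mean(bandwidth_gib_s)   # GiB/s
md_score = geometric_mean(metadata_kiops)    # kIOPS
overall = (bw_score * md_score) ** 0.5       # the headline IO500 score

print(f"BW={bw_score:.2f} GiB/s  MD={md_score:.2f} kIOPS  Score={overall:.2f}")
```

The geometric mean rewards balanced systems: a storage system cannot buy a high overall score with one outstanding phase while ignoring the others.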
“The list is growing nicely over the last four years [and] the number of institutions is growing as well. So it’s not just the same people submitting continuously,” said Dean Hildebrand, technical director, office of the CTO, Google, and one of several presenters at the BoF. “The other thing is that while the number of submissions has gone down a little bit, it’s really stabilizing at this point, and I think that’s great, and if we can keep that up, I think we’ll be able to grow this list really nicely.”
It’s worth noting that the roughly four-year-old IO500 is still at a nascent stage, and the organization continues to evolve. Last spring IO500 formally became a non-profit organization. Co-founder John Bent stepped down from the board and was replaced by Hildebrand. Other changes included moving IO500 assets, including the website, from the Virtual Institute for I/O (VI4IO) into the new corporate entity. “While there were a few hiccups with this transition for the SC20 IO500 list, this will offer a stronger, independent foundation for IO500 going forward,” reported IO500.
The current IO500 Steering Committee includes: Andreas Dilger, Whamcloud/DDN; Hildebrand, Google; Julian Kunkel, University of Reading; Jay Lofstead, Sandia National Laboratories; and George Markomanolis, CSC – IT Center for Science Ltd. All participated in the BoF.
Among other changes, such as formalizing the twice-yearly results release schedule, IO500 is testing expanded, automated metadata collection about the storage systems being tested. “This will make the details about the systems more complete and consistent, which allows better insights into the configurations that achieved the results, and improves the ability to make comparisons between systems at different scales (e.g. bandwidth per server),” reported the organization.
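As a sketch of the kind of derived comparison that richer system metadata would enable, here is a hypothetical bandwidth-per-server calculation; the record fields are illustrative, not IO500’s actual schema:

```python
# Hypothetical submission records; field names are illustrative, not IO500's schema.
submissions = [
    {"system": "A", "bw_gib_s": 500.0, "servers": 10},
    {"system": "B", "bw_gib_s": 1200.0, "servers": 40},
]

for s in submissions:
    # Normalizing by server count lets systems of very different scales be compared.
    print(f"{s['system']}: {s['bw_gib_s'] / s['servers']:.1f} GiB/s per server")
```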
Link to IO500 Lists: https://io500.org/releases
Link to video on MadFS: https://www.youtube.com/watch?v=BJpkpA6hsDc&list=PLN0VUBsF9Di0Bsj4qia5SCqzBtTzGciA6&index=3
IO500’s Explanation of Its Lists, From Its Website
We publish multiple lists for each BoF at SC and ISC, as well as maintaining the current, most up-to-date lists. We intend not to modify a list after its release date except in exceptional circumstances. However, we allow list metadata to be improved and clarified upon the request of the submitters. We publish a Historic List of all submissions received and multiple lists filtered from the Historic List. We maintain a Full List, which is the subset of submissions that were valid according to the set of list-specific rules in place at the time of the list’s publication.
Our primary lists are Ranked Lists, which show only opted-in submissions from the Full List and only the best submission per storage system. We have two ranked lists: the IO500 List, for submissions which ran on any number of client nodes, and the 10-Node Challenge List, for only those submissions which ran on exactly ten client nodes.
In summary, for each BoF, we have the following lists:
- Historic list: all submissions ever received
- Full list: the subset of the Historic list of submissions that are currently valid
- IO500 List: the subset of the Full list of submissions marked for inclusion in the IO500 ranked list, showing only one highest-scoring result per storage system
- 10-Node Challenge List: the subset from the Full list of submissions run on exactly ten nodes and marked for inclusion in the 10-Node Challenge ranked list, showing only one highest-scoring result per storage system
Please note that the Ranked Lists only show the best submission for each storage system, so if a storage system has multiple submissions, only the one with the highest overall score is shown in the Ranked Lists. All submissions will appear in the Full and Historic Lists. However, please note that at the semi-annual BoFs we present the IO500 Bandwidth and IO500 Metadata awards based on the highest bandwidth and metadata scores. In some cases, the highest bandwidth and metadata scores are on submissions that do not have the highest overall score and are only visible in the Full List.
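Putting the above together, the subset relationships among the lists can be sketched as a simple filtering pipeline. The records and fields below are hypothetical, not IO500’s actual data model:

```python
# Hypothetical submissions; fields are illustrative, not IO500's actual data model.
historic = [
    {"system": "A", "score": 12.0, "nodes": 32, "valid": True,  "opt_in": True},
    {"system": "A", "score": 10.0, "nodes": 10, "valid": True,  "opt_in": True},
    {"system": "B", "score":  7.0, "nodes": 10, "valid": False, "opt_in": True},
]

# Full List: the currently valid subset of the Historic List.
full = [s for s in historic if s["valid"]]

def ranked(subs):
    """Keep only the highest-scoring submission per storage system, ordered by score."""
    best = {}
    for s in subs:
        if s["system"] not in best or s["score"] > best[s["system"]]["score"]:
            best[s["system"]] = s
    return sorted(best.values(), key=lambda s: s["score"], reverse=True)

# IO500 List: opted-in submissions, one best result per system.
io500_list = ranked([s for s in full if s["opt_in"]])

# 10-Node Challenge List: the same, restricted to exactly ten client nodes.
ten_node = ranked([s for s in full if s["opt_in"] and s["nodes"] == 10])
```

This also illustrates why a bandwidth or metadata award can go to a submission that is invisible in the Ranked Lists: the per-system deduplication keeps only the highest overall score, while a different submission from the same system may hold the best individual bandwidth or metadata result.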