A research team at IBM’s Almaden research lab in California has developed a disk drive array that can store 120 petabytes of data. At that capacity, the system can hold about a trillion average-sized files, providing enough storage for the most demanding supercomputing simulations.
According to a recent article in MIT’s Technology Review, the system was developed for an unnamed customer that requires petascale simulations, but the technology could also apply to conventional ultra-scale storage systems. In particular, the 120-petabyte array could be a run-of-the-mill storage setup for cloud computing systems of the future, at least according to Bruce Hillsberg, director of storage research at IBM and leader of the petabyte storage project.
The storage array is made up of 200,000 conventional hard disk drives, which are housed in extra-dense, extra-wide storage drawers. As is the case with a lot of IBM’s cutting-edge supercomputing technology, the components are water-cooled rather than air-cooled.
Besides the challenge of getting so many disks into a reasonably sized system, there was the trickier problem of disk failure. With hundreds of thousands of drives involved, failures have to be treated as a fundamental property of the system. IBM uses the standard approach of striping copies of data across different disks, but employs software that keeps storage performance high even while broken hardware is being replaced. According to Hillsberg, the system is designed to be robust enough not to lose any data for a million years and “without making any compromises on performance.”
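To make the idea concrete, here is a minimal sketch of block replication with distributed rebuild. This is purely illustrative: IBM has not published the relevant code, the disk counts and replica factor below are hypothetical, and the real system is far more sophisticated. The point it demonstrates is why performance can survive a failure: every surviving disk holds copies of some of the lost blocks, so the rebuild work is spread across many spindles rather than bottlenecked on one spare.

```python
import random

NUM_DISKS = 16   # stand-in for the array's 200,000 drives (hypothetical)
REPLICAS = 3     # copies of each block, each on a distinct disk (hypothetical)

# Map each disk ID to the set of block IDs it holds.
disks = {d: set() for d in range(NUM_DISKS)}

def place_block(block_id):
    """Write copies of a block to REPLICAS distinct disks."""
    for d in random.sample(range(NUM_DISKS), REPLICAS):
        disks[d].add(block_id)

def fail_disk(failed):
    """Re-replicate the failed disk's blocks onto surviving disks.

    Each lost block still has REPLICAS - 1 live copies, so a new copy
    is written to some disk that doesn't already hold one. Because the
    lost blocks are scattered across the whole array, so is the rebuild
    traffic.
    """
    lost = disks.pop(failed)
    for block_id in lost:
        holders = {d for d, blocks in disks.items() if block_id in blocks}
        candidates = [d for d in disks if d not in holders]
        disks[random.choice(candidates)].add(block_id)

# Write 1,000 blocks, kill a disk, and verify nothing was lost.
for b in range(1000):
    place_block(b)
fail_disk(0)
assert all(sum(b in blocks for blocks in disks.values()) == REPLICAS
           for b in range(1000))
```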
The Technology Review piece points out that the system’s capabilities leverage recent enhancements to IBM’s General Parallel File System (GPFS), which the company demonstrated in July. In that demonstration, the file system scanned 10 billion files in 43 minutes, which according to the IBM’ers was 37 times faster than 2007-era GPFS.
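For a sense of scale, those quoted figures work out to roughly 3.9 million files scanned per second. A quick back-of-the-envelope calculation, using only the numbers above:

```python
# Scan rates implied by the figures quoted in the article.
files = 10_000_000_000      # 10 billion files scanned
seconds = 43 * 60           # in 43 minutes
rate = files / seconds      # ~3.9 million files per second
rate_2007 = rate / 37       # implied 2007-era GPFS rate, ~105,000 files/sec

print(f"2011 demo: {rate:,.0f} files/sec")
print(f"2007-era: {rate_2007:,.0f} files/sec")
```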
Presumably we will find out who IBM’s unnamed customer is when the 120-petabyte system is deployed.