Exascale, once just a gleam in the eyes of a few prescient computer scientists, is beginning to take shape. That arbitrary date of 2018 for a thousand-fold increase in computing power no longer seems far-fetched. But as exascale comes into focus, some very specific roadblocks are being resolved, and storage is one of them.
The storage problems facing tomorrow’s theoretical exascale systems are already surfacing in today’s massively parallel, heterogeneous, multicore HPC systems. The emergence of performance-intensive HPC applications in business, government and academia demands a new storage and I/O paradigm.
Scaling performance on traditional spinning disk storage is expensive and inefficient. In the conventional approach, I/O delivery is directly correlated with the number of drive spindles, so users are forced to buy lower-capacity, more expensive drives to add spindles without adding unneeded capacity. The desired I/O may be achieved, but at a cost: lost storage density, an inability to realize the efficiencies of higher-capacity HDDs, and more systems to house, power and manage.
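The arithmetic behind this provisioning problem can be sketched in a few lines. The drive figures below are illustrative assumptions, not vendor specifications; the point is that once a random-I/O target dominates, the spindle count is fixed regardless of drive capacity, so high-capacity drives simply strand capacity you did not need:

```python
import math

HDD_IOPS = 150        # assumed random IOPS per spindle (illustrative)
TARGET_IOPS = 100_000  # hypothetical workload requirement
TARGET_TB = 300        # hypothetical capacity requirement

# Spindle count is set by the IOPS target alone.
spindles = math.ceil(TARGET_IOPS / HDD_IOPS)  # 667 drives

for drive_tb in (0.6, 8.0):  # low-capacity "fast" drive vs. economical big drive
    delivered = spindles * drive_tb
    print(f"{drive_tb} TB drives: {spindles} spindles, "
          f"{delivered:.0f} TB delivered for a {TARGET_TB} TB need")
```

Either way the site buys 667 drives; with 8 TB drives it also buys roughly 5,336 TB of capacity against a 300 TB need, which is the overprovisioning the paragraph describes.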
Back in 1999, when VMware® launched the virtualization revolution by decoupling the physical server from the logical server, it created a new compute provisioning paradigm that forever changed the data center. VMware finally allowed users to run multiple jobs on a single virtualized system, helping solve many of the compute-side problems associated with idle capacity, inefficient use of servers and the negative economics of overprovisioning.
Much like the business and architectural transformation that resulted from VMware’s innovations, DataDirect Networks (DDN) has finally resolved the long-standing challenges associated with the overprovisioning of storage by decoupling I/O performance from capacity. The solution, known as the Infinite Memory Engine™ (IME), is a highly transactional, resilient and reliable “burst buffer cache” and I/O accelerator for HPC and Big Data applications.
IME is composed of client software resident on compute nodes and server software for the I/O servers, which together aggregate and virtualize disparate SSDs resident in compute or I/O servers. This creates a single pool of extremely low-latency, high-performance, non-volatile memory-based storage that serves as a new fast data tier.
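The burst-buffer idea behind this fast data tier can be illustrated with a minimal sketch (this is not DDN’s actual IME code; the class and tier names are invented for illustration). Writes land in the fast non-volatile tier and return immediately, while a background thread drains data to the slower disk tier off the application’s critical path:

```python
import queue
import threading

class BurstBuffer:
    """Toy model of a burst-buffer tier in front of a slow disk tier."""

    def __init__(self, backing_store: dict):
        self.nvm_tier = {}                  # stands in for the pooled SSD/NVM tier
        self.backing_store = backing_store  # stands in for the disk file system
        self._drain_q = queue.Queue()
        threading.Thread(target=self._drain, daemon=True).start()

    def write(self, path: str, data: bytes) -> None:
        """Absorb the write at NVM speed; the caller is not blocked on disk."""
        self.nvm_tier[path] = data
        self._drain_q.put(path)

    def _drain(self) -> None:
        """Background drain to the slow tier, off the application's critical path."""
        while True:
            path = self._drain_q.get()
            self.backing_store[path] = self.nvm_tier[path]
            self._drain_q.task_done()

    def flush(self) -> None:
        """Wait until all absorbed writes have reached the backing store."""
        self._drain_q.join()

disk = {}
bb = BurstBuffer(disk)
bb.write("/scratch/ckpt.0001", b"checkpoint bytes")  # returns immediately
bb.flush()                                           # data now on the slow tier
```

The design point the sketch captures is that compute nodes see only the fast tier’s latency; the slow tier is filled asynchronously.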
Not only does IME intelligently decouple storage performance from spinning disk storage capacity, it also:
- Significantly accelerates applications by moving I/O right next to compute resources to reduce latency, delivering 50% faster performance than all-flash arrays
- Greatly reduces cluster idle time through intelligent, forward-looking I/O provisioning
- Breaks down network bottlenecks for more efficient data center operation
- Reduces power consumption, increases data center density and lowers system cost
Typical Big Data and HPC applications addressed by IME include analytics, financial services, scientific computing and research, life sciences/genomics, oil and gas, and many more.
IME for Unparalleled Data Center Efficiency
IME brings numerous benefits to the data center.
For example, IME:
- Boosts data center efficiency by dramatically reducing hardware, power, floor space and the number of components to manage and maintain
- Provides massive application acceleration by returning to computation the processing cycles previously wasted managing storage activities or waiting for I/O from spinning disk, greatly increasing compute ROI
- Is compute and storage hardware agnostic: this software-defined storage scales limitlessly and protects data via distributed erasure coding in the NVM fast data tier
With IME, DDN has addressed a storage problem that has been unresolved ever since the introduction of disk-based storage. IME allows data centers to run more complex simulations faster with less hardware. Large datasets can be moved out of HDD storage and into memory quickly and efficiently. Once processing is complete, data can be moved back to HDD storage far more efficiently, using unique algorithms that align small and large writes into streams, enabling users to deploy the largest, most economical HDDs to hold capacity. Workload performance is optimized to reduce time to insight and discovery. Cost savings of up to 80% can be realized while achieving infinite scalability and highly efficient I/O performance.
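The idea of aligning small writes into streams can be sketched as follows (the actual IME algorithms are proprietary; this is only the general write-coalescing technique the paragraph alludes to). Contiguous small writes, even arriving out of order, are merged into large sequential extents so the HDD tier sees streaming I/O rather than random I/O:

```python
def coalesce(writes):
    """writes: list of (offset, data) tuples in arbitrary order.
    Returns merged (offset, data) extents for contiguous runs."""
    merged = []
    for off, data in sorted(writes):
        # Extend the previous extent if this write begins exactly where it ends.
        if merged and merged[-1][0] + len(merged[-1][1]) == off:
            prev_off, prev_data = merged[-1]
            merged[-1] = (prev_off, prev_data + data)
        else:
            merged.append((off, data))
    return merged

# Eight 4 KiB writes arriving out of order collapse into one 32 KiB stream:
small_writes = [(i * 4096, b"x" * 4096) for i in (3, 0, 5, 1, 7, 2, 6, 4)]
extents = coalesce(small_writes)
print(len(extents), len(extents[0][1]))  # 1 32768
```

One large sequential write is exactly what high-capacity HDDs handle economically, which is why this kind of alignment lets the capacity tier use the biggest, cheapest drives.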
DDN’s IME solution transforms storage from a bottleneck into a major contributor to a smoothly functioning IT infrastructure, one that supports the organization’s most ambitious HPC, big data and performance-intensive applications.
And looking to the future, IME has taken its place as one more step on the road to exascale.