The high-performance computing community has started to take notice of Non-Volatile Memory Express (NVMe) technology. This communications interface and protocol was developed for SSDs by NVM Express Inc., an industry consortium of more than 80 members, including Intel, Micron, Dell and Seagate. According to its backers, NVMe allows the parallelism levels offered by SSDs to be fully utilized by the host’s hardware and software. For HPC, NVMe promises unprecedented IOPS and low latency, speeding workflows.
Silicon Mechanics reports that NVMe provides almost 3 GB/s of read bandwidth and about 500K IOPS. The same source says that “compared to SATA SSDs, NVMe offers up to 6x the performance, half the latency and double the CPU efficiency.”
So what is NVMe?
NVM Express is a scalable host controller interface designed from the ground up for non-volatile memory. The specification defines an optimized register interface, command set and feature set for PCIe storage devices to overcome the bottlenecks of older protocols and storage buses. Support also exists for enterprise capabilities, such as end-to-end data protection, enhanced error reporting, and virtualization.
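On Linux, an easy way to poke at that command set is the nvme-cli tool. The snippet below is a minimal sketch, assuming nvme-cli is installed and a drive is present at the hypothetical path /dev/nvme0; it enumerates attached NVMe devices and issues the spec’s Identify Controller admin command.

```python
# Minimal sketch: exercise the NVMe admin command set via nvme-cli on Linux.
# Assumes nvme-cli is installed and a device exists at /dev/nvme0 (hypothetical path).
import json
import subprocess

def nvme(*args):
    """Run an nvme-cli command and return its parsed JSON output."""
    out = subprocess.run(["nvme", *args, "--output-format=json"],
                         check=True, capture_output=True, text=True)
    return json.loads(out.stdout)

# Enumerate the NVMe devices visible to this host.
print(nvme("list"))

# Identify Controller: model, firmware revision, queue limits, and so on.
ctrl = nvme("id-ctrl", "/dev/nvme0")
print(ctrl.get("mn"), ctrl.get("fr"))
```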
Over at the Bright Computing blog, Drew Robb highlights the features that make NVMe a technology to consider for accelerated application performance.
In-memory architectures are a recent innovation that delivers the highest level of performance to HPC applications. If cost were no object, everything would be done in memory. Since it isn’t, many implementers augment memory with flash, either in the form of solid state drives (SSDs) or by placing the flash beside the processor on the PCIe bus.
PCIe does provide more bandwidth and lower latency for HPC; PCIe Gen 3 delivers about 7.88 Gbps of usable bandwidth per lane. However, one of the major roadblocks to employing flash across platforms is that, until NVMe, it relied on the aging SATA and SAS protocols developed for relatively sluggish hard drives. As a result, commands back up in the queue when many requests arrive at once. SATA (via AHCI) allows only a single command queue holding up to 32 commands. NVMe, on the other hand, supports up to 65,536 (64K) queues, each with up to 64K commands per queue, allowing far more work to complete in far less time (see the sketch below).
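To put those figures side by side, here is a back-of-the-envelope sketch in Python. The numbers are the ones quoted above; the per-lane throughput simply follows from PCIe Gen 3’s 8 GT/s rate and 128b/130b encoding.

```python
# Back-of-the-envelope comparison of the figures quoted above.

# PCIe Gen 3: 8 GT/s per lane with 128b/130b encoding.
raw_gtps = 8.0                       # giga-transfers per second per lane
effective_gbps = raw_gtps * 128 / 130
print(f"PCIe Gen 3 per lane: {effective_gbps:.2f} Gb/s "
      f"(~{effective_gbps / 8 * 1000:.0f} MB/s)")

# Outstanding commands: legacy AHCI/SATA vs. NVMe.
ahci_outstanding = 1 * 32            # 1 queue x 32 commands
nvme_outstanding = 65_536 * 65_536   # 64K queues x 64K commands each
print(f"AHCI can hold {ahci_outstanding} commands in flight; "
      f"NVMe can hold {nvme_outstanding:,}.")
```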
Additional information comes from Intel’s Dave Akerson. At a recent tutorial he gave on NVMe at the OFA Developers’ Workshop (slides), he answered an audience member’s question about the value of the technology for HPC.
“From a performance standpoint, you can actually replace 20 hard disk drives with one SATA SSD today and get equivalent performance. In the case of an NVM Express PCIe drive you basically can replace five of those SSDs with one PCI Express to get the performance,” he stated. “In terms of cost, SATA SSDs are a little more expensive as far as the acquisition cost on a cost per gigabyte basis, however you can probably do more with a lower volume of SSDs than what you can do with your hard disk drives today. So you’ve got some cost savings there. Now in terms of PCI Express SSDs versus SATA SSDs, yes today you are likely paying a slight premium for that additional performance. Over time, and probably within the next 2-3 years, you’re going to see PCI Express and SATA at roughly equivalent cost parity.”
Version 1.0 of the NVM Express specification was released on March 1, 2011. NVM Express, Inc., announced the release of its 1.2 specification on November 12, 2014, adding “a new level of enterprise and client functionality.” NVMe devices are shipping now from Samsung, Intel, Silicon Mechanics, HGST and others.
The consortium is also developing specifications to bring the benefits of NVM Express to fabrics such as Ethernet, InfiniBand and Fibre Channel. An NVM Express over Fabrics standard, expected at the end of 2015, will extend NVMe “to usages with hundreds of solid state drives where using a fabric as an attach point is more appropriate than using PCI Express.” Work is progressing very quickly; Mangstor and Micron, among others, are already demoing pre-production setups with Mellanox gear.
From the Mellanox blog:
As mentioned, Mangstor is back with an upgraded NVMe Over Fabrics solution. Their NX6320 flash storage array now supports Mellanox ConnectX-4 for 100Gb Ethernet and can do 14M (million) IOPs. It’s rumored another configuration using multiple Mangstor arrays can hit 50GB/s (yes, 50 GigaBytes per second) of throughput.
Micron has a demo of NVMe Over Fabrics supporting millions of IOPs with very low latency. It also uses Mellanox ConnectX-4 adapter running 56Gb.
Mellanox’s conclusion?
Non-volatile memory (NVMEM) is getting faster. Instead of just replacing hard drives, it’s ready to displace DRAM by letting customers build servers with more NVMEM and less DRAM, increasing application performance while lowering costs.
We’ll be looking to catch up with all of these parties at SC next month so we can report back on their progress.
If you’re brand new to NVMe, or looking for a handy reference to pass along to someone who is, a good place to start is “NVMe for absolute beginners” by Cisco’s J Metz.
Are you using NVMe in an HPC or big data environment? Tell us what you think in the comments section or email me at [email protected].