Last April, Intel released its Optane Data Center Persistent Memory Module (DCPMM) – byte-addressable nonvolatile memory – to increase main memory capacity and provide performance closer to DRAM speeds. The technology is stirring interest among system makers, although it currently works only with certain Intel processors. Last week, Japan's National Institute of Advanced Industrial Science and Technology (AIST) posted a paper benchmarking DCPMM against DRAM, highlighting strengths and weaknesses.
Optane, of course, is Intel’s implementation of the 3D XPoint media originally developed jointly with Micron Technology. Broadly, 3D XPoint seeks to fill the functional gap between fast but volatile, less dense DRAM and less costly, non-volatile NAND flash. Optane uses a stacked (3D), transistor-less design to conserve space and improve performance.
According to AIST, it undertook the project because there have been only a few reports on DCPMM performance so far. While the performance gap with DRAM is significant, so are the gains relative to NAND. Here’s an excerpt from the paper’s conclusion:
“In order to complement prior performance reports on Intel Optane DCPMM, we conducted experiments using our own measurement tools. We observed that the latency of random read-only access was approximately 374 ns. That of random writeback-involving access was 391 ns. The bandwidths of read-only and writeback-involving access for interleaved memory modules were approximately 38 GB/s and 3 GB/s, respectively.
“Many applications (e.g., especially large-scale HPC and AI workloads) will get benefit from a large capacity of main memory expanded by DCPMM. However, a substantial performance gap between DCPMM and DRAM poses new challenges for system software studies. We are currently conducting experiments using application programs and will report details in our future publication,” wrote Takahiro Hirofuchi and Ryousei Takano of AIST.
The tables below, along with the description of the system tested, present a clearer comparison.
In discussing read-write latencies, Hirofuchi and Takano note that most CPU architectures use memory prefetching and out-of-order execution to hide memory latency from programs running on CPU cores, and they took steps to keep these mechanisms from skewing the measurements.
“To measure latencies precisely, the benchmark program was carefully designed to suppress these effects. To measure the read latency of main memory, it works as follows:
- First, it allocates a certain amount of memory buffer from a target memory device. To induce LLC misses, the size of the allocated buffer must be sufficiently larger than the size of the LLC. It splits the memory buffer into 64-byte cacheline objects.
- Second, it sets up a linked list of the cacheline objects in a random order, i.e., traversing the linked list causes jumps to remote cacheline objects.
- Third, it measures the elapsed time for traversing all cacheline objects and calculates the average latency to fetch a cacheline. In most cases, a CPU core stalls due to an LLC miss upon the traversal of the next cacheline object in the linked list. The elapsed time of this CPU stall is the memory latency.”
It’s best to read the report, which is short, for the full details.
Link to AIST paper: https://arxiv.org/pdf/2002.06018.pdf