What will system memory look like in five years? Good question. While Monday’s AI Hardware Summit panel, Designing AI Super-Chips at the Speed of Memory, tackled several topics, the panelists also took a brief glimpse into the future. Unlike compute, storage, and networking, which have all steadily gained software-defined capabilities, memory has remained tightly coupled to compute.
That will change, said the group, as the era of heterogeneous memory takes hold. One of the most concrete views came from panelist Charles Fan, founder and CEO of MemVerge.
“If we look five years into the future, we might see the true disaggregation between compute and memory [that] allows each to scale independently, and the whole infrastructure becomes software composable. For any workload, I could decide how many cores I need, how much memory, how much storage, how much networking, and assemble a computer to run this workflow,” said Fan.
Yes, Fan has a horse in the race; MemVerge’s virtualized memory management is a key enabler of software-defined memory infrastructure. But it’s also true that the growing complexity of computer systems overall, including memory options, has driven a broad effort to expand software-defined capabilities as a way to more efficiently manage diverse resources for particular workflows.
The software-defined view of system architecture is hardly new, but the wrangling of memory into it is. Key enablers such as appropriate interconnect technology, less expensive but higher-performing memory, and virtualized memory management software have only recently begun appearing. All of yesterday’s panelists – Fan of MemVerge, Steve Scargall of Intel, and Brandon Wang of Synopsys – expect significant changes in the memory technologies being deployed and in the way memory is incorporated into systems. Frank Barry of MemVerge was the moderator.
The lone memory consumer on the panel was asked how on-chip memory might change, and specifically about using MRAM in place of SRAM. “Great question,” said Wang. “Embedded memory has one major constraint, not just because of the size, but also because of the process. It has to be compatible with the rest of the chip in the particular logic process used. So 6T SRAM has been and still is being used as the major embedded memory. You can’t really [introduce] a new memory and quickly integrate [it] into the chip seamlessly.”
“That’s why we need to look at the new innovations like MRAM on chip, but we also need to look at off chip. We say near chip: some hardware structure that you put together to leverage on-chip and off-chip memory in a virtual layer so that the end user doesn’t have to distinguish the difference,” he said.
The 2021 AI Hardware Summit is a hybrid in-person-plus-virtual event running all week in Mountain View, Calif. Panelists pre-recorded the session but took questions in real time via a chat app. Not surprisingly, much of the conversation focused on the participant companies’ products. Both products discussed, Intel’s Optane and MemVerge’s Memory Machine virtualization platform, are still young.
Intel’s High Hopes for Optane
Scargall, an Optane technical specialist, made the case for Intel Optane persistent memory, which has been on the market since 2019. Less expensive but also slower than DRAM, non-volatile Optane technology is much faster than traditional storage (NAND SSD/HDD/tape); it is positioned between system memory and storage. Micron, of course, co-developed the underlying technology (3D XPoint) but exited the market last spring, a move that raised some questions about the strength of the market.
“We’re in the second generation now with the third generation expected next year. Looking at capacities of the first two generations, we offer the 128-, 256-, and 512-gig modules. The current generation of product comes in form factors where it looks like a DDR DIMM; it just has a big heat spreader on there. So they install into the same DIMM slots alongside the DDR [and] we support numerous combinations of DDR and persistent memory. That gives the flexibility for matching the capacity that you need for both DRAM and PMEM [Persistent Memory],” said Scargall.
The first generation of Optane used with Intel’s Cascade Lake Xeon CPUs supported up to 4.5 terabytes of memory. “That’s DRAM and PMEM, combined, per socket,” said Scargall. “Intel Ice Lake CPUs that are out this year, support up to six terabytes of memory per socket. As you add more sockets, you add more memory and can scale linearly with your requirements.”
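Those per-socket maximums are consistent with simply filling every memory channel with the largest modules. As a back-of-envelope check, assuming the usual six memory channels per socket on Cascade Lake and eight on Ice Lake:

$$6 \times 512\,\mathrm{GB\ PMEM} + 6 \times 256\,\mathrm{GB\ DRAM} = 3\,\mathrm{TB} + 1.5\,\mathrm{TB} = 4.5\,\mathrm{TB}$$

$$8 \times 512\,\mathrm{GB\ PMEM} + 8 \times 256\,\mathrm{GB\ DRAM} = 4\,\mathrm{TB} + 2\,\mathrm{TB} = 6\,\mathrm{TB}$$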
Talking about EDA chip design and overcoming the so-called memory wall (the disparity in speed between the CPU and memory outside the CPU chip), Scargall said, “The efficiency of the chip design itself can be improved by having these bigger memory systems and more tiers of memory. This opens up new opportunities that aren’t available with existing traditional design. More memory allows the designers and engineers to be more efficient, either creating these bigger models, or being able to load multiple models at the same time, and then switch between them without having to close the file they are working on.”
Talking about Optane’s expected market traction, Scargall cited a report by Coughlin Associates (Emerging Memories Take Off) which forecast that PMEM would ship more capacity than DRAM by 2028 and that in the 2030-2031 timeframe Optane will have penetrated 50 percent of all servers. Those forecasts seem, perhaps, optimistic given how hard forecasting has been of late.
Scargall said, “It’s looking more and more like DRAM is going to become a new last-level cache versus the predominant data tier.”
MemVerge and the Memory Machine
MemVerge, of course, is also betting big on Optane. The MemVerge software platform “virtualizes DRAM and Persistent Memory so that data can be accessed, tiered, scaled, and protected in-memory.” Memory Machine is the virtualization engine and an early example of software-defined memory management, in this case spanning PMEM and DRAM. Once the two are virtualized, the PMEM appears as DRAM, allowing any application to plug-and-play with the pool of memory.
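Memory Machine itself is transparent to applications, so there is no allocation API to show here. But the tiering decision it automates can be sketched by hand with Intel’s open-source libmemkind library, which lets a program place individual allocations on DRAM or on PMEM exposed as a NUMA node. A minimal sketch, assuming a Linux system with Optane configured in KMEM DAX mode:

```c
#include <memkind.h>   /* libmemkind; link with -lmemkind */
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Hot, latency-sensitive data goes to ordinary DRAM. */
    char *hot = memkind_malloc(MEMKIND_DEFAULT, 1 << 20);

    /* Large, colder data goes to PMEM exposed as system RAM
     * (requires the kernel's KMEM DAX configuration). */
    char *warm = memkind_malloc(MEMKIND_DAX_KMEM, 1 << 30);

    if (hot == NULL || warm == NULL) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }

    strcpy(hot, "per-request state");
    strcpy(warm, "multi-gigabyte model or cache");

    memkind_free(MEMKIND_DEFAULT, hot);
    memkind_free(MEMKIND_DAX_KMEM, warm);
    return 0;
}
```

The promise of software-defined memory is that this placement choice moves out of application code and into the virtualization layer, which is what lets unmodified applications treat the combined pool as ordinary DRAM.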
The first product was released in 2017 and won HPCwire Readers’ and Editors’ Choice Awards in 2020 for Top Five New Products or Technologies to Watch. A key feature highlighted by Fan was MemVerge’s in-memory snapshot capability.
“What it allows the chip designer to do is to take an in-memory snapshot of their running EDA processes, whether you’re doing simulation, optimization, or verification, and these tend to be long-running jobs. Traditionally, you would checkpoint them to storage so that if there’s any reason you need to roll back, or if your system went offline or crashed, you can restart your simulation not from the beginning – which could take days or even weeks – but from that checkpoint.”
But checkpointing is becoming more time-consuming as memories grow bigger. Copying that amount of state data to storage can take many minutes and must be done every hour or so. “By taking in-memory snapshots, we could take checkpointing [data], write it to memory, and have it persist on Optane memory without moving it to storage,” Fan said, adding that removing this overhead can cut a job’s overall run time from three weeks to two.
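MemVerge’s snapshot engine is proprietary, but the mechanism Fan describes, persisting checkpoint data with memory semantics instead of pushing it through a storage stack, can be sketched with the open-source PMDK libpmem library. A sketch only; the mount path is hypothetical:

```c
#include <libpmem.h>   /* PMDK; link with -lpmem */
#include <string.h>

/* Hypothetical checkpoint file on a DAX-mounted Optane filesystem. */
#define CKPT_PATH "/mnt/pmem0/sim.ckpt"

/* Persist a job's in-memory state: a memcpy plus cache-line flushes,
 * with no block-storage I/O on the critical path. */
int checkpoint(const void *state, size_t len) {
    size_t mapped_len;
    int is_pmem;

    void *dst = pmem_map_file(CKPT_PATH, len, PMEM_FILE_CREATE, 0600,
                              &mapped_len, &is_pmem);
    if (dst == NULL)
        return -1;

    if (is_pmem) {
        pmem_memcpy_persist(dst, state, len);  /* copy + flush + drain */
    } else {
        memcpy(dst, state, len);               /* not real pmem: */
        pmem_msync(dst, len);                  /* fall back to msync */
    }

    pmem_unmap(dst, mapped_len);
    return 0;
}
```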
The in-memory snapshot capability also facilitates bursting jobs to the cloud or moving jobs within a cloud or from one cloud to another. “The reason is many of these applications are stateful applications, and using our snapshot technology as a foundation, we can create an application encapsulation, to capture, at a point in time, everything needed for this stateful application to be restarted anywhere, anytime,” Fan said.
Currently there are limits to the cloud-hopping feature. “This use case is what we are still working on. From a technical perspective, there are some considerations. One is, today, if you’re moving a snapshot from one cloud instance to another, either in the same cloud service provider or to a different cloud service provider, we do require the [transfer] to be Intel to Intel or AMD to AMD. We do not support those movements across CPU vendors yet. Secondly, the resources on the destination need to be at least as big as [on] the source. So, for example, if you have 128 gigabytes of memory on the instance you’re moving from, your destination cannot have 64 gigabytes of memory.”
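Restated as code, those constraints amount to a pre-flight check before any migration. This is not MemVerge’s API, just the two rules Fan listed made explicit:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

typedef struct {
    char   cpu_vendor[16]; /* e.g., "Intel" or "AMD" */
    size_t mem_bytes;      /* instance memory capacity */
} instance_t;

/* A snapshot may move only between same-vendor CPUs, and the
 * destination must have at least as much memory as the source. */
bool snapshot_movable(const instance_t *src, const instance_t *dst) {
    return strcmp(src->cpu_vendor, dst->cpu_vendor) == 0
        && dst->mem_bytes >= src->mem_bytes;
}
```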
Synopsys Tackles Memory and Compute Challenges
Wang recalled that when Synopsys started in 1986 with its Design Compiler, transistor counts per chip were tiny compared with today’s, well below a million per chip. Today Nvidia’s A100 GPU has 54 billion transistors, and Cerebras Systems offers a wafer-scale device with more than 2.6 trillion transistors fabbed on TSMC’s 7nm process. How times have changed.
Currently, the system-on-a-chip (SOC) is the prevailing approach, and designers typically seek tradeoffs among three criteria, said Wang. Those are quality of result (QoR), cost of result (CoR), and time to market (TTM). Meeting these criteria and dealing with chip complexity requires heavy use of simulation, modeling, verification, and optimization, he said.
The memory wall is the biggest challenge in handling today’s larger, more complex designs, he said: “The bandwidth or throughput of this IO interface really becomes a choke point in designing a very large chip.”
Newer AI-slanted chips have presented another challenge, said Wang. “Basically, AI brought back the topic of software-defined hardware, because there’s no single generic AI architecture, unlike CPUs, GPUs, and memories. What that really means is that you need to personalize a chip design in the same short duration that people demand when you’re doing a standard product.”
Bursting to the cloud when internal resources are constrained, and leveraging the flexibility of resources available in the cloud, have turned out to be extremely valuable in chip design, said Wang. He was asked what AI chip design will look like in five years.
Marveling at the size of current chips being fabbed on 7nm processes, Wang said, “We expect [transistor counts] to be doubled in 5nm or 3nm moving forward. Besides that, AI is also looking at heterogeneous integration, because different modules could be made in different process nodes and integrated in a single package. [You’ll be] dealing with even more massive data [and] they could even be in different formats. We’ll see the complexity of those designs going up. Also, people are looking for more customized solutions to do exactly what the application is asking for instead of running in a generic machine.”