“The increased capabilities of contemporary AI models provide unprecedented recognition accuracy, but often at the expense of larger computational and energetic effort,” IBM Research wrote in a blog post. “Therefore, the development of novel hardware based on radically new processing paradigms is crucial for AI research to progress.” So, at the 2019 IEEE International Electron Devices Meeting (IEDM) in San Francisco, IBM Research unveiled a series of AI hardware breakthroughs via papers in several critical fields.
Nanosheet technology
“Over the last five decades, semiconductor technology has been the engine for computing hardware,” IBM wrote. “[FinFET] technology continues to scale with ever-demanding requirements in density, power and performance, but not fast enough[.]” Stacked Gate-All-Around (GAA) nanosheets are IBM’s answer as the demands of AI exceed the capabilities of FinFET semiconductor architectures. The term “nanosheet” was only coined in 2015, and now IBM Research is highlighting three papers in the field. These include new techniques for enabling nanosheet stacking and multiple-voltage cells, as well as a new fabrication method. IBM hopes that GAA nanosheets will offer “more computing performance and less power consumption” while also enabling more varied and streamlined device designs.
Phase-change memory
IBM Research also highlighted a series of papers around phase-change memory (PCM), which “still poses major challenges,” including susceptibility to noise, resistance drift and reliability concerns. The papers showcased IBM researchers’ work on new device-, algorithm- and structure-level solutions, as well as a new model training technique, to address these issues and improve stability and reliability. Other researchers introduced a new neuro-inspired, silicon-integrated prototype chip design for PCM.
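To make the resistance-drift problem concrete: the conductance of a PCM cell is commonly modeled as decaying with a power law over time, which skews any value stored in the cell between writing and reading. The sketch below illustrates that model and a simple exponent-based correction. It is a minimal illustration of the general phenomenon, not IBM's method; the step values and the drift exponent `nu` are assumed, typical-order-of-magnitude numbers.

```python
# Illustrative sketch of PCM resistance drift (not IBM's technique).
# Conductance is often modeled as G(t) = G0 * (t / t0) ** (-nu), where
# nu is a small, device-dependent drift exponent (assumed here).

def drifted_conductance(g0, t, t0=1.0, nu=0.05):
    """Conductance of a PCM cell at time t, given conductance g0 at time t0."""
    return g0 * (t / t0) ** (-nu)

def drift_compensated(g_read, t, t0=1.0, nu=0.05):
    """Invert the power-law drift, assuming the exponent nu is known."""
    return g_read * (t / t0) ** nu

g0 = 10.0                                      # initial conductance (arbitrary units)
g_later = drifted_conductance(g0, t=1000.0)    # read back much later: lower than g0
g_corrected = drift_compensated(g_later, t=1000.0)  # recovers ~10.0
```

In practice the exponent varies from cell to cell and read to read, which is one reason drift remains a reliability challenge rather than a solved calibration problem.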
Electro-chemical random-access memory
Finally, IBM elaborated on its efforts to accelerate deep learning with new memory devices that are created using preexisting materials found in semiconductor factories. The resulting electro-chemical random-access memory, or ECRAM, “demonstrates sub-microsecond programming speed, a high conductance change linearity and symmetry, and a 2×2 array configuration without access selectors.” The ECRAM, which is CMOS-compatible, was tested on a linear regression task of the kind common in deep neural network training. In tandem, IBM Research highlighted new algorithms to improve the accuracy of predictive AI.
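Why linearity and symmetry of conductance changes matter for training can be shown with a toy version of that linear regression task. The sketch below is a software stand-in, not a reproduction of the IBM experiment: it runs ordinary SGD but rounds every weight update to an assumed fixed step size, mimicking the finite programming granularity of an analog memory cell. The data, step size, and learning rate are all invented for illustration.

```python
import random

# Illustrative sketch (not the IBM ECRAM experiment): SGD for y = w*x + b,
# with each update snapped to a fixed "conductance step" -- a crude model
# of the finite, ideally linear and symmetric updates of an analog device.
STEP = 1e-3  # smallest programmable weight change (assumed)

def quantize(dw, step=STEP):
    """Snap an update to the nearest multiple of the device step size."""
    return round(dw / step) * step

def train(data, lr=0.05, epochs=200):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= quantize(lr * err * x)  # device-limited update
            b -= quantize(lr * err)
    return w, b

random.seed(0)
# Synthetic data from y = 2x + 1 with a little noise
data = [(x, 2 * x + 1 + random.gauss(0, 0.01))
        for x in [i / 10 for i in range(-10, 11)]]
w, b = train(data)  # ends up close to w = 2, b = 1
```

With symmetric, linear steps the fit converges to within roughly one step of the true parameters; if the up and down steps were unequal, as in many real analog devices, the weights would systematically wander, which is why the quoted linearity and symmetry figures are a selling point.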
“The achievements in these papers,” IBM wrote in an email to HPCwire, “address a critical issue in AI advances: making hardware systems more efficient to keep pace with the demand of AI software and data workloads.”