Researchers from Sandia National Laboratories, Stanford University, and UMass Amherst report a parallel programming approach for a novel ionic floating-gate memory array that promises to overcome a persistent obstacle to improving neuromorphic computing performance on artificial neural networks (ANNs).
The new work – which involves breakthroughs in programming and the broader fields of organic electronics and solid-state electrochemistry – was reported in Science last week (Parallel programming of an ionic floating-gate memory array for scalable neuromorphic computing).
While there’s been plenty of work on the idea that neuromorphic computers can overcome efficiency bottlenecks inherent to conventional computing through parallel programming and read-out of artificial neural network weights in a crossbar memory array, accomplishing that goal has been difficult. The researchers note that the need for “selective and linear weight updates and <10 nanoampere read currents for learning” has restrained efficiency gains compared to conventional digital computing.
Their solution is a new device and programming approach.
“We introduce an ionic floating-gate memory (IFG) array based upon a polymer redox transistor connected to a conductive-bridge memory (CBM),” write the researchers. “Selective and linear programming of a transistor array is executed in parallel by overcoming the bridging voltage threshold of the CBMs. Synaptic weight read-out with currents <10 nanoampere is achieved by diluting the conductive polymer in an insulating channel to decrease the conductance. The redox transistors endure >1 billion ‘read-write’ operations and support >1 megahertz ‘read-write’ frequencies.”
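The selective-programming idea described in the quote can be illustrated with a simple sketch. The code below is not the authors’ circuit model; the threshold value, pulse voltages, and update increment are illustrative assumptions. It shows the half-select scheme common to threshold-gated crossbar writes: a cell is programmed only where the combined row and column pulse exceeds the CBM bridging voltage, so every selected cell updates in parallel while half-selected cells are untouched.

```python
# Illustrative sketch (not the authors' circuit model) of threshold-selected
# parallel programming. A cell's CBM "bridges" only when the summed row and
# column pulse exceeds V_TH, passing the write to the redox-transistor gate;
# all selected cells then receive the same linear conductance increment at once.

V_TH = 0.8               # assumed CBM bridging voltage threshold (volts)
row_pulse = [0.5, 0.0]   # half-select pulse on row 0 only (volts)
col_pulse = [0.5, 0.5]   # half-select pulses on both columns (volts)
DELTA = 0.01             # assumed linear conductance increment per write pulse

W = [[0.0, 0.0],         # synaptic conductance states (arbitrary units)
     [0.0, 0.0]]

for i, vr in enumerate(row_pulse):
    for j, vc in enumerate(col_pulse):
        if vr + vc > V_TH:      # CBM bridges: write reaches the transistor
            W[i][j] += DELTA    # linear, selective weight update

print(W)  # only row 0 cells were selected: [[0.01, 0.01], [0.0, 0.0]]
```

Because unselected rows see at most half the threshold voltage, the whole array can be pulsed in one operation without disturbing cells that should not change.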
More simply, these advances could chart a practical path for neuromorphic computing to surpass the efficiency of traditional computer architectures in running ANNs.
“With the ability to update all of the data in a task simultaneously in a single operation, our work offers unmistakable performance and power advantages,” said Sandia researcher Elliot Fuller in an article posted on the Sandia website. “This is projected to improve machine learning while using a fraction of the power of a standard processor and 10 times higher speed than the best digital computers.”
The use of large crossbar arrays of synaptic memory elements to execute ANN algorithms has long been alluring. However, compelling demonstrations of inference using crossbars based on a variety of synaptic devices have been lacking due to their non-ideal electrical characteristics. “Two-terminal devices, such as memristors based on phase change memory (PCM) or filament forming metal oxides (FFMO), typically exhibit a super-exponential dependence of the current on the applied voltage during ‘writes’,” write the researchers.
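The appeal of the crossbar arrangement is that the physics itself performs the core ANN operation: with weights stored as conductances and inputs applied as row voltages, Ohm’s and Kirchhoff’s laws sum every column current in parallel, yielding a full matrix-vector multiply in one read step. The sketch below uses illustrative conductance and voltage values (chosen so currents fall in the sub-10-nanoampere read regime the paper targets); it is a conceptual model, not a simulation of the authors’ devices.

```python
# Sketch of the analog matrix-vector multiply a crossbar performs in one step.
# Conductances G[i][j] (siemens) store the ANN weights; read voltages V[i]
# (volts) drive the rows; each column current is the dot product
#   I[j] = sum_i G[i][j] * V[i]
# and all columns settle simultaneously. Values are illustrative only.

G = [
    [1e-9, 3e-9],   # row 0 conductances (nanosiemens scale)
    [2e-9, 1e-9],   # row 1 conductances
]
V = [0.1, 0.2]      # read voltages on the rows

currents = [
    sum(G[i][j] * V[i] for i in range(len(V)))
    for j in range(len(G[0]))
]
print(currents)  # column read currents, each well under 10 nA
```

In hardware this sum costs one read cycle regardless of array size, which is where the projected efficiency advantage over sequential digital multiply-accumulate comes from; the non-ideal write behavior of two-terminal memristors is what has kept such demonstrations from being compelling.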
Recently developed redox-transistor memory has shown promise to circumvent existing memristor technology limitations: “The three-terminal redox transistor decouples the ‘write’ and ‘read’ operations using a ‘gate’ electrode to tune the conductance state through faradaic reactions involving Li+ or H+ ion injection into the channel electrode through a solid electrolyte. The insertion of cations into the bulk of the channel acts to dope the material through a gradual composition modulation that leads up to thousands of finely spaced conductance levels with near-ideal analog behavior,” they write.
But redox-transistor memory has not been demonstrated in a synaptic array. “Here, we enable parallel programming and state retention by integrating a polymer-based redox transistor and a volatile CBM to produce a non-volatile, addressable synaptic memory we call ionic floating gate memory (IFG). The three-terminal design allows the channel to be engineered for ultra-low current ‘read’ operations without sacrificing analog performance through diluting the conductive polymer in a polymeric insulator.”
The work also demonstrates the fast speed, high endurance, and low operating voltage critical for low-energy computing.