A university-industry research team is reporting a performance advance in neural network hardware: a chip with potential applications in image recognition for autonomous vehicles and robots.
The chip design relies on in-memory processing and the replacement of standard transistors with capacitors used to store electrical charges. That configuration helped reduce data movement, boosting the performance of the neural network accelerator.
Researchers at Princeton University teamed with chipmaker Analog Devices Inc. to fabricate the neural chip, which they claimed can outperform current neural network devices, according to recent testing.
Among the attributes of the mixed-signal convolutional neural network accelerator is the integration of storage and processing schemes designed to reduce data movement, the researchers reported in a paper published by the IEEE.
The researchers note that the performance of neural network accelerators is slowed by a communication bottleneck when combining, for example, computer vision inputs with “neuron weights,” the strengths of connections among interconnected units in a neural network. The Princeton engineers used a binarized neural network, that is, one with binary weights and input activations, to achieve neuron-weight storage and processing on a 65-nanometer mixed-signal accelerator slightly larger than a six-transistor SRAM cell.
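To see why binarization suits in-memory hardware, consider the core operation of such a network: a dot product where every weight and activation is constrained to +1 or -1. The sketch below is illustrative, not the chip's actual circuit; it shows the standard equivalence between the ±1 dot product and a digital XNOR-and-popcount, which is what makes binary weights so cheap to store and combine.

```python
# Illustrative sketch of a binarized dot product, the core operation
# a binarized neural network accelerator computes. Weights and
# activations are constrained to +1/-1 (sign binarization).

def binarize(x):
    """Map a real value to +1 or -1."""
    return 1 if x >= 0 else -1

def bnn_dot(weights, activations):
    """Dot product with binary (+1/-1) weights and activations."""
    return sum(w * a for w, a in zip(weights, activations))

def xnor_popcount_dot(w_bits, a_bits):
    """Same result computed digitally: encode +1 as bit 1, -1 as bit 0.
    Counting bitwise agreements (XNOR) gives dot = 2*agreements - n."""
    n = len(w_bits)
    agreements = sum(1 for w, a in zip(w_bits, a_bits) if w == a)
    return 2 * agreements - n

w = [binarize(v) for v in [0.3, -1.2, 0.8, -0.1]]   # [+1, -1, +1, -1]
a = [binarize(v) for v in [-0.5, -0.9, 2.0, 0.4]]   # [-1, -1, +1, +1]
w_bits = [1 if v > 0 else 0 for v in w]
a_bits = [1 if v > 0 else 0 for v in a]
assert bnn_dot(w, a) == xnor_popcount_dot(w_bits, a_bits)
```

Because each weight is a single bit, a weight can live in (or next to) a single memory cell, which is what lets the accelerator keep storage and processing in the same place.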
The tradeoff for chip designers in using more efficient in-memory processing, which crunches data where it is stored, is susceptibility to signal-to-noise problems, because so much information is crammed into those signals. Fluctuations in voltages and currents, for example, can corrupt information processed in memory, resulting in errors.
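The failure mode can be sketched with a toy model (an assumption for illustration, not the paper's noise analysis): an in-memory accumulation produces one analog value summarizing many binary products, and additive noise on that value can flip its sign, corrupting the result.

```python
# Toy model of how analog noise corrupts an in-memory accumulation.
# Many +1/-1 products are summed into a single analog quantity;
# Gaussian noise on the read-out can flip the sign of the result.
import random

random.seed(0)

def analog_mac(w, a, noise_sigma):
    """Ideal accumulate plus additive noise modeling voltage
    fluctuations on the shared read-out line."""
    ideal = sum(wi * ai for wi, ai in zip(w, a))
    return ideal + random.gauss(0.0, noise_sigma)

def error_rate(n_inputs, noise_sigma, trials=2000):
    """Fraction of random trials where noise flips the result's sign."""
    errors = 0
    for _ in range(trials):
        w = [random.choice((-1, 1)) for _ in range(n_inputs)]
        a = [random.choice((-1, 1)) for _ in range(n_inputs)]
        ideal = sum(wi * ai for wi, ai in zip(w, a))
        noisy = analog_mac(w, a, noise_sigma)
        if ideal != 0 and (noisy >= 0) != (ideal >= 0):
            errors += 1
    return errors / trials

# More noise on the shared signal means more corrupted results.
assert error_rate(64, noise_sigma=0.1) < error_rate(64, noise_sigma=5.0)
```

The model makes the designers' dilemma concrete: packing more information into one analog signal saves data movement, but shrinks the margin between correct and corrupted results.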
“Computation signal-to-noise-ratio has been the main barrier for achieving all the benefits in-memory computing can offer,” said co-author Naveen Verma, an associate professor of electrical engineering at Princeton.
The workaround was to use capacitors, which store electrical charge, instead of transistors to handle in-memory processing. Among their advantages is that chipmakers like Analog Devices, based in Norwood, Mass., have perfected fabrication techniques that yield precise devices largely immune to changes in voltage or temperature.
Capacitors are also very small, which allowed the Princeton researchers to place them inside memory devices, layering the in-memory processor on top of the memory cells and further shortening data paths. The result, they claimed, is faster, lower-power processing.
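Why precise capacitors help can be seen in an idealized charge-sharing model (an illustration of the general charge-domain technique, not the chip's specific circuit): each capacitor is charged according to one bit, then all are shorted together, and charge conservation makes the shared voltage a clean average of the inputs.

```python
# Idealized charge-sharing computation across N equal capacitors.
# Each capacitor is pre-charged to vdd (bit = 1) or 0 V (bit = 0);
# shorting them together conserves charge, so the shared voltage
# equals the average of the initial voltages: vdd * (popcount / N).

def charge_share(bits, vdd=1.0, cap=1e-15):
    """Return the settled voltage after charge redistribution."""
    total_charge = sum((vdd if b else 0.0) * cap for b in bits)
    total_cap = cap * len(bits)
    return total_charge / total_cap

v = charge_share([1, 1, 0, 1], vdd=1.0)
assert abs(v - 0.75) < 1e-12  # 3 of 4 bits set -> 0.75 V
```

The computation is accurate only to the extent the capacitors match: if one capacitor's value drifts, its bit is over- or under-weighted in the average, which is why fabrication precision matters so much here.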
Among the image recognition problems used to test the neural network accelerator chip were deciphering handwriting samples and parsing street-view home addresses. Another was using the neural network to recognize common objects ranging from cats and dogs to cars and aircraft.
Further benchmark testing included measuring the number of computations the chip could perform in one second, roughly the time required for a consumer device to answer a query. The Princeton researchers said their chip performed 9.4 trillion binary operations per second.
The next steps include making the neural network chip programmable and compatible with existing hardware. Also needed are new algorithms and software that would let AI developers build applications harnessing the in-memory approach. Those applications could include “energy- and latency-constrained” platforms used in autonomous systems or Internet of Things sensors.
“The chip’s major drawback is that it uses a very disruptive architecture,” Verma said. “That needs to be reconciled with the massive amount of infrastructure and design methodology we have and use today.”
Feature image source: Princeton University