Discussion of how to create general artificial intelligence, whether to create it, and whether it is even possible has simmered for years. Sticking to the how-to element, a wide variety of schemes and technologies have been and are being explored. A recent paper from a prominent researcher at the National Institute of Standards and Technology suggests that an optoelectronic strategy is the most likely approach to succeed in creating general AI.
Jeffrey Shainline, a scientist in the quantum nanophotonics group at NIST, argues in a perspective published last month in Applied Physics Letters, “It is the perspective of our group at NIST that hardware incorporating light for communication between electronic computational elements combined in an architecture of networked optoelectronic spiking neurons may provide potential for AGI at the scale of the human brain.”
General AI – sometimes called strong AI, full AI, or general intelligent action – is broadly used for the idea of a machine possessing sentience, self-awareness, and consciousness. Weak or narrow AI is typically used to describe more limited capabilities. (Today’s world, of course, is awash in technical and marketing buzz phrases incorporating ‘AI’.)
Leaving aside the “soft side” of AI, Shainline tackles the problem of scaling the necessary hardware infrastructure in terms of computation, networking, and memory. Leaning on brain-inspired spiking neural approaches, he gets into the weeds a bit.
Calling the effort “more akin to the construction of a fusion reactor or particle accelerator than a microchip,” Shainline wrote, “While there is much to be gained from artificial intelligence (AI) hardware at smaller scales, this article considers technological pathways to large cognitive systems, with tens to hundreds of billions of neurons, and communication infrastructure of commensurate complexity. Such technology will likely require many interconnected wafers, each packed densely with integrated circuits. We may refer to this field of research as neuromorphic supercomputing.”
Fascinating stuff.
Here’s the abstract from the paper:
“To design and construct hardware for general intelligence, we must consider principles of both neuroscience and very-large-scale integration. For large neural systems capable of general intelligence, the attributes of photonics for communication and electronics for computation are complementary and interdependent.
“Using light for communication enables high fan-out as well as low-latency signaling across large systems with no traffic-dependent bottlenecks. For computation, the inherent nonlinearities, high speed, and low power consumption of Josephson circuits are conducive to complex neural functions. Operation at 4 K enables the use of single-photon detectors and silicon light sources, two features that lead to efficiency and economical scalability.
“Here, I sketch a concept for optoelectronic hardware, beginning with synaptic circuits, continuing through wafer-scale integration, and extending to systems interconnected with fiber-optic tracts, potentially at the scale of the human brain and beyond.”
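For readers unfamiliar with the spiking-neuron model the proposal builds on, here is a minimal software sketch of a leaky integrate-and-fire (LIF) neuron, the standard abstraction behind spiking neural networks. To be clear, this illustrates only the computational model, not Shainline’s superconducting optoelectronic hardware; the function name and all parameter values are arbitrary choices for the example.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, illustrative only.
# Shainline's proposal implements this kind of dynamics in superconducting
# optoelectronic hardware; this software loop just shows the model.
# All parameters (dt, tau, thresholds) are arbitrary example values.

def simulate_lif(input_current, dt=1e-3, tau=20e-3,
                 v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    """Integrate input current; emit a spike when the membrane
    potential crosses threshold, then reset."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Leaky integration: potential decays toward rest, driven by input.
        v += dt / tau * (v_rest - v) + dt * i_in
        if v >= v_threshold:
            spike_times.append(step * dt)  # record spike time in seconds
            v = v_reset                    # reset after firing
    return spike_times

# A constant supra-threshold drive (1 second at 1 ms steps) produces
# a regular spike train.
drive = [60.0] * 1000
print(simulate_lif(drive))
```

In the proposed hardware, the leaky integration would be carried out by Josephson circuits, and the “spike” would be a pulse of light fanned out to many downstream synapses; the loop above only conveys the event-driven character of the computation.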
Pursuit of optoelectronics, using photonics for communications, is hardly new, even in mainstream computer research. Moving data has become a key bottleneck, and many companies are seeking to unlock photonics’ potential. For example, see HPCwire’s coverage of Nvidia’s work on photonics, “Crystal Ball Gazing at Nvidia: R&D Chief Bill Dally Talks Targets and Approach.” An interesting side note is that Dally is rather dismissive of spiking neural network approaches for use in computing.
Shainline’s paper, “Optoelectronic intelligence,” is best read directly.
Link to paper: https://aip.scitation.org/doi/10.1063/5.0040567