Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

November 25, 2013

Carver Mead on Quantum Computing and Neuromorphic Design

Tiffany Trader

Computer scientist, inventor, and physicist Carver Mead is perhaps best known for coining the phrase “Moore’s law,” helping to popularize Gordon Moore’s 1965 observation that the number of transistors on an integrated circuit doubles about every two years. Mead was also instrumental in the prediction’s tremendous staying power.

One of Mead’s most significant contributions to computing was a technique called very large-scale integration (VLSI), which enabled tens of thousands of transistors to be fitted onto a single silicon chip. In 1979, Mead taught the world’s first VLSI design course and created the first software compilation of a silicon chip. His 1980 textbook “Introduction to VLSI Systems,” coauthored with Lynn Conway, launched the Mead and Conway Revolution. Mead and his contemporaries set the stage for the “microchip revolution” in Silicon Valley. His methods of complex chip design have catalyzed decades of progress.

In the 1980s, Mead grew frustrated with the limits of traditional CPU design and turned to mammalian brains for inspiration. Three decades later, the field of neuromorphic computing is back in the spotlight with efforts like the Human Brain Project. Mead, now 79, holds a professor emeritus position at Caltech, where he taught for over forty years. In a recent interview with MIT Technology Review, Mead details why it’s important for computer engineers to explore new forms of computing.

In Mead’s view, one of the thorniest challenges for the chip industry is power dissipation. For decades now, the focus has been on faster and faster chips, but the heat issue can’t be ignored. Mead notes that “It’s a common theme in technology evolution that what makes a group or company or field successful becomes an impediment to the next generation. … Everyone was richly rewarded for making things run faster and faster with lots of power. Going to multicore chips helped, but now we’re up to eight cores and it doesn’t look like we can go much further. People have to crash into the wall before they pay attention.”

These limitations are what prompted his interest in neuromorphic designs. “I was thinking about how you would make massively parallel systems, and the only examples we had were in the brains of animals,” he tells MIT Technology Review. “We built lots of systems. We did retinas, cochleas—a lot of things worked. A lot of my students are still working on this. But it’s a much bigger task than I had thought going in.”

Mead is also directing his energy into developing a unified framework to explain both electromagnetic and quantum systems. This is summarized in his book Collective Electrodynamics. Mead is skeptical, yet supportive, of current quantum computing projects.

“We don’t know what a new electronic device is going to be. But there’s very little quantum about transistors,” he says. “I’m not close to it, but I’m generally supportive of these people doing what they call quantum computing. People have got into trying to build real things based on quantum coupling, and any time people try to build stuff that actually works, they’re going to learn a hell of a lot. That’s where new science really comes from.”

Mead’s viewpoint is refreshing and inspirational. He reminds us that all new technologies start small before becoming “part of the infrastructure that we take for granted.” Even “the transistor was [once] a tiny little wart off a big industry,” he quips.
