August 25, 2011

The Rise of the Thinking Machine

Michael Feldman

Brain-centric computing is having a pretty good year.

This year has seen some notable advancements in computer-based brain mimicry, not just on the artificial intelligence (AI) front, but also related to in silico brain simulations.

Watson’s vanquishing of Jeopardy champions Brad Rutter and Ken Jennings in February set the stage for the year.  The now world-famous IBM super exhibited a sophisticated understanding of language semantics along with the ability to integrate that understanding into a complex analytics engine.  Since the Jeopardy match, IBM has been looking to take the technology into the commercial realm, most notably in the health care arena. 

Meanwhile projects like FACETS (Fast Analog Computing with Emergent Transient States) and SpiNNaker are working to uncover the nature of the brain at the level of the neuron.  The goal here is not to create any kind of artificial intelligence system a la Watson, but rather to simulate the neuronal network of the brain for basic science research.

SpiNNaker, a multi-year project run out of the UK at the University of Manchester, is also attempting to map the brain’s low-level biological structure and function. In June, the project received its first batch of custom-built ARM processors that will eventually power a 50,000-node neural network supercomputer.

The FACETS project, managed by the University of Heidelberg, actually wrapped up last year. Its sequel, the BrainScaleS project, booted up in January 2011, with the idea of developing a “brain-inspired computer architecture” based on custom-designed neural network hardware.  BrainScaleS has links to Henry Markram’s famous Blue Brain work.

Blue Brain, based at the École Polytechnique Fédérale de Lausanne (EPFL), is perhaps the best-known of the brain mimicry projects. The idea is to perform detailed simulations of the brain at the scale of the neuronal network.  In this case though, the work was done with conventional supercomputing hardware (if you can call Blue Gene conventional). The project has successfully simulated a rat cortical column.

The follow-on to Blue Brain, also headed by Markram, is the Human Brain Project. The goal here is to move from rats to humans and simulate the entire brain.

The other bookend to the Watson AI story is also from IBM. Last week, the company unveiled its cognitive computing chips.  This is basic research as well, but IBM is aiming the technology at developing thinking machines, rather than just using it to elucidate the workings of the brain.

I queried Markram about the significance of IBM’s latest chippery, and he responded thusly: “This is a very important technology step. There are still many challenges ahead, but neuromorphic chips like IBM’s are bound to become key processing units in hybrid architectures of future computers.”  He also recognized the work at FACETS/BrainScaleS and SpiNNaker as contributing to this growing body of knowledge.

So what does it all mean?  For those of you who read about such developments in the popular press, there has been plenty of speculation about the future of artificial brains.  A lot of this is centered on how such technology will impact the human condition, particularly how intelligent computers will displace human labor.

The big question is whether such technology will ultimately benefit people or merely make them superfluous.  Edward Tenner, a historian of technology and culture with a Ph.D. in European history, believes it will be the former.  From a piece he penned in The Atlantic:

Will people be obsolete? I doubt it. The economic theory of comparative advantage explains why. Assuming there will still be people, even if the computers are running everything, it will pay for them to let people do what they are relatively better at. There’s likely to be a higher opportunity cost for computers to do the more intuitive analysis for which the human brain-body system has evolved than to concentrate on tasks at which their abilities are an even higher multiple of people’s. In the case of computers and people, as I suggested about IBM’s Watson and Jeopardy!, there will always be elements of tacit knowledge and common sense that will be extremely expensive to achieve electronically.

His premise is that it will always be cheaper and more effective to have a real live human provide answers that involve intuition.  “So even if, for example, computers surpass physicians on diagnostic reasoning,” he writes, “it will be cheaper, more effective, and safer to have their judgment double-checked by a real doctor.”
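Tenner’s comparative-advantage logic can be made concrete with a toy calculation (the numbers below are purely hypothetical, chosen only for illustration): even if a computer is absolutely better at every task, each intuitive task it takes on costs it more forgone routine work than the same task costs a human, so the cheapest arrangement still assigns intuition to people.

```python
# Toy comparative-advantage sketch with made-up output rates (tasks per hour).
# The computer is absolutely better at both kinds of work:
computer = {"routine": 100, "intuitive": 4}
human = {"routine": 10, "intuitive": 2}

# Opportunity cost of one intuitive task, measured in routine tasks forgone:
cost_computer = computer["routine"] / computer["intuitive"]  # 25 routine tasks
cost_human = human["routine"] / human["intuitive"]           # 5 routine tasks

# The human gives up far fewer routine tasks per intuitive task, so total
# output is higher when the human handles intuition and the computer handles
# routine work -- despite the computer being faster at both.
assert cost_human < cost_computer
```

This is only a cartoon of the argument, of course; Tenner’s point is that the multiplier on “routine” work keeps growing while the electronic cost of tacit knowledge stays stubbornly high.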

Maybe.  But I think one of the article’s commenters nailed it pretty well when he suggested that the real question is not whether computers will replace all labor, but how many jobs will be displaced by intelligent machines and how that impacts our traditional economic model.  He writes:

In classical economics, employers furnish the capital, and workers produce raw materials and finished goods or services.  There is tension between worker and management: both need each other, but both want a bigger piece of the profits from work; each has a strong bargaining position, and the compromise they reach determines wages and benefits.  But what’s playing out on the world stage isn’t classical economics at all.  With every passing year, owners of capital are relying less on workers and more on machines.  The balance has shifted in favor of owners of capital.

We don’t have to wait for the future to see this play out.  It’s been happening for decades, as businesses large and small have adopted IT.  The commenter notes that multinational tech manufacturer Foxconn will be shedding a million of its million and a half workers manufacturing circuit boards over the next two years, thanks to assembly line robotics.

We’ve certainly seen similar downsizing across the manufacturing sector in general. A century ago, the same process happened in agriculture, a sector whose labor base continues to decline.  It’s not that the industries are shrinking, just their labor force.

With the introduction of more sophisticated computing, machines are moving higher up the food chain. For example, over the last three decades at JP Morgan, profitability has risen by a factor of 30, but employee head count has only doubled. That’s directly attributable to computer technology raising productivity.

The advent of really intelligent machines like Watson and its neuromorphic brethren will accelerate all this, in ways we can only imagine.  Even industries that are enjoying relatively rapid job growth today, like professional services, education, and health care, will eventually be impacted.

From my perspective, the key problem is that our social and economic systems are not ready for this.  While everyone is fixated on globalization, I think that’s a side show compared to what will happen — and is happening — as intelligent technology displaces human labor worldwide.

It’s not just that people who have invested years of specialized training will find their jobs threatened.  As the commenter noted above, the balance between capital and labor is shifting rapidly in favor of capital as the labor force is squeezed into fewer and fewer jobs that resist automation.  The hope is that other industries will emerge to engage the masses again, as happened after the agricultural and industrial revolutions.  But this time may be different.
