August 25, 2011
This year has seen some notable advancements in computer-based brain mimicry, not just on the artificial intelligence (AI) front, but also related to in silico brain simulations.
Watson's vanquishing of Jeopardy champions Brad Rutter and Ken Jennings in February set the stage for the year. The now world-famous IBM super exhibited a sophisticated understanding of language semantics along with the ability to integrate that understanding into a complex analytics engine. Since the Jeopardy match, IBM has been looking to take the technology into the commercial realm, most notably in the health care arena.
Meanwhile projects like FACETS (Fast Analog Computing with Emergent Transient States) and SpiNNaker are working to uncover the nature of the brain at the level of the neuron. The goal here is not to create any kind of artificial intelligence system a la Watson, but rather to simulate the neuronal network of the brain for basic science research.
SpiNNaker, a multi-year project run out of the University of Manchester in the UK, is also attempting to map the brain's low-level biological structure and function. In June, the project received its first batch of custom-built ARM processors, which will eventually power a 50,000-node neural network supercomputer.
The FACETS project, managed by the University of Heidelberg, actually wrapped up last year. Its sequel, the BrainScaleS project, booted up in January 2011 with the idea of developing a "brain-inspired computer architecture" based on custom-designed neural network hardware. BrainScaleS has links to Henry Markram's famous Blue Brain work.
Blue Brain, based at the École Polytechnique Fédérale de Lausanne (EPFL), is perhaps the best-known of the brain mimicry projects. The idea is to perform detailed simulations of the brain at the scale of the neuronal network. In this case though, the work was done with conventional supercomputing hardware (if you can call Blue Gene conventional). The project has successfully simulated a rat cortical column.
The follow-on to Blue Brain, also headed by Markram, is the Human Brain Project. The goal here is to move from rats to humans and simulate the entire brain.
The other bookend to the Watson AI story is also from IBM. Last week, the company unveiled its cognitive computing chips. This is basic research as well, but IBM is aiming the technology at developing thinking machines, rather than just using it to elucidate the workings of the brain.
I queried Markram about the significance of IBM's latest chippery, and he responded thusly: "This is a very important technology step. There are still many challenges ahead, but neuromorphic chips like IBM's are bound to become key processing units in hybrid architectures of future computers." He also recognized the work at FACETS/BrainScaleS and SpiNNaker as contributing to this growing body of knowledge.
So what does it all mean? For those of you who read about such developments in the popular press, there has been plenty of speculation about the future of artificial brains. Much of it centers on how such technology will impact the human condition, particularly how intelligent computers will displace human labor.
The big question is whether such technology will ultimately benefit people or merely make them superfluous. Edward Tenner, a historian of technology and culture with a Ph.D. in European history, believes it will be the former. From a piece he penned in The Atlantic:
Will people be obsolete? I doubt it. The economic theory of comparative advantage explains why. Assuming there will still be people, even if the computers are running everything, it will pay for them to let people do what they are relatively better at. There's likely to be a higher opportunity cost for computers to do the more intuitive analysis for which the human brain-body system has evolved, so they will concentrate on tasks at which their abilities are an even higher multiple of people's. In the case of computers and people, as I suggested about IBM's Watson and Jeopardy!, there will always be elements of tacit knowledge and common sense that will be extremely expensive to achieve electronically.
His premise is that it will always be cheaper and more effective to have a real live human provide answers that involve intuition. "So even if, for example, computers surpass physicians on diagnostic reasoning," he writes, "it will be cheaper, more effective, and safer to have their judgment double-checked by a real doctor."
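Tenner's comparative-advantage argument is easy to make concrete with a toy calculation. The numbers below are entirely hypothetical; they exist only to show how a computer can hold an absolute advantage in everything yet still leave the "intuition" work to humans once opportunity cost is accounted for:

```python
# Toy illustration of comparative advantage (all numbers hypothetical).
# Suppose a computer is 100x as productive as a human at rote analysis
# but only 2x as productive at intuitive judgment. The computer is
# absolutely better at both tasks, yet opportunity cost still favors
# letting the human handle judgment.

# Output per hour of work, in arbitrary units
output = {
    "computer": {"analysis": 100.0, "judgment": 2.0},
    "human":    {"analysis": 1.0,   "judgment": 1.0},
}

def opportunity_cost(agent, task, other_task):
    """Units of other_task forgone to produce one unit of task."""
    return output[agent][other_task] / output[agent][task]

# Cost of one judgment call, measured in analyses given up:
comp_cost  = opportunity_cost("computer", "judgment", "analysis")  # 50.0
human_cost = opportunity_cost("human",    "judgment", "analysis")  # 1.0

# The human's opportunity cost of judgment is far lower, so the human
# holds the comparative advantage in judgment despite having no
# absolute advantage in anything.
print(comp_cost, human_cost)
```

Every judgment call the computer makes costs fifty analyses it could have run instead; the human's costs one. That asymmetry, not raw capability, is what keeps the human employed in this sketch.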
Maybe. But I think one of the article's commenters nailed it when he suggested that the real question is not whether computers will replace all labor, but how many jobs will be displaced by intelligent machines and how that will impact our traditional economic model. He writes:
In classical economics, employers furnish the capital, and workers turn raw materials into finished goods or services. There is tension between worker and management: both need each other, but both want a bigger piece of the profits from work; each has a strong bargaining position, and the compromise they reach determines wages and benefits. But what's playing out on the world stage isn't classical economics at all. With every passing year, owners of capital are relying less on workers and more on machines. The balance has shifted in favor of owners of capital.
We don't have to wait for the future to see this play out. It's been happening for decades, as businesses large and small have adopted IT. The commenter notes that multinational tech manufacturer Foxconn will be shedding a million of its million and a half workers manufacturing circuit boards over the next two years, thanks to assembly line robotics.
We've certainly seen similar downsizing across the manufacturing sector in general. A century ago, the same process happened in agriculture, a sector whose labor base continues to decline. It's not that the industries are shrinking, just their labor force.
With the introduction of more sophisticated computing, machines are moving higher up the food chain. For example, over the last three decades at JP Morgan, profitability has risen by a factor of 30, but employee head count has only doubled. That's directly attributable to computer technology raising productivity.
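The arithmetic behind that productivity claim is worth spelling out. Here is a quick back-of-the-envelope check using only the two ratios cited above (the baseline values are placeholders; only the ratios matter):

```python
# Back-of-the-envelope check of the JP Morgan figures cited above.
# Baseline values are hypothetical; only the stated ratios matter.
profit_growth = 30.0     # profitability up 30x over three decades
headcount_growth = 2.0   # head count merely doubled

# Profit generated per employee grew by the ratio of the two:
productivity_gain = profit_growth / headcount_growth
print(productivity_gain)  # 15.0 -> profit per employee up 15-fold
```

A fifteen-fold rise in profit per employee over thirty years is the kind of gap that only capital deepening, in this case computing, can explain.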
The advent of really intelligent machines like Watson and its neuromorphic brethren will accelerate all this, in ways we can only imagine. Even industries that are enjoying relatively rapid job growth today, like professional services, education, and health care, will eventually be impacted.
From my perspective, the key problem is that our social and economic systems are not ready for this. While everyone is fixated on globalization, I think that's a side show compared to what will happen -- and is happening -- as intelligent technology displaces human labor worldwide.
It's not just that people who have invested years of specialized training will find their jobs threatened. As the commenter noted above, the balance between capital and labor is shifting rapidly in favor of capital as the labor force is squeezed into fewer and fewer jobs that resist automation. The hope is that other industries will emerge to engage the masses again, as happened after the agricultural and industrial revolutions. But this time may be different.
Posted by Michael Feldman - August 25, 2011 @ 7:29 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.