August 27, 2012

Benchmarking Computer Intelligence

Robert Gelber

Earlier this summer, the UK's University of Reading held a competition in memory of Alan Turing. Known as the father of artificial intelligence, Turing predicted that by the year 2000, computers would be able to hold conversations convincing enough to pass for human, at least some of the time. More than a decade into the new millennium, no system has successfully passed Turing's test of computer intelligence. That may change in the near future, as one team nearly accomplished the feat back in June. The Telegraph posted an article about the near success.

Alan Turing believed that if a computer could pass for human in conversation, it could be defined as intelligent. His test, originally titled "the imitation game," gave each conversation a five-minute time limit and required the system to fool at least 30 percent of the humans it spoke with.

During the university's "Turing test marathon," a program came very close to passing the mathematician's test. Named "Eugene," the application emulated a 13-year-old boy via a chat interface and fooled 29.2 percent of the humans it interacted with.
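The pass criterion itself is simple arithmetic. Below is a minimal sketch of the threshold check; the 30 percent bar and Eugene's 29.2 percent fool rate come from the article, but the judge counts used in the example are hypothetical.

```python
# Sketch of Turing's pass criterion as described above.
# Only the 30% threshold and the 29.2% fool rate are from the article;
# the specific judge counts below are illustrative assumptions.

PASS_THRESHOLD = 0.30  # Turing's bar: fool at least 30% of judges


def passes_turing_test(judges_fooled: int, total_judges: int) -> bool:
    """Return True if the fool rate meets Turing's 30% threshold."""
    return judges_fooled / total_judges >= PASS_THRESHOLD


# A fool rate of 29.2% (hypothetical counts: 292 of 1000 judges)
# falls just short of the 30% bar, as Eugene's did.
print(passes_turing_test(292, 1000))   # prints False
print(passes_turing_test(300, 1000))   # prints True
```

By this measure, Eugene missed passing by less than one percentage point.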

Programs like Eugene require advanced language processing abilities. Without this functionality, applications would be unable to figure out the context of a message and create an appropriate response. It’s this same capability that vaulted IBM’s Watson supercomputer to popularity.

Watson famously exhibited an ability to understand words in context when competing on Jeopardy. Combined with a massive knowledge base, that functionality let the system make quick work of all-time Jeopardy champ Ken Jennings.

Of course, IBM didn’t spend all that time and capital to create a game show killer. Watson has found a home in the medical industry, helping professionals at Sloan Kettering, WellPoint and other institutions. As the system ingests more medical data, its machine learning algorithms are expected to become more accurate at delivering patient care.

Watson's style of learning is similar to that of humans: as people receive new input, they typically store it in memory and learn to react appropriately in the future. In this case, though, Watson's users understand they are communicating with a computer, which automatically disqualifies it from the Turing test.

In the end, for a computer to pass Turing's test, it doesn't need to encapsulate human understanding and intelligence, just simulate it, or to put it another way, become an efficient liar. Of course, that may not be so different from being human after all.


Full story at The Telegraph