The field of artificial intelligence has had a rocky history with numerous setbacks, but there have been high points too, like when IBM’s Deep Blue beat reigning world chess champion Garry Kasparov in 1997, or when another IBM machine, Watson, proved its mettle on the popular quiz show Jeopardy! in 2011. Now machine learning, and its offshoot deep learning, are becoming ubiquitous, propelled by innovative algorithms and the continued democratization of computing power.
Hearkening back to the historic Deep Blue match, a new kind of learning machine has again raised the stakes for AI by refining earlier brute force techniques with modern methods of deep learning, based on neural networks. As reported in MIT Technology Review, this is the first time that a machine has played chess by evaluating the board using something akin to human intuition.
As processor technology improves, game-playing engines get faster, but their modus operandi remains much the same: brute force that examines every possible move, paired with clever coding that encodes extensive gameplay knowledge. A human with an unlimited amount of (life)time(s) and enough ink and paper could conceivably do the same thing. But consider that Deep Blue was able to assess about 200 million positions per second compared to about three to five positions per second for Kasparov, yet their level of play was essentially the same. Human sophistication accounts for that far greater efficiency, and this quality is what AI aspires to capture.
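The brute-force approach can be sketched in a few lines (an illustration of the general technique, not Deep Blue’s actual code): a minimax search enumerates every continuation to a fixed depth, then scores the leaf positions with a hand-crafted evaluation function. The toy “game” below simply branches three ways per move, which is enough to show how the node count explodes with depth.

```python
def evaluate(position: int) -> int:
    # Stand-in for the thousands of lines of hand-tuned chess
    # knowledge a classic engine uses to score a leaf position.
    return -abs(position)

def legal_moves(position: int) -> list:
    # Toy game: every position has exactly three successors.
    return [position - 1, position + 1, position + 2]

def minimax(position: int, depth: int, stats: dict) -> int:
    """Exhaustively search every move sequence `depth` plies deep."""
    stats["nodes"] += 1
    if depth == 0:
        return evaluate(position)
    # Each side picks the move best for itself, so we negate
    # the score returned from the opponent's perspective.
    return max(-minimax(child, depth - 1, stats)
               for child in legal_moves(position))

stats = {"nodes": 0}
minimax(0, 4, stats)
print(stats["nodes"])  # 121 positions visited: 1 + 3 + 9 + 27 + 81
```

Even this tiny tree visits 121 positions at depth 4; with chess’s branching factor of roughly 35, depth grows the tree so fast that only raw speed (and aggressive pruning) makes deep search feasible.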
As described by its creator, Matthew Lai, a Master’s student at Imperial College London, the new chess engine, called Giraffe, “uses self-play to discover all its domain-specific knowledge, with minimal hand-crafted knowledge given by the programmer.” The engine is trained using a neural network, so called because it is loosely modeled on the human brain. In Lai’s view, artificial neural networks act as a substitute for human “intuition” and have the potential to be much more efficient than pure brute-force approaches.
The training takes about 72 hours on a machine with two 10-core Intel Xeon E5-2660 v2 CPUs. Training, which relies on actual game data, is fully parallelized, with linear speedup up to 20 threads. Once trained, Giraffe plays as well as current state-of-the-art chess engines, which contain thousands of lines of hand-crafted and tuned pattern recognizers.
Lai’s engine departs from conventional engines in three key areas:
• Statically evaluating positions — estimating how good a position is without looking further.
• Deciding which branches are most “interesting” in any given position, and should be searched further, as well as which branches to discard.
• Ordering moves — determining which moves to search before others, which significantly affects search efficiency.
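All three roles can be served by a single learned scorer. The sketch below is a toy stand-in for Giraffe’s network (the features, weights, and function names are hypothetical, for illustration only): a “position” is just three numbers, and one linear scorer handles static evaluation, branch selection, and move ordering.

```python
import math

# Pretend these weights were learned through self-play; in Giraffe
# they belong to a multi-layer neural network, not a linear model.
WEIGHTS = [1.0, -0.5, 0.25]

def static_eval(position):
    """Role 1: estimate how good a position is without searching further."""
    return sum(w * x for w, x in zip(WEIGHTS, position))

def branch_probabilities(children):
    """Role 2: a softmax over child scores says how 'interesting' each
    branch is; low-probability branches can be searched less or discarded."""
    scores = [static_eval(c) for c in children]
    peak = max(scores)
    exps = [math.exp(s - peak) for s in scores]  # shift for stability
    total = sum(exps)
    return [e / total for e in exps]

def order_moves(children):
    """Role 3: search the most promising continuations first."""
    return sorted(children, key=static_eval, reverse=True)

children = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
probs = branch_probabilities(children)
best_first = order_moves(children)
```

The design point is that one trained evaluator replaces three separate hand-crafted mechanisms: the same score that grades a leaf also decides where to spend search effort and in what order.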
“Our aim is to create a system that can perform high level feature extraction automatically, in order to not be constrained by human creativity — something that was not practical in the 1990s due to computational power constraints,” says Lai in his paper describing the project.
Lai further explains that previous attempts used machine learning only to perform parameter tuning on hand-crafted evaluation functions, whereas Giraffe’s learning system also performs automatic feature extraction and pattern recognition.
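The distinction Lai draws can be sketched as two pipelines (all feature choices and weights below are hypothetical, chosen only to make the contrast concrete): in the classic approach, experts write the features and learning merely tunes their weights; in a Giraffe-style approach, a hidden layer learns its own features from raw input.

```python
# (a) Classic approach: hand-crafted features, learned weights only.
def hand_crafted_features(board):
    # e.g. material balance, king safety — written by human experts.
    # Here the "board" is just a list of numbers for illustration.
    return [sum(board), board[0] - board[-1]]

def classic_eval(board, tuned_weights):
    return sum(w * f for w, f in zip(tuned_weights,
                                     hand_crafted_features(board)))

# (b) Learned-feature approach: a hidden layer maps raw input to
# features of its own, so learning shapes both stages.
def learned_eval(board, hidden_weights, output_weights):
    hidden = [max(0.0, sum(w * x for w, x in zip(row, board)))  # ReLU units
              for row in hidden_weights]
    return sum(w * h for w, h in zip(output_weights, hidden))

board = [1.0, 0.0, -1.0]
a = classic_eval(board, [0.5, 0.1])
b = learned_eval(board, [[1.0, 0.0, -1.0], [0.0, 1.0, 0.0]], [0.7, 0.3])
```

In (a), improving the engine means writing better features by hand; in (b), training can discover patterns no programmer anticipated, which is what makes the end-to-end approach less constrained by human creativity.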
With its enhancements, Giraffe plays chess at about the level of an FIDE International Master (top 2.2 percent of tournament chess players with an official rating). This makes it the most successful attempt thus far at using end-to-end machine learning to play chess, according to its creator.
“Unlike most chess engines in existence today, Giraffe derives its playing strength not from being able to see very far ahead, but from being able to evaluate tricky positions accurately, and understanding complicated positional concepts that are intuitive to humans, but have been elusive to chess engines for a long time. This is especially important in the opening and end game phases, where it plays exceptionally well.”