Solving (and attempting to solve) Rubik’s Cube has delighted millions of puzzle lovers since 1974, when the cube was invented by Hungarian sculptor and architecture professor Erno Rubik. Now, a group of UC Irvine researchers has developed a new algorithm – Autodidactic Iteration – that can solve Rubik’s Cube with no human assistance. The work is an advance in deep reinforcement learning (DRL), a form of machine learning that combines classic reinforcement learning with deep learning; here it is paired with Monte Carlo Tree Search (MCTS).
“Our algorithm is able to solve 100% of randomly scrambled cubes while achieving a median solve length of 30 moves — less than or equal to solvers that employ human domain knowledge,” write the researchers in their paper – Solving the Rubik’s Cube Without Human Knowledge – published in May on arXiv.org. They called the resulting solver, appropriately, DeepCube.
Rubik’s Cube belongs to a class of problems that has proven difficult for DRL: those with an enormous number of states and only a single reward state. The cube has approximately 4.3 × 10^19 possible configurations, and the scarcity of reward states makes it difficult to develop a solving strategy.
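That 4.3 × 10^19 figure follows from counting the cube’s legal arrangements of corner and edge pieces. The counting below is standard group theory, not code from the paper, but it serves as a quick sanity check of the arithmetic:

```python
from math import factorial

# 8 corner cubies can be permuted freely; their twist orientations must
# sum to 0 mod 3, so only 7 of the 8 twist choices are independent.
corners = factorial(8) * 3**7

# 12 edge cubies; their flip orientations must sum to 0 mod 2,
# leaving 2^11 independent choices.
edges = factorial(12) * 2**11

# Corner and edge permutations must share the same parity, halving the total.
total = corners * edges // 2

print(total)           # 43252003274489856000
print(f"{total:.1e}")  # 4.3e+19
```

Because of those orientation and parity constraints, only one in twelve of the naively countable arrangements is actually reachable by turning the faces.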
Rubik spent a month finding the first algorithm to solve the cube. “Since then, the Rubik’s Cube has gained worldwide popularity and many human-oriented algorithms for solving it have been discovered. These algorithms are simple to memorize and teach humans how to solve the cube in a structured, step-by-step manner,” write the paper’s authors – Stephen McAleer, Alexander Shmakov, Forest Agostinelli, and Pierre Baldi.
Machine learning approaches to solving the Rubik’s Cube have struggled to succeed without human help, relying instead on hand-engineered features and group theory to systematically find solutions. Here’s a brief description of how the researchers attacked the problem, excerpted from the paper:
“To overcome a sparse reward in a model-based environment with a large state space, we introduce Autodidactic Iteration (ADI): an algorithm inspired by policy iteration for training a joint value and policy network. ADI trains the value function through an iterative supervised learning process. In each iteration, the inputs to the neural network are created by starting from the goal state and randomly taking actions. The targets seek to estimate the optimal value function by performing a breadth-first search from each input state and using the current network to estimate the value of each of the leaves in the tree. Updated value estimates for the root nodes are obtained by recursively backing up the values for each node using a max operator. The policy network is similarly trained by constructing targets from the move that maximizes the value. After the network is trained, it is combined with MCTS to efficiently solve the Rubik’s Cube.”
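The target-construction step described in that excerpt can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors’ code: `apply_move`, `is_goal`, and `net_value` are hypothetical stand-ins for the cube environment and the current network, and the demo at the end substitutes a trivial one-dimensional toy puzzle for the cube.

```python
import random

def adi_targets(goal_state, moves, apply_move, is_goal, net_value, scramble_len=5):
    """Build one batch of ADI training targets by scrambling away from the goal.

    For each scrambled state, a depth-1 lookahead over all children yields a
    value target (max over children of reward + estimated value) and a policy
    target (the move achieving that max).
    """
    state = goal_state
    samples = []
    for _ in range(scramble_len):
        state = apply_move(state, random.choice(moves))  # random scramble step
        child_values = []
        for a in moves:
            child = apply_move(state, a)
            reward = 1.0 if is_goal(child) else -1.0     # sparse reward: goal only
            child_values.append(reward + net_value(child))
        y_v = max(child_values)           # value target (backed-up max)
        y_p = child_values.index(y_v)     # policy target (argmax move)
        samples.append((state, y_v, y_p))
    return samples

# Toy demo: states are integers, the goal is 0, moves shift by +/-1, and the
# "network" is a hand-made heuristic standing in for the learned value function.
samples = adi_targets(
    goal_state=0,
    moves=[-1, +1],
    apply_move=lambda s, a: s + a,
    is_goal=lambda s: s == 0,
    net_value=lambda s: -abs(s),
)
for state, y_v, y_p in samples:
    print(state, y_v, y_p)
```

In the paper, these (state, value, policy) targets feed an iterative supervised update of the joint value-and-policy network, and the trained network then guides MCTS at solve time.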
The network was trained using ADI for 2,000,000 iterations. “The network witnessed approximately 8 billion cubes, including repeats, and it trained for a period of 44 hours. Our training machine was a 32-core Intel Xeon E5-2620 server with three NVIDIA Titan XP GPUs,” reported the researchers.
An account of the work also appears in an Nvidia blog posted this week, “With a Few Twists and Turns, Rubik’s Cube May Unlock a Breakthrough in AI,” written by Tony Kontzer. Here’s an excerpt from that blog, including a quote from the lead researcher:
“People complain that deep learning is a black box, and that they don’t know what the network is doing,” Baldi said. “We could see that the network was learning mathematics.”…The research team’s discovery — that a deep learning model can be used to teach a machine how to do math, in this case the algebraic concept known as group theory — is what Baldi called a “small step in the grand challenge of AI.”
Link to Nvidia blog: https://blogs.nvidia.com/blog/2018/07/23/rubiks-cube-ai-breakthrough/
Link to paper: https://arxiv.org/pdf/1805.07470.pdf