Google’s Chess Experiments Reveal How to Boost the Power of AI

WHEN COVID-19 SENT people home in early 2020, the computer scientist Tom Zahavy rediscovered chess. He had played as a kid and had recently read Garry Kasparov’s Deep Thinking, a memoir of the grandmaster’s 1997 matches against IBM’s chess-playing computer, Deep Blue. He watched chess videos on YouTube and The Queen’s Gambit on Netflix.

Despite his renewed interest, Zahavy wasn’t looking for ways to improve his game. “I’m not a great player,” he said. “I’m better at chess puzzles”—arrangements of pieces, often contrived and unlikely to occur during a real game, that challenge a player to find creative ways to gain the advantage.

The puzzles can help players sharpen their skills, but more recently they’ve helped reveal the hidden limitations of chess programs. One of the most notorious puzzles, devised by the mathematician Sir Roger Penrose in 2017, puts stronger black pieces (such as the queen and rooks) on the board, but in awkward positions. An experienced human player, playing white, could readily steer the game into a draw, but powerful computer chess programs would say black had a clear advantage. That difference, Zahavy said, suggested that even though computers could defeat the world’s best human players, they couldn’t yet recognize and work through every kind of tough problem. Since then, Penrose and others have devised sprawling collections of puzzles that computers struggle to solve.

Chess has long been a touchstone for testing new ideas in artificial intelligence, and Penrose’s puzzles piqued Zahavy’s interest. “I was trying to understand what makes these positions so hard for computers when at least some of them we can solve as humans,” he said. “I was completely fascinated.” The fascination soon evolved into a professional interest: As a research scientist at Google DeepMind, Zahavy explores creative problem-solving approaches. The goal is to devise AI systems with a spectrum of possible behaviors beyond performing a single task.

A traditional AI chess program, trained to win, may not make sense of a Penrose puzzle, but Zahavy suspected that a program made up of many diverse systems, working together as a group, could make headway. So he and his colleagues developed a way to weave together multiple (up to 10) decision-making AI systems, each optimized and trained for different strategies, starting with AlphaZero, DeepMind’s powerful chess program. The new system, they reported in August, played better than AlphaZero alone, and it showed more skill—and more creativity—in dealing with Penrose’s puzzles. These abilities came, in a sense, from self-collaboration: If one approach hit a wall, the program simply turned to another.
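The fallback behavior described above—consult a pool of differently trained policies and switch when the current one gets stuck—can be sketched in a few lines. This is a hypothetical illustration, not DeepMind’s actual system: the `Policy` type, the toy policies, and the “stuck means returns None” convention are all assumptions made for the example.

```python
# A minimal sketch of a "diverse ensemble" of decision-making policies:
# try each policy in turn and fall back to another when one hits a wall.
from typing import Callable, List, Optional

# A policy maps a position description to a suggested move,
# or None when it cannot make progress (it has "hit a wall").
Policy = Callable[[str], Optional[str]]

def ensemble_move(position: str, policies: List[Policy]) -> Optional[str]:
    """Return the first concrete suggestion from the pool of policies."""
    for policy in policies:
        move = policy(position)
        if move is not None:
            return move
    return None  # every approach in the pool is stuck

# Toy stand-ins for differently trained variants of one base program.
def aggressive(position: str) -> Optional[str]:
    # Gives up on closed positions; otherwise grabs material.
    return None if "locked" in position else "Qxh7"

def fortress_aware(position: str) -> Optional[str]:
    # Always has a quiet waiting move available.
    return "Kg2"

print(ensemble_move("locked pawn chain", [aggressive, fortress_aware]))  # → Kg2
```

Here the aggressive policy declines the closed position, so control passes to the fortress-aware one—the same kind of hand-off the article credits for the ensemble’s extra resilience on Penrose-style puzzles.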

Source: https://www.wired.com/story/google-artificial-intelligence-chess/

Antoine Cully has built robots that can effectively brainstorm multiple different solutions to a given problem. COURTESY OF IMPERIAL COLLEGE LONDON
