
Google DeepMind’s AlphaGo AI system wins first round against top human Go player

Lee Sedol vs. AlphaGo on day one of five-day match. Image: Google

9 March 2016

Google DeepMind’s AlphaGo artificial intelligence system has won the first game in a cliffhanger Go match with top player Lee Sedol, raising fresh questions about whether machines can be programmed to overtake humans in intellectual capabilities.

The match has been billed as a major challenge for a computer in a game of tremendous complexity, following in the footsteps of IBM Deep Blue’s now legendary 1997 chess victory over Garry Kasparov and the 2011 win in the Jeopardy quiz show by Watson, another computer from Big Blue.

AlphaGo, playing through a human assistant who placed on the board the moves it displayed on a computer screen, won the first of the five games it is playing against Lee in Seoul, South Korea, on Wednesday after the player resigned. The winner of the match stands to gain $1 million in prize money, which Google DeepMind has promised to donate to charity if AlphaGo wins.

The game, watched by a large number of people online, was accompanied by dark humour about the future of mankind in a world dominated by intelligent computers. “Save world Lee Sedol,” one viewer wrote in the YouTube chat while waiting for the video stream to begin. Another warned of “10 minutes to begin the end of mankind” ahead of the start of play.

AlphaGo beat three-time European Go champion Fan Hui 5-0 in October, a result that encouraged Google DeepMind researchers to take on Lee, a South Korean player who has won several major tournaments over the past decade in a game largely dominated by South Korea, Japan, China and Taiwan.

AlphaGo played a defensive game in the October match, in which the European champion made some mistakes, but in Seoul both players were very aggressive, said Michael Redmond, a professional Go player commentating on the event, during the course of the game. The AlphaGo program has grown stronger since the October games, he added.

Invented in China more than 2,500 years ago, the board game Go has presented a particularly significant challenge for AI systems because of its “enormous search space and the difficulty of evaluating board positions and moves,” according to researchers.
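
For a sense of the scale involved, here is a rough back-of-the-envelope calculation using the approximate branching factor and game length figures cited in DeepMind’s published research; the numbers are illustrative assumptions, not taken from this article.

```python
import math

# Approximate branching factor (b) and game length (d) figures cited in
# DeepMind's published research -- illustrative assumptions, not numbers
# taken from this article.
go_b, go_d = 250, 150        # Go: roughly 250 legal moves per turn, 150 moves per game
chess_b, chess_d = 35, 80    # chess: roughly 35 legal moves per turn, 80 moves per game

# b**d possible move sequences, expressed as a power of ten
print(f"Go:    ~10^{go_d * math.log10(go_b):.0f} move sequences")
print(f"Chess: ~10^{chess_d * math.log10(chess_b):.0f} move sequences")
```

The exact figures matter less than the gap in scale between the two games.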

Players take turns placing black or white stones on a 19×19 grid of lines, trying to capture the opponent’s stones by surrounding them and to enclose more empty space as territory. Go is a game primarily about intuition and feel rather than brute calculation, which makes it hard for computers to play well, according to Demis Hassabis, CEO and co-founder of Google DeepMind, the British AI company that Google acquired in 2014.
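
As a concrete illustration of the capture rule described above, a minimal sketch might look like the following; the board encoding and function names are mine for illustration, not anything described in the article or used by AlphaGo.

```python
# Minimal sketch of the capture rule: a connected group of stones is captured
# when it has no liberties, i.e. no orthogonally adjacent empty points.
SIZE = 19
EMPTY, BLACK, WHITE = ".", "B", "W"

def neighbours(x, y):
    """Orthogonally adjacent points that lie on the 19x19 board."""
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < SIZE and 0 <= ny < SIZE:
            yield nx, ny

def group_and_liberties(board, x, y):
    """Flood-fill the group containing (x, y); return its stones and its liberties."""
    colour = board[y][x]
    group, liberties, frontier = {(x, y)}, set(), [(x, y)]
    while frontier:
        cx, cy = frontier.pop()
        for nx, ny in neighbours(cx, cy):
            if board[ny][nx] == EMPTY:
                liberties.add((nx, ny))
            elif board[ny][nx] == colour and (nx, ny) not in group:
                group.add((nx, ny))
                frontier.append((nx, ny))
    return group, liberties

# A lone white stone surrounded on all four sides by black has no liberties,
# so it would be removed from the board (captured).
board = [[EMPTY] * SIZE for _ in range(SIZE)]
board[3][3] = WHITE
for nx, ny in neighbours(3, 3):
    board[ny][nx] = BLACK
_, libs = group_and_liberties(board, 3, 3)
print("captured" if not libs else f"{len(libs)} liberties left")
```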

Google DeepMind claims that AlphaGo’s search algorithm is more human-like than Deep Blue’s. The chess-playing computer searched by brute force through thousands of times more positions than AlphaGo, which instead looks ahead by playing out the remainder of the game in its imagination, using a technique known as Monte Carlo tree search, according to Google DeepMind. AlphaGo is said to improve on previous Monte Carlo programs by using deep neural networks to guide its search.
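
To make that contrast more concrete, below is a heavily simplified sketch of a tree search in that spirit, where move selection is biased by a “policy” prior and leaf positions are scored by a “value” estimate. The toy take-away game, the uniform prior and the random-rollout value are stand-ins of my own; AlphaGo’s actual networks and search parameters are not reproduced here.

```python
# Simplified Monte Carlo tree search guided by stand-in "policy" and "value"
# estimates, applied to a toy game: players alternately remove 1 or 2 stones
# from a pile, and whoever takes the last stone wins.
import math
import random

def legal_moves(pile):
    return [m for m in (1, 2) if m <= pile]

def policy_prior(pile):
    """Stand-in for a policy network: a uniform prior over legal moves."""
    moves = legal_moves(pile)
    return {m: 1.0 / len(moves) for m in moves}

def rollout_value(pile):
    """Stand-in for a value network: score a random playout for the player to move."""
    turn = 0  # 0 = the player whose value we are estimating
    while pile > 0:
        pile -= random.choice(legal_moves(pile))
        turn ^= 1
    return 1.0 if turn == 1 else -1.0  # whoever moved last took the final stone

class Node:
    def __init__(self, pile, prior):
        self.pile, self.prior = pile, prior
        self.visits, self.value_sum = 0, 0.0
        self.children = {}  # move -> Node

    def q(self):
        """Average value from the perspective of the player to move at this node."""
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct):
    """PUCT-style choice: exploit the child's value, explore guided by its prior."""
    def score(child):
        return -child.q() + c_puct * child.prior * math.sqrt(node.visits) / (1 + child.visits)
    return max(node.children.items(), key=lambda mc: score(mc[1]))

def search(pile, simulations=400, c_puct=1.5):
    root = Node(pile, prior=1.0)
    for _ in range(simulations):
        node, path = root, [root]
        # selection: descend while the node has already been expanded
        while node.children:
            _, node = select_child(node, c_puct)
            path.append(node)
        # expansion: add children for non-terminal leaves, using the policy prior
        if node.pile > 0:
            for move, p in policy_prior(node.pile).items():
                node.children[move] = Node(node.pile - move, prior=p)
        # evaluation: estimate the leaf's value for its player to move
        value = rollout_value(node.pile)
        # backup: propagate the value up the path, flipping sign at each level
        for n in reversed(path):
            n.visits += 1
            n.value_sum += value
            value = -value
    return max(root.children.items(), key=lambda mc: mc[1].visits)[0]

# With 7 stones left, taking 1 (leaving a multiple of 3) is the winning move,
# and the search typically settles on it.
print("suggested move: take", search(7), "stone(s)")
```

The point mirrored in the sketch is that the prior concentrates the search on promising moves while the value estimate stands in for exhaustively playing out every line, which is what allows such a search to consider far fewer positions than a brute-force program.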

IDG News Service
