One Giant Leap for AI, One Small Step for Man

The deed is done. Another intellectual stronghold of humans has gone the way of the dodo. Just last night, Google's AlphaGo program won a match against one of the world's leading Go players.

This comes less than twenty years after IBM accomplished a similar feat in chess, beating the world champion in a now-famous match with its Deep Blue chess computer. Around that time, I had just finished my Ph.D. and was looking for a job in artificial intelligence (AI), so IBM was an obvious candidate. I was an active chess player myself when the Deep Blue match took place, and like most players, I was not particularly happy to see a piece of silicon take away the grace of chess. I felt especially insulted by the way Deep Blue had won the match. To me, it all came down to a brute-force search for the best move among all possible move sequences, an approach that one would not associate with intelligence at all. I discussed this with one of the IBM managers, who downplayed the brute-force aspect of the problem and insisted that their software had shown intelligent behavior by beating an accomplished human being at an apparently rational activity. I did not join IBM, but I think everybody has to admit that Deep Blue was a major accomplishment.
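
To make the brute-force idea concrete, here is a minimal sketch of exhaustive minimax search. It is my own toy illustration, not Deep Blue's code: to keep it self-contained it plays Nim (take one, two, or three stones; whoever takes the last stone wins) instead of chess, and all the function names are invented for this example. The point is simply that the program walks through every possible move sequence and picks the move that guarantees the best outcome.

```python
def legal_moves(stones):
    """In this toy game, a player may take one, two, or three stones."""
    return [n for n in (1, 2, 3) if n <= stones]


def minimax(stones, maximizing):
    """Exhaustively explore every move sequence, with no pruning at all.

    Returns +1 if the maximizing player can force a win from this
    position, and -1 otherwise.
    """
    if stones == 0:
        # The previous player took the last stone and won the game.
        return -1 if maximizing else +1
    scores = [minimax(stones - n, not maximizing) for n in legal_moves(stones)]
    return max(scores) if maximizing else min(scores)


def best_move(stones):
    """Pick the move whose resulting position scores best for us."""
    return max(legal_moves(stones), key=lambda n: minimax(stones - n, False))


if __name__ == "__main__":
    # With ten stones on the table, taking two forces a win.
    print(best_move(10))
```

In chess, the same idea explodes combinatorially, which is why Deep Blue needed special-purpose hardware and aggressive pruning heuristics on top of it.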

The game of Go has proved more resilient, and human players have kept the upper hand for longer. Some say this is because of the larger number of possible moves in each position, the so-called branching factor: a typical chess position offers roughly 35 legal moves, while a typical Go position offers around 250, which pushes the number of potential move sequences to an astronomically high figure that is intractable even for modern computers. Others say that humans dominated Go for longer simply because less time and fewer resources had been devoted to conquering it. Be that as it may, AlphaGo works differently from Deep Blue. While it still relies on a tree search component, it also applies a neural network to find candidate moves. The neural network reduces the number of choices in a given position by pre-selecting the most promising candidate moves, which sets AlphaGo apart from a brute-force exhaustive search. Deep Blue most certainly used many rules and heuristics to cut down on the number of candidate moves, but the use of a neural network strikes me as more powerful. The network that AlphaGo uses has been trained on hundreds of thousands of Go games played between world-class players. In addition, AlphaGo keeps learning from the games it regularly plays against itself. So yes, progress has been made, and the latest victim is Lee Sedol, the professional Go player who succumbed to AlphaGo last night.
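
To give a feel for what that pre-selection buys you, here is a rough sketch, again my own illustration rather than AlphaGo's actual code. The `toy_policy` function below is a random stand-in for the trained policy network, which in reality assigns each legal move a probability of being the expert choice; the search then expands only the few highest-rated moves instead of all of them.

```python
import random

TOP_K = 3         # candidate moves kept per position after pre-selection
BRANCHING = 250   # roughly the number of legal moves in a Go position


def toy_policy(moves):
    """Stand-in for the trained policy network: assigns each legal move a
    score. Here the scores are random; AlphaGo's network produces a
    probability distribution learned from expert games and self-play."""
    return {move: random.random() for move in moves}


def candidate_moves(moves):
    """Keep only the TOP_K moves the (toy) policy rates highest."""
    scores = toy_policy(moves)
    return sorted(moves, key=scores.get, reverse=True)[:TOP_K]


def count_positions(depth, prune):
    """Count the leaf positions a `depth`-ply search visits,
    with and without policy-based pre-selection of moves."""
    if depth == 0:
        return 1
    moves = list(range(BRANCHING))               # abstract move labels
    chosen = candidate_moves(moves) if prune else moves
    return sum(count_positions(depth - 1, prune) for _ in chosen)


if __name__ == "__main__":
    print("exhaustive, 2 plies:   ", count_positions(2, prune=False))  # 62,500
    print("policy-pruned, 2 plies:", count_positions(2, prune=True))   # 9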

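```

The real system combines this kind of pre-selection with a value network and Monte Carlo tree search rather than the fixed-depth counting shown here, but the sketch conveys why pruning with a learned policy cuts the tree down so dramatically.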
The artificial intelligence community is now hoping that the approach followed by AlphaGo can be applied to other areas so that this new form of artificial intelligence can help humans instead of just trying to beat them. Deep Blue was very much tailor-made for beating the world chess champion, but outside the chess domain, there was little it could do. Time will tell if AlphaGo will lead us to new horizons, or if it will share the same fate as Deep Blue, which was dismantled shortly after its game-changing match with the world chess champion.

 
