April 16, 2008

Artificial Intelligence milestone - computer beats Go Master in 9x9 game

During the Go Tournament in Paris, staged between 22 and 24 March 2008 by the French Go Federation (FFG), the MoGo artificial intelligence (AI) engine developed by INRIA -- the French National Institute for Research in Computer Science and Control -- running on a Bull NovaScale supercomputer, won a 9x9 game of Go against professional 5th Dan Catalin Taranu. This was the first ever officially sanctioned 'non-blitz' victory of a machine over a Go Master.
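MoGo belongs to the family of Monte-Carlo Go programs: rather than relying on a handcrafted evaluation function, it grows a search tree with the UCT (Upper Confidence bounds applied to Trees) rule and scores positions by playing out random games to the end. Below is a minimal illustrative sketch of UCT node selection; the class, constant, and function names are assumptions made for illustration, not MoGo's actual code.

import math

EXPLORATION = 1.4  # assumed exploration constant; real programs tune this

class Node:
    """One move in the search tree, with Monte-Carlo playout statistics."""
    def __init__(self, move):
        self.move = move
        self.visits = 0   # how many playouts passed through this node
        self.wins = 0     # how many of those playouts were won
        self.children = []

def uct_score(child, parent_visits):
    """UCB1 value: average playout result plus an exploration bonus."""
    if child.visits == 0:
        return float("inf")  # always try an untried move first
    exploit = child.wins / child.visits
    explore = EXPLORATION * math.sqrt(math.log(parent_visits) / child.visits)
    return exploit + explore

def select_child(parent):
    """Descend to the child that maximises the UCT score."""
    return max(parent.children, key=lambda c: uct_score(c, parent.visits))

The win rate of the random playouts stands in for the missing evaluation function that the paper quoted below identifies as the central obstacle in computer Go.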

Although Catalin Taranu beat the computer in a 19x19 game with a nine-stone handicap, the Go Master nevertheless rated the AI system as 'approaching Dan standard' in a performance that promises some formidable battles to come between man and machine. Dan standard corresponds to a rating of roughly 2100, versus 2830 for a professional 5th Dan player and 2940 for a professional 9th Dan player.

From a paper, Solving Go on a 3x3 Board Using Temporal-Difference Learning, by Choon Ngai Tay:
Computer Go is one of the biggest challenges faced by game programmers. One reason it remains unsolved is the enormous search space. The search space for a normal 19 x 19 Go board is estimated at 10^170 states, whereas the search space for chess is 10^50. The game tree for Go is estimated at 10^600 positions, compared with 10^123 for chess. However, search-space size is not the main reason: programs are also unsuccessful on the small 9 x 9 board, whose search space is only 10^40 with a game tree of 10^85. The main reason is that, until now, no one has derived an evaluation function that accurately describes intermediate Go states.
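For a sense of where those search-space figures come from: each of the n x n intersections can be empty, black, or white, so 3^(n^2) is a simple upper bound on the number of board positions (the true counts are lower, since not every arrangement is legal). A quick back-of-the-envelope check in Python, purely for illustration:

import math

# Upper bound on Go positions: each intersection is empty, black, or white.
# (Real state counts are smaller, since some arrangements are illegal.)
for size in (9, 13, 19):
    cells = size * size
    exponent = cells * math.log10(3)
    print(f"{size}x{size}: 3^{cells} is roughly 10^{exponent:.0f}")

# Prints roughly 10^39 for 9x9 and 10^172 for 19x19, in line with the
# 10^40 and 10^170 figures quoted above.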


The complexity of Go at different board sizes, from Wikipedia:


Board size    Game-tree complexity (at average game length)
9×9           7.6×10^85
13×13         3.2×10^200
19×19         3×10^511
21×21         1.3×10^661
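The game-tree figures in the table are of the form b^d, where b is the average branching factor (legal moves per turn) and d the average game length. A small sketch of that estimate follows; the branching factor and game length used in the chess sanity check are commonly cited approximations, not values from this article:

import math

def game_tree_exponent(branching, length):
    """Exponent e such that branching**length is roughly 10**e."""
    return length * math.log10(branching)

# Sanity check against the chess figure quoted earlier: an average
# branching factor of ~35 over ~80 moves gives a game tree of ~10^123 nodes.
print(game_tree_exponent(35, 80))  # ~123.5

# Go's branching factor (hundreds of legal moves) and much longer games
# push the same formula to the far larger exponents in the table above.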



TAO : Machine Learning and Optimisation site

Researchers at the University of Alberta helped improve the Go system; the University of Alberta is also home to the researchers who solved checkers.

FURTHER READING
A research paper on how the MoGo system works.

Website of one of the French researchers

Computer Go at Wikipedia

A research paper on the difficulties of programming Go

Complexity table for many games at Wikipedia

Statistics on even Go games (19x19 board)

Professional 4th, 5th, and 6th Dan players generally never lose to opponents ranked four or more grades below them; a professional 6th Dan, for example, essentially never loses to a professional 2nd Dan or lower. Stronger players also tend not to lose to even slightly weaker opponents: professional 6th Dan players rarely lose to professional 4th Dan players, and lose only about 15% of the time to professional 5th Dan players.

4 comments:

philw1776 said...

Chess, Go, etc. are not indications of AI. Huge compute power and very clever specialized algorithms, 'hard wired' via FPGAs or other custom circuitry just for those particular algorithms, amount to nothing more than specialized number crunching.

I look to the DARPA auto-nav contests for the best embodiments of AI, and regrettably even they are closer to glorified Go computers than to the navigation 'intelligence' of a horsefly seeking to drill a horse without being swatted.

bw said...

Solving these games is an old-school AI problem. The programs do not mimic human or biological ways of thinking or problem solving.

Anonymous said...

How do humans search this space? Probably by very efficiently eliminating wrong variants. It is hard to believe that games still exist in which a human is better than a computer. I wonder how much better? If a computer could "think" for trillions of years, would it still lose to a man?

Roko said...

I'd agree with philw1776 here. Narrow AI is indeed "old-school".