Computer learns to play Civilization by reading the manual (extremetech.com)
95 points by thomas on July 14, 2011 | 15 comments



The source article is way more informative: http://web.mit.edu/newsoffice/2011/language-from-games-0712....



Or better yet, the conference paper: http://people.csail.mit.edu/regina/my_papers/civ11.pdf

Very interesting.


According to the paper, they used Freeciv because it is open source and they could modify the source code. Their source code is available here:

http://groups.csail.mit.edu/rbg/code/civ/


Not an expert in this area, but it's interesting stuff. Obviously, it's nothing like the headline's 'computer learns to play by reading the manual'. It sounds more like 'manual provides context hints to help guide search' (toy sketch of what I mean at the end of this comment).

I'm not sure the baseline AI of Civ is that hard to beat, so it's hard to evaluate how good the headline stat of 78% is. Anyone?

The other thing that gives me pause is that they performed their rollouts against the other AI player. That seems kind of unfair: the concern is that the system might be specialising to beat the built-in AI rather than learning to play the game.

Finally, did they perform their rollouts during a training phase of their model and then use just the fitted model during evaluation play, or did they continue to use the rollouts during evaluation? I presume it was the former, but I only skimmed the paper and couldn't find it stated explicitly.
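
To illustrate what I mean by 'context hints guiding search', here's a toy Python sketch (my own stub game, feature scheme, and function names, nothing from the paper): words shared between an action's name and manual sentences become features in a linear value function, that function biases which actions get rolled out, and rollout outcomes nudge the feature weights online.

    import random
    from collections import defaultdict

    MANUAL = ["build cities near rivers", "research pottery early"]

    def text_score(action, weights):
        # one feature per manual word that also appears in the action's name
        feats = [w for sent in MANUAL for w in sent.split() if w in action.split()]
        return sum(weights[f] for f in feats), feats

    def choose_action(actions, weights, n_rollouts=100, lr=0.1):
        totals = defaultdict(float)
        for _ in range(n_rollouts):
            # noisy greedy selection: text-derived value plus exploration noise
            _, a = max((text_score(a, weights)[0] + random.gauss(0, 0.5), a)
                       for a in actions)
            outcome = random.random() + (0.3 if "build" in a else 0.0)  # stub game
            totals[a] += outcome
            for f in text_score(a, weights)[1]:  # online update of feature weights
                weights[f] += lr * (outcome - 0.5)
        return max(totals, key=totals.get)  # most rewarding action over all rollouts

    weights = defaultdict(float)
    print(choose_action(["build cities", "fortify unit"], weights))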


You make good points, but there are a few factors you have to think about. First, the built-in AI has been designed by people with great knowledge of the game and its strategy. The fact that a totally generic AI system with zero knowledge of the game can learn on the fly and beat the hand-tuned AI is pretty impressive.

They were training online during gameplay, so each round started fresh with an untrained system.
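
A minimal sketch of that claim, with my own toy stubs rather than their code: parameters are re-initialized for every game and adapted online during play, so nothing learned carries over between games.

    from collections import defaultdict
    import random

    def play_one_game():
        weights = defaultdict(float)      # fresh, untrained system each game
        for turn in range(200):           # toy game of fixed length
            action = random.choice(["build", "attack"])  # stand-in for real decisions
            reward = random.random()      # stub outcome signal from rollouts
            weights[action] += 0.1 * (reward - 0.5)      # online update during play
        return dict(weights)

    for game in range(3):
        print(play_one_game())            # weights reset: nothing carries over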


>The fact that a totally generic AI system with zero knowledge of the game can learn on the fly and beat the hand-tuned AI is pretty impressive.

Yes, but Samuel's learning checker-playing programs did that back in the '50s and '60s.

>They were training online during gameplay, so each round started fresh with an untrained system.

I'm not really sure how that is considered training. Like Tim_Benham's comment, on a first read of the paper I would be concerned that the AI is beating the inbuilt AI through what could be termed 'save-game cheating'. And there's still the issue of whether it is overspecialised to exploit the inbuilt AI in some way.
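
A toy sketch of what I mean by 'save-game cheating', with an assumed interface that is nothing like Freeciv's actual API: if each rollout deep-copies the live game and lets the same built-in AI play the opponent out to the end, the agent is effectively reloading saves to preview that AI's responses, which no ordinary player could do.

    import copy, random

    class ToyGame:                        # stand-in for the real game state
        def __init__(self): self.turn, self.score = 0, 0
        def apply(self, move): self.turn += 1; self.score += move
        def builtin_ai_move(self): return random.choice([-1, 0])  # stub opponent
        def over(self): return self.turn >= 20

    def rollout(game, my_move):
        sim = copy.deepcopy(game)         # effectively "load a saved game"
        sim.apply(my_move)
        while not sim.over():
            sim.apply(sim.builtin_ai_move())  # previews the built-in AI's responses
        return sim.score

    g = ToyGame()
    print(max([1, 2], key=lambda m: sum(rollout(g, m) for _ in range(50))))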


There's nothing in the paper that makes it clear that their claimed results aren't the result of save-game cheating.


Finally, someone learns to RTFM.


Or more to the point: finally some thing learns.


What is the difference? Software is software, differences in hardware are boring and growing narrower.


Unlike lots of people, I RTM.


How do you know the manual affected how well the computer played? IMHO, put a machine learning algorithm to work on enough games and it isn't so unreasonable that it would do better than the default AI in at least some games.

They need a control where the computer doesn't have access to a manual and compare to that.


> They need a control where the computer doesn't have access to a manual and compare to that.

Which is exactly what they did: http://people.csail.mit.edu/regina/my_papers/civ11.pdf


They did, and using the manual improved performance considerably.



