DeepMind's mission is to build AGI. I think it's probably good if they have a buffered lead on all other efforts. That concern probably weighs on decisions about releasing code.
That would require some radically inconsistent thinking on their part. DeepMind does occasionally release source code and trained models for other projects, and the arms-race logic (https://www.fhi.ox.ac.uk/wp-content/uploads/Racing-to-the-pr...) would argue even more strongly for not releasing anything, even research (they're privately owned; they don't have to publish squat), and especially for not running stunts like the AlphaGo tournament, which cost millions of dollars to terrify and impress competitors and heat up the arms race.
A more parsimonious explanation is simply that it's great PR to maintain rigid control over the family jewels and dribble out occasional sample games and bits and pieces while pretending to be generous. (No one has ever accused Hassabis of being bad at PR or not knowing how to milk the media.)
The rationale for why a buffer would be good is described by Demis Hassabis here: https://youtu.be/h0962biiZa4?t=11m24s
...the main points are: there may be costly safety precautions along the way, and more "capitalistic" organizations may decide to shortcut those costs because of the winner-take-all scenario. DeepMind is at least nominally very committed to safety.
Releasing AlphaGo's source code would probably reduce DeepMind's buffer, which, in theory, would also reduce safety.