Every accomplishment will seem obvious in retrospect. Who here honestly thought neural networks were going to be a dominant force in board game AI? Sure, we cannot extrapolate from this, but that doesn't mean we aren't building up effective tools for AI.
"Who here honestly thought neural networks were going to be a dominant force in board game AI?"
The potential for neural networks to learn to play games by playing each other has been apparent at least since NeuroGammon (https://en.wikipedia.org/wiki/Neurogammon) in 1989 and, more directly, its self-play-trained successor TD-Gammon. The "playing each other" mechanism is key, because it allows large-scale training.
The question is which games happen to be structured so that so-called "credit assignment" works, i.e. so that you can learn which states lead to wins across a reasonable spectrum of (unknown, or unknowable) opponent plays.
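To make "credit assignment" concrete, here is a minimal sketch in Python. The game is entirely made up for illustration (a race to 10, standing in for anything Go-like): every state visited during a random self-play game gets labeled with the game's final outcome, and averaging those labels per state is the crudest way to learn which states lead to wins.

    import random
    from collections import defaultdict

    def self_play_game():
        """Play one game with random moves: players alternate adding 1 or 2
        to a running total; whoever reaches 10 first wins. Returns the list
        of (total, player_to_move) states visited and the winner (0 or 1)."""
        total, player, states = 0, 0, []
        while True:
            states.append((total, player))
            total += random.choice((1, 2))
            if total >= 10:
                return states, player       # the player who just moved wins
            player = 1 - player

    # Credit assignment, Monte Carlo style: every state in a finished game is
    # credited with the final result from the point of view of the side to move.
    sums, counts = defaultdict(float), defaultdict(int)
    for _ in range(20000):
        states, winner = self_play_game()
        for total, player in states:
            sums[(total, player)] += 1.0 if player == winner else -1.0
            counts[(total, player)] += 1

    for state in sorted(counts):
        print(state, round(sums[state] / counts[state], 2))   # crude value estimate per state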
According to the OP, it is known that you can do credit assignment in Go. ("For Go itself, it has been well-known for a decade that Monte Carlo tree search (i.e., valuation by assuming randomized playout) is unusually effective in Go.") I did not know that, but if so, it simplifies the Go problem somewhat, although the search space remains huge.
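For what it's worth, "valuation by assuming randomized playout" is simple enough to sketch. This is only an illustration with the same race-to-10 toy game as above, nothing Go-specific: to value a position, finish the game many times with uniformly random moves and report the fraction of wins for the side to move.

    import random

    def random_playout(total, player_to_move, target=10):
        """Finish the race-to-`target` game with uniformly random moves
        (each player adds 1 or 2 per turn); return the winner (0 or 1)."""
        player = player_to_move
        while True:
            total += random.choice((1, 2))
            if total >= target:
                return player               # the player who just moved wins
            player = 1 - player

    def playout_value(total, player_to_move, n_playouts=10000):
        """Monte Carlo estimate of the win probability for the side to move."""
        wins = sum(random_playout(total, player_to_move) == player_to_move
                   for _ in range(n_playouts))
        return wins / n_playouts

    print(playout_value(total=6, player_to_move=0))   # estimated value of one mid-game position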
Most scientific and engineering advancements are an incremental combination of what has come before. One can make all the predictions one wants, but until someone actually builds something working, it's just a hypothesis.
The key advance in AlphaGo seems to be generating a small set of good candidate moves, rather than selecting them at random, and estimating the probability that a given configuration leads to a win. From what I read, Monte Carlo tree search by itself only gets you to amateur ranks; it won't beat a pro player. Combining the policy and value networks, and figuring out how to bootstrap and train them, is where all the work was.
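As a rough sketch of how those pieces fit together (not DeepMind's actual code; the "policy network" and "value network" here are hypothetical stand-ins): the policy network supplies prior probabilities that focus the search on a few plausible moves, the value network scores positions without playing them out, and a PUCT-style rule trades off the value observed so far against those priors.

    import math

    def select_move(stats, priors, c_puct=1.0):
        """PUCT-style selection: maximize Q + U, where Q is the mean value seen
        so far and U is an exploration bonus favoring high-prior, rarely tried moves.

        stats:  {move: (visit_count, total_value)} accumulated by the search,
                where total_value would come from value-network evaluations at leaves
        priors: {move: probability} from a (hypothetical) policy network
        """
        total_visits = sum(n for n, _ in stats.values()) or 1

        def score(move):
            n, w = stats.get(move, (0, 0.0))
            q = w / n if n else 0.0                                        # mean value so far
            u = c_puct * priors[move] * math.sqrt(total_visits) / (1 + n)  # exploration bonus
            return q + u

        return max(priors, key=score)

    # Toy numbers: move "a" has the best results so far, but the untried move "c"
    # wins the selection because of its strong prior and exploration bonus.
    stats  = {"a": (10, 6.0), "b": (3, 1.0)}
    priors = {"a": 0.4, "b": 0.1, "c": 0.5}
    print(select_move(stats, priors))

As I understand the paper, the priors come from the policy network and the leaf evaluations mix the value network with rollouts, which is exactly the bootstrapping and training work being pointed at here.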
In hindsight, everything looks like an obvious combination of separate, already-known-to-be-good pieces. I don't think this achievement should be hand-waved away.
More importantly, the DeepMind folks are being pretty humble about it and not making the big claims everyone seems to think they're making.
I'm not that knowledgeable about the field, but it does look like similar ideas have been tried before, e.g. a 2011 paper on using NNs and MCTS to play the turn-based strategy game Civilization II: http://dspace.mit.edu/openaccess-disseminate/1721.1/74248
If there were no prior work at all, that would be the odder of the two scenarios. But I definitely didn't see anything that would have made me think of applying this to abstract strategy games, beyond some modest attempts. Even during the match, weaknesses were revealed, so the level of efficacy achieved here is still very novel.