Bit hyperbolic. Go, at its heart, is a pretty basic game, just tricky combinatorics. The fact that you can do unsupervised learning simplifies it even more. I think there are much better true indicators, such as better translation, speech-to-text, autonomous driving, etc.
I think you are being hyperbolic here. Go is in many respects much more challenging than machine translation, speech-to-text, and auto-driving. All of the latter problems you described have had solutions for decades, whereas AlphaGo is literally state of the art. It even uses proprietary hardware (TPUs) and was essentially engineered by one of, if not the, most elite ML labs in the world. Their solution consists of an ensemble of many components, including MCTS, supervised learning, and reinforcement learning. Much more complicated than what you are thinking. Go read the paper, it's free.
Go is a lot harder than you think for machines to play.
I agree, Go is a challenging game for computers, but even more so for humans. However, for computers, Go is not more challenging than translation, speech-to-text, driving, or understanding a comic book. The progression of superhuman play in famous board games has gone checkers -> chess -> Go. The more a game relies on memory, tactics, and evaluation speed, the quicker it falls. Among commonly played perfect-information games, Go is the most difficult, both from having a large branching factor and from having no simple evaluation heuristic. In each of these games, before there was superhuman play, there was super-amateur play for decades (including in Go).
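To make the branching-factor point concrete, here's a rough back-of-the-envelope comparison. The branching factors and game lengths below are commonly cited approximations, not exact figures:

```python
import math

# Commonly cited approximate (avg branching factor, avg game length).
games = {
    "checkers": (2.8, 70),   # forced captures keep the effective factor tiny
    "chess":    (35, 80),
    "go":       (250, 150),
}

for name, (b, d) in games.items():
    # Game-tree size grows roughly as b^d; report it as a power of 10.
    log10_size = d * math.log10(b)
    print(f"{name:8s} ~10^{log10_size:.0f} positions in the game tree")
```

Roughly 10^31 for checkers, 10^123 for chess, and 10^360 for Go: each step up is hundreds of orders of magnitude, which is why brute-force search stops being an option.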
Translation still has many failure cases. Speech to text cannot yet handle intonation and auto-driving cannot yet handle driving in places like India. And reading then summarizing a page of a comic book while walking across a room is currently impossible.
You ignored my post entirely. The fact that it took the most elite ML lab in the world to engineer this solution, using proprietary hardware never seen before that's orders of magnitude faster at evaluating than what's available to the rest of us, is a testament to how hard it is to beat Go. For a long time it was believed to be impossible.
I am not comparing translation or machine vision to AlphaGo; I am merely pointing out that Go comes with a broad set of challenges that you aren't even aware of and is a lot harder than you think.
AlphaGo can beat the next-best Go-playing bot purely using its neural net ensemble, without using MCTS, for example. That's a pleasantly surprising result never before seen; that it can beat another bot without doing a single tree search during play and evaluation is also a testament to how impressive it is.
I did not. You said Go is in many respects much more challenging than machine translation, speech-to-text, and auto-driving. I merely pointed out that this is wrong, because the following exists: a superhuman Go player; and the following do not: superhuman machine translation, speech-to-text, and auto-driving. Go is a perfect-information game with no shallow traps. Perfect information means that, unlike in poker, information sets are not cross-cutting, and as such algorithms can leverage the fact that backwards induction is straightforward.
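To illustrate why backward induction is straightforward under perfect information, here's a toy sketch on a trivial Nim-like game (players remove 1 or 2 stones; whoever takes the last stone wins). The game and names are my own illustration, nothing from the AlphaGo paper:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def winner(stones, player):
    """Backward induction on a toy Nim: remove 1 or 2 stones,
    taking the last stone wins. Returns the winning player (0 or 1)
    under perfect play. This only works because both players see the
    full state -- no hidden information sets to reason over."""
    if stones == 0:
        return 1 - player  # the previous player took the last stone
    moves = [m for m in (1, 2) if m <= stones]
    # The current player wins if any move leads to a position they win.
    if any(winner(stones - m, 1 - player) == player for m in moves):
        return player
    return 1 - player

# Positions where stones % 3 == 0 are losses for the player to move.
results = [winner(n, 0) for n in range(1, 8)]
```

In an imperfect-information game like poker, you can't recurse over states this way, because each player's decision point lumps together many possible states they can't distinguish.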
No shallow search traps and perfect information make things a lot easier from a computational perspective. Driving at a superhuman level would require a sophisticated forward model from a physics perspective, before even considering predicting other drivers. Speech-to-text and fluent translation without brittle edge cases require hierarchical predictive models that capture long-term correlations and higher-order concepts. I'm not disputing that Go is hard, but the core difficulties were the known hurdles: a high branching factor and no evaluation heuristic. Training via reinforcement learning in a way that broke the correlations which get in the way of learning, and integrating rollouts with the neural nets (breaking evaluation into value and policy as they did), was the DeepMind team's genius. The rollouts and evaluation are what eat up so much electricity.
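As a rough sketch of what that value/policy split buys you (the names and numbers here are illustrative, not DeepMind's actual code): the policy net narrows the effective branching factor by prioritizing moves, and the value net replaces full-length rollouts at the leaves. The node-selection rule during tree search looks roughly like this:

```python
import math

def puct_select(children, c_puct=1.5):
    """Pick the child maximizing Q + U, where Q is the mean value
    estimate seen so far and U is an exploration bonus scaled by the
    policy net's prior -- so rarely visited but policy-favoured moves
    still get explored. Illustrative sketch only.

    children: list of dicts with keys
      'prior' (policy-net probability), 'visits', 'value_sum'.
    """
    total_visits = sum(ch["visits"] for ch in children)
    def score(ch):
        q = ch["value_sum"] / ch["visits"] if ch["visits"] else 0.0
        u = c_puct * ch["prior"] * math.sqrt(total_visits + 1) / (1 + ch["visits"])
        return q + u
    return max(children, key=score)

children = [
    {"prior": 0.6, "visits": 10, "value_sum": 4.0},  # favoured, well explored
    {"prior": 0.3, "visits": 0,  "value_sum": 0.0},  # unexplored: big U term
    {"prior": 0.1, "visits": 2,  "value_sum": 1.8},  # high Q, low prior
]
best = puct_select(children)
```

With these made-up numbers, the unexplored move with a decent prior wins the selection, which is the point: the policy prior keeps the search from wasting visits on moves the net considers hopeless.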
> The fact that it took the most elite ML lab in the world to engineer this solution using proprietary hardware never seen before that's orders of magnitude faster at evaluating than what's available to the rest of us is a testament to how hard it is to beat Go.
AlphaGo can run on a GPU, just not (for now) as efficiently as on a TPU. DeepMind is indeed unmatched in output. AlphaGo did build on the 2006 breakthrough paper on tree-based bandit algorithms. There was another important 2014 paper on the use of conv-nets for Go. DeepMind did amazing work, but it did not come out of nowhere.
And sure, Go is hard. But from a computational perspective, it is still much easier than running up a hill or climbing a tree. Humans are just not very good at playing combinatorial games, so the ceiling is low.
> I am merely pointing out that it comes with a broad set of challenges that you aren't even aware of and is a lot harder than you think.
That is absolutely untrue. I have a decent understanding of the implementation and a strong understanding of the underlying algorithms.
Ten years ago, I was learning Go, and I could beat the strongest bot within months of learning the rules. Super-amateur play by Go AIs is barely 5 years old, if that.
That doesn't affect my core point: many of the things that humans have commonly associated with intelligence have been the first to fall. In hindsight it makes sense: we mistakenly assumed that there was such a thing as a universal ranking of difficulty, centered on what humans find hard to reason about.
More to your point, my decades remark had a weaker notion of amateur. For each game, we've had something that could beat most humans for decades. But you're right, that's not a useful distinction.
If we look just at Go, the decades remark is somewhat of a stretch. Go has been especially difficult, requiring more intelligent algorithms to handle branching and state evaluation (and the latter in particular is a function too complex to fit in human consciousness).
But progress has been occurring for years. On 9x9 boards, MCTS bots have been stronger than most humans since about 2007, 10 years ago. For 19x19 it's true: if we take 4 or 5 dan as better than most amateurs, then that's 6 or 7 years.
The author isn't being hyperbolic. The problems that require what we call common sense continue to defeat all AIs. These board games don't require that; they come down to pattern recognition, heuristic search, and good hardware. The examples of language and driving are great, as the AIs only seem to do well in really narrow situations that require no greater understanding of context. As far as common sense goes, there's only a handful of projects, like Cyc, that were trying to teach it to machines.
Humans take 10-20 years of semi-supervised learning to acquire this combination of common sense, knowledge, and problem-solving. It also happens in stages, where infants and especially young children have brains in overdrive taking in everything, followed by stages that are more selective about what they take in and solidify. Training AIs to be smart for real, common sense and all, might take over a decade of data for the first one unless this problem can be decomposed. Even then, it will still take years of similar experiences.
Modern AI systems like AlphaGo are examples of applied AI. "Common sense" falls within the realm of artificial general intelligence, a line of research that's largely abandoned now in favor of applied AI. Modern AI solutions are engineered to solve very specific problems. You are never going to see attempts to teach "common sense".
With that said, the above is what the world's AI researchers think is possible hopefully within my lifetime using just applied AI without the notion of "common sense".
Common sense is AGI. That's not the goal anymore. The goal is to do things like self driving cars. Both Google and Tesla have placed vehicles on the road that have driven for literally millions of miles.
The idea is to build a bunch of classifiers and regression models and use them together in an ensemble to solve your problem. The same approach is being applied successfully to a lot of unrelated fields where deep learning is concerned.
Also, modern AI doesn't even pretend to be biological in nature; in fact, well-known researchers like Andrew Ng make a point of saying that these systems are only biologically inspired, and that's where the commonalities end.
There are other models like HTM that are way more ambitious and want to come up with a single generalized scheme to solve a broad range of problems, AGI style. These guys think biology is important and are trying to emulate the neocortex. They ARE going for AGI, common sense, etc.
"The goal is to do things like self driving cars. "
Replacing a human driver takes an AGI, at least for exceptional or new situations. That's why we're citing it as a counterexample instead of a supporting point.
Nope, not at all. Like I said, we have had autonomous vehicles operating in normal traffic for decades now. Autonomous vehicles do not require AGI, far from it. Waymo, Uber, Tesla and more are all competing to bring autonomous vehicles to the mass consumer market and indeed most estimates claim that we'll have autonomous trucks by 2027: https://arxiv.org/abs/1705.08807
I hate how everyone thinks they know enough to talk about AI because it's so buzzy/trendy right now.
Modern AI is not pretending to be AGI. No one is claiming to be going for AGI, and whatever successes we have been seeing lately have to do with applied AI in solving specific problems, not AGI.
Define normal traffic. Last time I read up on it, autonomous vehicles couldn't handle rain and some other weather conditions without a human operator. And abnormal situations can still require a human operator. The AI has to correctly detect its own inadequacy and then hand control over to a human who must react in time, starting out distracted by whatever they were doing. Safe, automated handling of these situations might take a broader AI that understands context.
Note: I'm all for seeing data showing that the narrow AIs have been corrected to handle what I described.
I'm not disagreeing, because I don't know how to even start comparing those challenges to Go. Just a comment: it's interesting that you chose areas where Google's elite ML teams are state of the art or close to it, and where they're probably using TPUs or will be soon.