AlphaGo Is Not the Solution to AI (acm.org)
101 points by fforflo on March 15, 2016 | 63 comments



It may not be "the solution", but no one at DeepMind said it was. It doesn't particularly matter how journalists spin this stuff, as they clearly don't understand it anyway.

By the way, if Monte Carlo search worked, then we would have seen it beating 9-dan pros long ago, which clearly didn't happen, so the author's definition of "effective" lacks pragmatic insight - the real limiting factor in this sort of evaluative judgment.

The author doesn't seem to have watched any of the matches of AlphaGo vs Lee Sedol, though, as much of the pre-game and post-game discussion brings to light some important details surrounding specific implementation features in AlphaGo by DeepMind. Glossing over that makes this article remarkably uncritical and uninformative.

Edit: it's one thing to combat hype, but representing the facts as they are in contradistinction to that hype is another matter entirely. Anti-hype is perhaps more pernicious in bringing about an "AI Winter" than is drumming up interest in what AI, or more accurately AGI, will be capable of in the not-too-distant future. And given current developments, I think the only AI Winter we have to fear is the one where uninformed persons accost us with misguided fears about AI and what its capabilities will be.


> It doesn't particularly matter how journalists spin this stuff, as they clearly don't understand it anyway.

It matters because it could lead to another AI winter when the hype falls through.


There's not going to be another AI winter. The past AI winters occurred because people drastically underestimated how difficult AI would be. MIT (Seymour Papert specifically) thought computer vision could be solved by some graduate students as a summer project. Same story for other AI problems, e.g. NLP, speech recognition, general reasoning and inference, etc.

Once the difficulty of those problems started to be understood, of course funding dried up. Industry is focused on short-term ROI, so it's hard to get funding if the profits won't be seen for 50 years.

The difference is that now there's an entire string of profitable markets for solving near-term AI problems. AI is fundamental to the business models of some of the world's largest companies (i.e. Google). There's basically zero risk of an AI winter when we're on the verge of advanced robotics, Watson-style Q&A systems, self-driving cars, large-scale genomics, etc.

An AGI winter is another story, but most AI work isn't really focused on serious, full-scale AGI right now anyway. That winter never ended and is ongoing. Everyone is focused on incremental AI improvements because no one really knows what's required for full AGI, and everyone is hoping in the meantime to hit on it while building on known techniques like deep learning, computational neuroscience, etc.

tl;dr: As long as investors continue to see marketable products for new AI developments anywhere in the next ~10 years, funding won't dry up again.


> There's not going to be another AI winter. [...]

> The difference is that now there's an entire string of profitable markets for solving near-term AI problems.

As someone who experienced AI winter (and ended up leaving the field) and even attended the AAAI panel where the term was coined on stage, I am not so confident.

What happened then looks like it could happen again. At that time there were tons of expert systems being deployed with high promises that they would revolutionize business and practical problem solving. Non-technical people from other fields were flooding in to sell "solutions." Dedicated workstations were proliferating.

The self-driving car hype is already at the same level as the expert systems hype was back then.

> tl;dr: As long as investors continue to see marketable products for new AI developments anywhere in the next ~10 years, funding won't dry up again.

Well sure, that's true, but no different from saying "as long as house prices continue to go up there will be a housing boom."


Also, people are in general really bad at understanding how hard some problems are, particularly in relation to how easy we've made some seemingly magical things.

Reminds me of this XKCD: http://xkcd.com/1425/


AGI Ice Age?


I don't think there will be another winter in a hurry, regardless of press hype, because there are too many real results - Amazon Echo, self-driving cars and so on.


Voice recognition and self-driving cars aren't solved problems. That last 10% may be 90% of the work.


Really hard to tell when something is solved, because it's always going to get improved. As Elon Musk puts it for self-driving cars: they must be safer than human drivers before we can expect broad acceptance. However, both technologies are already useful as of now.


I would agree and disagree... within constraints both are very well solved... and that's what the press tends to overstate


The previous winter didn't happen only because scientists underestimated the difficulty; it happened because they and the corporations couldn't deal with mediocre results and started selling a LIE. NNs recognising sky color instead of tanks, talking car computers, terrible speech recognition that only worked in presentations with the microphone right up to your mouth, the whole Smalltalk fiasco.

It is happening again: "Dear Aunt, Let's Set So Double The Killer Delete Select All", Honda selling lane following and adaptive cruise control labeled as self-driving (so does Tesla, btw), etc. There are plenty of scam products and misleading marketing surrounding AI right now.


Are you implying that AI research in the past decades didn't have any "real" results? This is a popular meta-narrative these days, but it's mostly a product of marketing that tries to magnify the significance of current work at the expense of everyone who worked on similar things in the past.


Do you work in the field?

I agree with the main point about Monte Carlo methods here.

Neural networks are a "dumb" way to stack another layer of optimisation on top of many algorithms.

I won't lie, there is a certain bitterness watching this, having used the same principles on less gimmicky problems, which I'm sure the author shares.


Where did the author say that MCTS alone is enough to beat Go? His point was that in Go MCTS seems to significantly improve results (which is true), while for some other games this isn't the case. Therefore an algorithm which strongly relies on MCTS (such as AlphaGo) might not generalise to other games. I don't see anything wrong with this. AlphaGo uses MCTS precisely because it has proved to be effective in Go, at least compared to other search heuristics. It seems perfectly reasonable that a similar approach won't work with other games. Any single component of AlphaGo (like the policy network) can't consistently win against 9p players by itself, but it doesn't follow that those components aren't "effective" or aren't critical to the success of AlphaGo. In fact it is obvious that the opposite is true.


Journalists understand almost nothing. Still, they are the ones who form the opinion of the general public, so what they say matters very much. This makes it even more important that people with real authority on a topic correct them when they are wrong, independently of whether that is the case here or not.


> Anti-hype is perhaps more pernicious in bringing about an "AI Winter" than is drumming up interest in what AI, or more accurately AGI, will be capable of in the not-too-distant future.

AI is massively popular in both industry and academia at the moment. It is at no risk of being deinflated by "anti-hype".


> AI is massively popular in both industry and academia at the moment. It is at no risk of being deinflated by "anti-hype".

Isn't it precisely when something is massively inflated that it's most at risk of being deflated?

Remember those mid-80s films with intelligent computers everywhere? People really thought that AI would be a solved problem within years. Then reality struck and the AI winter fell.


That's a case of deflation due to hype, not due to anti-hype. The whole point of the original article is to avoid overpromising and underdelivering.


Yes. We need more people like Langford and LeCun to push back on the AI hype. It's better for AI researchers to underpromise and overdeliver than the other way around. Otherwise we risk another "AI Winter," and that's no fun.

The challenge is that to approximately 99.99% of the seven billion people on this planet, including roughly every journalist, the recent progress made by AI researchers is indistinguishable from "magic."

When all those people try to reason about "magical things," they're bound to reach nonsensical conclusions and say nonsensical things. To them, debating on the promise of technologies like deep learning is a lot like debating how many angels can dance on the head of a pin.[1]

--

[1] http://www.straightdope.com/columns/read/1008/did-medieval-s...


Underpromise and overdeliver works well when you're building something for a customer, but I suspect that when you're trying to demonstrate a technology with the potential to be scary in the eyes of the public, downplaying what it can do is probably the worst possible strategy. The way to take away the fear of the unknown is to make it unsurprising; for the public to accept AI, their expectations are going to have to match the reality quite closely. If AI does less than they expect, they won't buy it; if it does more, they'll see it as an invasion of their privacy and they won't buy it.


Then help scientists and researchers change the current incentive structure, which heavily encourages over-promising. Simply saying "don't do it" won't help.


Any sufficiently advanced technology is indistinguishable from magic.

And while risking another AI Winter is no fun, AI researchers are probably going to be sorely tempted to participate in the hype cycle as well. In fact, considering how much money is being poured into AI research, who's to say the hype hasn't already begun?


It's just blatantly false to suggest that this victory means "games are solved". Games like chess and Go are games of complete information. It's just a matter of brute processing power, even if currently computers don't have the processing power to "solve" them.

Games of incomplete information, like poker, are much harder, if not impossible, for computers to master without actual AI. It was only recently that heads-up limit poker was "solved". Even in this case, the "solution" meant only that the computer could put forth a strategy that would not lose significantly over time, not a winning strategy. Games like no-limit hold 'em are infinitely more complex. This is true of heads-up, but vastly more so for full-ring games.

AlphaGo's victory has certainly been very impressive, but it's also predictable. Given enough computing power, winning any game of complete information is just a matter of brute computing force. We'll see how computers (and real AI) do in the future with regard to other games.


> Games like chess and Go are games of complete information. It's just a matter of brute processing power, even if currently computers don't have the processing power to "solve" them.

That's like saying public-key encryption is broken because we can brute force decryption.

When we talk about solving a game (or solving an optimization problem, etc.), we aren't simply interested in whether the problem is computable. We want an efficient, polynomial-time algorithm if we're interested in the generalized problem (where the size of the board is a parameter), or otherwise at least something that runs and completes in a reasonable time if we're OK with solving only a particular board size (an instance, in terms of computational complexity).

The generalized Go problem is PSPACE-hard or EXPTIME-complete depending on the ruleset you adopt. So it's not just a matter of brute processing power.

Now, most people outside theoretical computer science research don't care much about generalized games and asymptotics, they're fine with solving 19x19 Go instead of n x n Go. But that doesn't help you much: the only board size that has been solved is 5x5 [0], and there are only about 10^10 legal positions in a 5x5 board. The standard 19x19 has 10^170 legal positions! Good luck brute forcing it.

[0] http://erikvanderwerf.tengen.nl/5x5/5x5solved.html


>That's like saying public-key encryption is broken because we can brute force decryption.

No, that's like saying that public-key encryption will eventually be broken because at the end of the day it's just about having enough brute-force processing.

>When we talk about solving a game (or solving an optimization problem, etc), we aren't simply interested in whether the problem is computable.

No, we aren't. But games of complete information that ARE computable, such as chess and Go, can be removed from the discussion. There is no question that these games of complete information are "solvable" with or without AI, since every move, from start to finish, can be calculated before the game even begins - it's just a matter of processing power.

Which brings us to the question of AI and games of incomplete information. By definition, full-ring no-limit hold 'em is not computable. Even with infinite processing power, with every possible permutation calculated, there is no guarantee that a brute-force computer player is going to win at full-ring no-limit hold 'em. If there are multiple unknown players to act behind you, there is no way to compute with certainty what the outcome will be. This is where AI becomes really interesting as far as gaming is concerned.


> No, we aren't. But games of complete information that ARE computable, such as chess and Go, can be removed from the discussion. There is no question that these games of complete information are "solvable" with or without AI, since every move, from start to finish, can be calculated before the game even begins - it's just a matter of processing power.

In a theoretical universe, sure. In the one that we live in - no.

Let's assume that we can turn every atom in the entire known universe into a 4 GHz processor, further assume that we can perfectly parallelize the search, and that we can check one board position in one cycle on one of those processors.

How long would it take to check all possible 19x19 go positions?

(1 × 10^80 processors) × (4 × 10^9 checks per second each) ≈ 4 × 10^89 checks per second.

Total number of valid Go board states: ~10^170, therefore it would still take on the order of 10^80 seconds to check all the positions in order to find a perfect game.

Clearly, it is never going to be possible to brute-force Go, no matter what advances computing technology makes.
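To make the orders of magnitude concrete, here is the same arithmetic as a quick Python script; the figures are just the assumptions above (10^80 atoms, 4 GHz, one position checked per cycle, ~10^170 legal positions), nothing more:

  atoms = 10**80                     # every atom in the known universe as a processor
  rate_per_processor = 4 * 10**9     # 4 GHz, one board position checked per cycle
  rate = atoms * rate_per_processor  # ~4e89 positions checked per second

  legal_positions = 10**170          # rough count of legal 19x19 Go positions
  seconds = legal_positions / rate
  years = seconds / (3600 * 24 * 365)

  print(f"{rate:.1e} checks/sec -> {seconds:.1e} seconds (~{years:.1e} years)")
  # ~2.5e80 seconds, against roughly 4e17 seconds since the Big Bang.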


This argument surely discounts other computing models, like quantum computing.


Fair enough; if there is ever a quantum-entanglement-based (or other type, of course) solution to Go, then I will have been proven wrong. We'll see, but I'm not feeling very worried ;)


> impossible for computers to master without actual AI

Many times before, claims like this have been disproved.


There is a difference between remaining unproven and being disproved.


> Even in this case, the "solution" meant only that the computer could put forth a strategy that would not lose significantly over time, not a winning strategy.

That's even true for rock-paper-scissors, where the "solution" is trivial and there is no better strategy. The lack of a winning strategy can be a property of the game itself. It's not at all trivial whether a "winning strategy" for a game like poker even exists.
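The rock-paper-scissors case can be checked in a few lines: playing uniformly at random has expected payoff zero against any opponent mix, so there is nothing to "win", only not to lose. A toy illustration, nothing from the article:

  import numpy as np

  # Payoff matrix for us: rows = our move, cols = opponent's move
  # (rock, paper, scissors); +1 win, -1 loss, 0 draw.
  A = np.array([[ 0, -1,  1],
                [ 1,  0, -1],
                [-1,  1,  0]])

  ours = np.array([1/3, 1/3, 1/3])              # uniform random strategy

  for _ in range(5):
      theirs = np.random.dirichlet(np.ones(3))  # an arbitrary opponent mix
      ev = ours @ A @ theirs                    # expected payoff per round
      print(f"opponent {np.round(theirs, 2)} -> expected payoff {ev:+.3f}")
  # Every line prints +0.000: we can't lose in expectation, but can't win either.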


> Given enough computing power, winning any game of complete information is just a matter of brute computing force.

If by "brute force" you mean exploring all possible moves then in theory yes, in practice the computing power and time is larger than the universe for chess or go, so no.


Ugh I've been following every AlphaGo discussion I can find, and it's always the same debate about what the significance of it is. I just want to make this final comment.

There is no significance to AlphaGo - to people paying attention to AI research. The methods aren't incredibly novel, the idea that CNNs could make very good value approximators isn't new, etc. I predicted computers would be beating expert Go players by the end of 2015, and was actually really disappointed when I was wrong. Turns out I wasn't, but it had just been kept secret.

However AlphaGo's victory is significant to the general public and the skeptics. It's a sign of the rapid progress being made in AI over the last few years. Rapid progress that shows no sign of halting.

Some of the people I spoke to about my Go prediction thought that it might be possible, but was probably too hard a domain. Hopefully those people will update their beliefs and be a little more optimistic about NNs, and AI in general. They are extremely powerful tools that we are just figuring out how to use. Go will hardly be the last domain they succeed at.


So there is no significance to AlphaGo, besides the fact that it defied your colleagues' expectations about whether these techniques would work in this domain.


> I predicted computers would be beating expert Go players by the end of 2015

And who are you? Predictions are a dime a dozen; what gives any relevance to yours?


The point is that anyone could have predicted it. There were papers being published showing very promising results on Go in 2014.


Anyone could have predicted it as in "pull a year out of their asses and write it down", sure. A dime a dozen. Predicted it as in actually knowing what they're talking about and having some sort of reasoning or data backing the prediction? Nah.


No, there were papers published in 2014 showing that CNNs could predict the moves of expert Go players 55% of the time. That was an extremely remarkable result. It's not a huge leap to go from that to using such a network as the move predictor, i.e. the search heuristic, in a game-tree search algorithm.
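Roughly what that leap looks like, sketched: use the move predictor to decide which handful of moves the tree search even bothers to explore. The names here (policy_net, apply_move, evaluate) are hypothetical stand-ins for a trained CNN, a Go rules engine and a position evaluator, not anyone's actual code:

  def search(board, depth, policy_net, apply_move, evaluate, top_k=8):
      """Negamax-style search that only expands the k moves the network
      thinks an expert is most likely to play."""
      if depth == 0:
          return evaluate(board)              # static evaluation at the leaf

      move_probs = policy_net(board)          # dict: move -> predicted probability
      candidates = sorted(move_probs, key=move_probs.get, reverse=True)[:top_k]

      best = float("-inf")
      for move in candidates:
          child = apply_move(board, move)
          best = max(best, -search(child, depth - 1,
                                   policy_net, apply_move, evaluate, top_k))
      return best

Even a mediocre predictor cuts Go's branching factor of roughly 250 down to a handful of candidates per node, which is what makes any deep search feasible at all.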


I'd like people to make some predictions about what an AI will not be able to do in the next 10 years.


Predict the DOW closing value to within 1 point, 1 day ahead, at 50/50 accuracy.

Predict weather 2 weeks out to within 1 degree at 50/50 accuracy.

Do face recognition well enough to identify people from surveillance videos (from the set of all known criminals, with 50/50 accuracy including false positives and false negatives).

Program a computer well enough to replace a Jr developer.

Write a book good enough to become popular on its own merits.

Replace in-person translators for real-time language X to language Y translation at the UN.


All of these might fail simply for lack of access to data, though, not for lack of ability to predict with the data.


Relevant: poll on "Will the "silver" Turing Test be passed by 2026?" http://www.metaculus.com/questions/73/


I'll go ahead and predict that we won't have human-level artificial intelligence within 10 years. As in, you could throw literally any task at it and it would handle it as well as a human and get better at it as fast as a human.


As was said on the BBC today: you can go into any kitchen and make yourself a cup of tea. Good luck getting an AI to do that.


Which human?


My digestive system isn't the solution to my consciousness either but it's in some ways a part of it.

There is going to be no "solution" to AI.

Only emerging complexity, which I am guessing will at some point give us something akin to a semi-aware entity. After that, all bets are off.


Who the heck was ever claiming that AlphaGo was an AGI? Straw argument much?


Talking of straw arguments, I don't see John Langford saying in this article that anyone has gone so far as to claim AlphaGo is an AGI. What I have seen is a number of comments on HN kvetching about moving goalposts.


Well a bunch of people were confused and still don't know the difference between AI and AGI.

I actually think it's OK if some people think AlphaGo is more general-purpose than it really is, because the general attitude toward (or at least the response to) the likely development of AGI relatively soon still seems amazingly muted.


There was a time when AI meant AGI. AGI is a retronym induced by premature claims of the achievement of AI.


The idea that AI will be a singular entity/solution/etc. needs to stop; it's like saying humanity only needs a single type of person. Generally speaking, the only important measure is when there's not a single person able to out-think one of the many different instances of AI. That day will come, though when is not clear.


The seemingly "poor" intelligence represented from the article, ironically, really weakens what intelligence is. In certain sense, this article really strengthen the importance of AlphaGo: AlphaGo does appear know Go, and the author seemingly have weak understanding of AI and GO, making it a poor intelligent being that is specialized in writing articles. Apparently Go is considered require more intelligence than writing normal articles. So AlphaGo performs well in a task that requires more intelligence, and the author performs poorly in a task that requires less intelligence. Does this make AlphaGo a better intelligent being than the author?


Every accomplishment will seem obvious in retrospect. Who here honestly thought neural networks were going to be a dominant force in board game AI? Sure we cannot extrapolate, but that doesn't mean that we aren't building up effective tools for AI.


"Who here honestly thought neural networks were going to be a dominant force in board game AI?"

The potential for neural networks to learn to play games, by playing each other, has been apparent at least since NeuroGammon (https://en.wikipedia.org/wiki/Neurogammon), in 1989. The "playing each other" mechanism is key, because it allows large-scale training.

The question is, what games happen to be well structured to allow so-called "credit assignment" so that you can know what states lead to wins across a reasonable spectrum of (unknown/unknowable) opponent plays.

According to the OP, it is known that you can do credit assignment in Go. ("For Go itself, it has been well-known for a decade that Monte Carlo tree search (i.e., valuation by assuming randomized playout) is unusually effective in Go.") I did not know that, but if so, it would simplify the Go problem, although the search space remains huge, etc.


Most scientific and engineering advancements are an incremental combination of what has come before. One can make all the predictions one wants, but until someone actually builds something working, it's just a hypothesis.

The key advancement in AlphaGo seems to be generating a subset of good moves, as opposed to randomly selecting them, and estimating the probability that a given configuration leads to a win. Monte Carlo Tree Search by itself only gets you to amateur ranks, from what I read, and won't beat a pro player. Being able to combine the policy and value networks, and figuring out how to bootstrap and train them, is where all the work was.
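A condensed sketch of that combination, as I read the published description: a λ-weighted blend of value-network output and rollout result at the leaves, and prior-weighted selection inside the tree. value_net, fast_rollout and the node fields here are hypothetical placeholders, not DeepMind's implementation:

  import math

  def evaluate_leaf(board, value_net, fast_rollout, lam=0.5):
      """Blend a learned position evaluation with a quick randomized playout."""
      v = value_net(board)      # value network's estimate of winning from here
      z = fast_rollout(board)   # outcome of a fast, policy-guided playout
      return (1 - lam) * v + lam * z

  def select_child(children, c_puct=5.0):
      """PUCT-style selection: prefer moves with high value so far, but keep
      exploring moves the policy network gave a high prior probability."""
      total = sum(ch.visits for ch in children)
      return max(children, key=lambda ch: ch.mean_value +
                 c_puct * ch.prior * math.sqrt(total) / (1 + ch.visits))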

In hindsight, everything looks like an obvious combination of separate already-known-to-be-good pieces. I don't think this achievement should be hand waved away.

More importantly, the DeepMind folks are being pretty humble about it and not making the big claims everyone seems to think they're making.


I'm not that knowledgeable about the field, but it does look like similar ideas have been approached previously. Ex., 2011 paper on using NNs and MCTS to play turn based strategy game Civilization II: http://dspace.mit.edu/openaccess-disseminate/1721.1/74248


If there was no prior work that'd be the odder of the two scenarios. But I definitely didn't see anything that would make me think about applying it to abstract strategy games apart from some modest tries. Even during the games the weaknesses were revealed, so the efficacy is still very novel.


> Who here honestly thought neural networks were going to be a dominant force in board game AI?

https://en.wikipedia.org/wiki/Universal_approximation_theore...

Neural networks are an algorithmically simple way to approximate any mathematical function. So, really, why _wouldn't_ you think this?

Ultimately, almost all board games can be reduced to mathematical functions.
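As a toy illustration of that point (nothing to do with AlphaGo itself): a single wide hidden layer of random tanh features, with only the output weights fitted by least squares, already pins down an arbitrary smooth 1-D function quite closely. Whether such a function can be learned efficiently for something the size of a Go evaluation is, of course, the hard part.

  import numpy as np

  rng = np.random.default_rng(0)
  x = np.linspace(-3, 3, 400).reshape(-1, 1)
  y = np.sin(2 * x) + 0.3 * x**2               # the "unknown" target function

  hidden = 200
  W = rng.normal(scale=2.0, size=(1, hidden))  # random hidden-layer weights
  b = rng.normal(scale=2.0, size=hidden)       # random hidden-layer biases
  H = np.tanh(x @ W + b)                       # hidden activations

  w, *_ = np.linalg.lstsq(H, y, rcond=None)    # fit output weights: H @ w ~ y
  print("max abs error:", float(np.max(np.abs(H @ w - y))))
  # typically a very small error once the layer is wide enough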


Well, the Ford Model T was not the solution to electric, self-driving, suburban commuting either.


Weak AI will never become self-aware. Actually, neither will strong AI, unless we include awareness as one of its strong traits (some do, some don't).

Awareness is a virtue in and of itself, and any machine that has it will require an implementation of awareness as such, regardless of the machine's intelligence. Awareness is about self-motivation and the creation of one's own goals. It's the generation of meaning to back one's words as well as one's actions, something even we often struggle to do.

As far as I am aware, no one at DeepMind or IBM is actively working on intent and self-awareness. So for the foreseeable future, if a two-legged drone shows up at your door and points a gun at you, it won't be because AI has awakened. It'll be because someone at the CIA wants to kill you.



Didn't some AI researcher have a quote along the lines of "AI is whatever we can't do yet"?


Larry Tesler (source: http://www.nomodes.com/Larry_Tesler_Consulting/Adages_and_Co...)

"Tesler's Theorem (ca. 1970). My formulation of what others have since called the “AI Effect”. As commonly quoted: “Artificial Intelligence is whatever hasn't been done yet”. What I actually said was: “Intelligence is whatever machines haven't done yet”."





