Ray Kurzweil Says He’ll Get ‘Unlimited Resources’ to Work on AI at Google (betabeat.com)
92 points by iProject on Jan 3, 2013 | 113 comments



This strikes me as an exceedingly poor investment. Granted it's an area he's almost religiously devoted to but I have seen no evidence whatsoever of any kind of RESULTS from him in this field.

If it were me choosing whom to fund, my main criterion would be who has actually made some progress and has solid models that show promise, not who's most willing to make tons of unfounded predictions.


God damn it... Not this again.

=================MY PREVIOUS POSTS=================

I don't think that's a very fair assessment of Kurzweil's role in technology.

He was on the ground, getting his hands dirty with the first commercial applications of AI. He made quite a bit of money selling his various companies and technologies, and was awarded the National Medal of Technology by President Clinton.

As I was growing up, there was a series of "Oh wow!" moments I had, associated with computers and the seemingly sci-fi things they were now capable of.

"Oh wow, computers can read printed documents and recognize the characters!"

"Oh wow, computers can read written text aloud!"

"Oh wow, computers can recognize speech!"

"Oh wow, computer synthesizers can sound just like pianos now!"

I didn't realize until much later that Kurzweil was heavily involved with all of those breakthroughs.

===========================================

In addition, I'd rank Minsky, Larry Page, Bill Gates, Dean Kamen, Peter Norvig, Rafael Reif, Tomaso Poggio, Dileep George, and Kurzweil's other supporters as much more qualified to judge the merits of his ideas than Kurzweil's detractors like Hofstadter, Kevin Kelly, Mitch Kapor, and Gary Marcus. It seems that Hofstadter is the only one of that group who is really qualified to render a verdict.

http://howtocreateamind.com/

=====================================

Here's Peter Norvig himself on Kurzweil:

“Ray’s contributions to science and technology, through research in character and speech recognition and machine learning, have led to technological achievements that have had an enormous impact on society – such as the Kurzweil Reading Machine, used by Stevie Wonder and others to have print read aloud. We appreciate his ambitious, long-term thinking, and we think his approach to problem-solving will be incredibly valuable to projects we’re working on at Google.”

And here he is effusively praising Ray at a Google Talks event: https://www.youtube.com/watch?v=zihTWh5i2C4 - @1:25


You sound like a Kurzweil press release.

People always list his accomplishments in this fashion. They cite things he was "heavily involved in," which sound impressive because they're framed as if he alone created them in a vacuum.

The reality is a lot of different people were working on a lot of these same things, and it's not as if everyone agrees that Kurzweil was the first to accomplish most of these, or even that he indeed did accomplish them in a satisfactory manner.

Yes, the Kurzweil keyboard was very popular and groundbreaking, but "the first to sound just like pianos"? No. They did not sound "just like pianos." They sounded more like pianos than the keyboards that came before, and less like pianos than the ones that came after.

This is the problem with Kurzweil (and others like him): he paints himself as an iconoclastic, groundbreaking figure, standing alone among a sea of might-have-beens. But the reality is that he was just one of many people working in a field which has had a lot of impact on our lives, and which he likes to wax ecstatic about.

The reality is that his main job for the last 20 years has been making a litany of ridiculous predictions nearly non-stop and then when some of them hit anywhere close to the mark he and his supporters claim that he was visionary.

It's absurd and it annoys me to no end.


I really wish you would read my comment further down the thread about not being a Kurzweil fan boy (I've copied it below for your convenience.)

Kurzweil's theories deserve scrutiny and thoughtful criticism, just not the emotional middlebrow dismissal you're engaging in - to borrow PG's favorite expression.

I guess it's possible that Kurzweil is The Talented Mr. Ripley of the technology world, and that your "horseshit pattern recognizers" are just that much more astute than those of Gates, Page, Brin, Minsky, Norvig, Wolfram, Kamen, Reif, etc., but I hope you'll understand if I'm leaning a bit more towards their take on this one.

Also, I realize endtime is about to jump in here any minute now with his Argument from Authority wikipedia link, so I'll freely admit that given imperfect information, yes, I'm allowing people I respect, who are extremely knowledgeable about the relevant subject matter, and in many cases have worked closely with Kurzweil (Page, Wolfram, Gates, Norvig, Reif, George, etc.) - to help confirm my own fairly well researched opinion.

===========================================

Honestly, I think you're barking up the wrong tree here. Even his strongest critics generally concede that he's a brilliant man who has contributed significantly to technology.

Please understand, I'm not a Kurzweil fan boy. I do think a lot of his conclusions are inevitable in the long term - say 100 years out - but I'd guess it's 50/50 whether or not his timelines are too optimistic to benefit any of us here personally.

What I'm reacting strongly to here is the knee-jerk dismissal of his credentials and accomplishments by some on HN due to their incredulity towards his later ideas. It doesn't encourage an informed debate, and quite frankly smacks of a religious or political argument - which admittedly many of his supporters are just as guilty of.


I love how you keep falling back on name dropping as if I'm supposed to suddenly be ashamed and let go of my own long held opinion because it goes against that of rich/famous people.

You could tell me that literally every other person on the planet (never mind people who are wealthy and successful in tech) thinks Kurzweil is the second coming, and it would not change my opinion of him.

Now why is that? Is it just because I'm obstinate?

No. It's because I've been reading his asinine screeds for decades and reading the fawning baloney that comes from talking heads like those you've listed, and adoring groupies like yourself as well. And it's all a bunch of horseapples.

If anyone were doing what he's been doing for the last 20+ years and getting the same response from the press and public, I would be annoyed. It doesn't matter who it is.

It's the act of spewing out a bunch of half-baked theories as if they were revolutionary and thought-provoking when it's really a bunch of trite garbage that wouldn't even pass for a low-grade sci-fi novel. Then the fact that a number of people -- including the poorly informed general media, which you expect to be gullible, and, most disappointingly, some tech folks -- treat it seriously is beyond obnoxious.

Anyway, clearly I'm not going to change your mind. You're very convinced by the sheer fact that other rich and famous people think he's great, so I'm not going to waste my time any further. You're not going to convince me either (since the only thing that would convince me is Kurzweil actually putting out something thoughtful and well considered), so let's let it go.


What does it mean to say that Kurzweil was "heavily involved" in those breakthroughs? Did he contribute in a significant way to the science? Did he commercialize pre-existing ideas from academia?


Honestly, I think you're barking up the wrong tree here. Even his strongest critics generally concede that he's a brilliant man who has contributed significantly to technology.

Please understand, I'm not a Kurzweil fan boy. I do think a lot of his conclusions are inevitable in the long term - say 100 years out - but I'd guess it's 50/50 whether or not his timelines are too optimistic to benefit any of us here personally.

What I'm reacting strongly to here is the knee-jerk dismissal of his credentials and accomplishments by some on HN due to their incredulity towards his later ideas. It doesn't encourage an informed debate, and quite frankly smacks of a religious or political argument - which admittedly many of his supporters are just as guilty of.


I just want to know what that contribution was exactly.


Go to the 'Life, inventions, and business career' section here, as well as the overview section:

http://en.wikipedia.org/wiki/Ray_Kurzweil


I read that, and I'm still not sure what you're talking about. https://en.wikipedia.org/wiki/Ray_Kurzweil#Recognition_and_a... is a pretty good summary of his accomplishments. He's had a full life, no question, but how much of it was AI? His biggest AI accomplishment seems to be in OCR technology with the Reading Machine back in the 70s. Other than that his strengths seem to be in commercializing inventions, and in attaching his name to his products.


The section you referenced is thin and poorly cited. I still don't know his specific role and contributions. I am trying to dig deeper than the veneer.


As founder of Kurzweil Computer Products, Inc., Select College Consulting Program, Medical Learning Company, etc., one would assume he was some combination of visionary, manager, and (very occasionally) programmer. You assume he will be something similar at Google.

http://en.wikipedia.org/wiki/Ray_Kurzweil


> awarded the National Medal of Technology by President Clinton

Political awards don't mean anything, just that you have good PR. Hillary Clinton has a Grammy, Barack Obama has a Nobel Peace Prize, and Al Gore has an Oscar. Nate Silver is doing things publicly in statistics, but that doesn't mean he's the end-all be-all.

Some of us don't buy into the Kurzweil hype. Frankly, I think he's delusional. But my mother always said that there's a fine line between an idiot and a savant.


Eh, I'd tend to agree with you in general, and that was probably the weaker of my arguments, but the recipient list is pretty impressive: Gordon Moore, Robert Noyce, Gates, Jobs, Wozniak, David Packard, etc:

http://en.wikipedia.org/wiki/National_Medal_of_Technology

It's definitely not a Cracker Jack prize.

He's also racked up a pretty impressive list of Inventor of the Year and Lifetime Achievement awards from more technically astute sources like MIT, Carnegie Mellon, etc.: http://en.wikipedia.org/wiki/Ray_Kurzweil - under Recognition and awards

I guess the main point I was trying to make is that he's definitely not this dilettante interloper that some of the more off-base commenters are alleging here.


http://en.wikipedia.org/wiki/Argument_from_authority is still a fallacy, even if you said it a while ago.

If Google wanted to build AGI, we'd hire Eliezer Yudkowsky and Luke Muehlhauser. Kurzweil is an evangelist and a public figure, but do you honestly think he is actually going to design an AGI?

(disclaimer: I work for Google, but this is all my own unofficial opinion.)


As far as I'm aware, neither Yudkowsky nor Muehlhauser have built anything, and were primarily bloggers before getting involved with the Singularity Institute. How about Ben Goertzel, if you really want an AGI?


Google has plenty of builders. AI is an insight problem. (I probably should have said "design" rather than "build" in my previous comment.)

That said, I nearly included Ben Goertzel in my comment. But I am familiar with Eliezer and Luke's work (and know them personally); I'm much less familiar with Ben, so I didn't feel I could make such a strong statement about him.


I tend to agree with you. I am not aware of any major contributions he has made to AI and his recent work on the human brain is shallow and unscientific.

However, Google is not dumb. They have Peter Norvig heading Google X and a team of AI engineers recruited by Norvig. Perhaps Kurzweil is just the hand guiding the ship, but not the one pulling the oars?


Norvig is a very, very smart guy; however, he wrote the textbook on a period of AI that is very far removed from the biologically inspired machine learning techniques (Hinton, Hawkins) that I would classify as cutting edge and 'most likely to succeed.' His recent decision to work with Hinton is, I think, a testament to his intelligence.

Kurzweil has recognized the value of Hawkins' ideas and based his latest book on them, though I and others (http://www.newyorker.com/online/blogs/books/2012/11/ray-kurz...) don't think he did a great job on it.

Full Disclosure: I work at Numenta (Hawkins company) but these opinions are my own.


From the New Yorker article:

Even more disappointing is the fact that Kurzweil never bothers to do what any scientist, especially one trained in computer science, would immediately want to do, which is to build a computer model that instantiated his theory, and then compare the predictions of the model with real human behavior.

This is quite damning, if true, and reconciles the apparent discrepancy in this discussion between "Kurzweil, brilliant inventor" and "Kurzweil, futurist hack." Is it possible that Kurzweil, at some point, stopped being an implementer of ideas and became a pure pontificator?


If so, it looks like he's decided to go implementing again. Have you heard that he got hired at Google to do AI development work? I think it was mentioned on Hacker News at some point.


You make an excellent point and I agree with you. The natural route seems most promising. Maybe they can steal Hinton from the University of Toronto and have the whole package?

Thanks for bringing Numenta to my attention. It appears to be a very interesting company.


> However, Google is not dumb.

They did buy Motorola Mobility.


Well, in the 'they did it for the patents' defense, it is hard to quantify the value there. We also need to wait and see if they have more plans for it. Google buying YouTube was considered dumb until quite recently.


If you haven't seen any evidence of results from him, then you haven't actually been looking. Because his results speak volumes.

For example, the Kurzweil K2000 synthesizer illustrates exactly how he approaches problem solving - brute force - 64 MB of memory in 1991, utterly wretched price/performance ratio but awesome performance if considered on that dimension alone.

And the above is, IMO, 100% consistent with Google's approach to solving problems - throw hundreds of thousands of computers at it. He should make an excellent cultural fit, because today's ridiculous brute force is tomorrow's low-end cell phone.


Agreed. I think this hire by Google is largely for PR.

If Google was very serious about A.I. I think they would have just acquired Jeff Hawkins' Numenta and his neuroscience crew, who have real, practical "brain-building" experience.


Give it time. It took them a bit to hire the Stanford team who won the DARPA self-driving vehicle contest after it was done.


That assumes that Numenta's principals would have been willing to sell. You can't take that for granted.


But since it's not my money, I'm curious to see what comes from this. It'll probably be interesting, even though it certainly won't be anything like what he's constantly promising.


I'd be more than happy with "just" better speech-to-text. Solutions for that problem are abysmal, especially for non-English languages.


In terms of the single thing that, if it worked better, would make the largest number of my "cool ideas" possible, I completely agree. That would be fantastic.

I think the focus away from training these systems for your own voice is probably a bad direction but I'm not an expert in this stuff.


> This strikes me as an exceedingly poor investment. Granted it's an area he's almost religiously devoted to but I have seen no evidence whatsoever of any kind of RESULTS from him in this field.

Sounds like investors talking about Columbus in 1491.


From Wikipedia:

Ray was the principal inventor of the first CCD flatbed scanner, the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first text-to-speech synthesizer[citation needed], the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition.


I think a lot of people just think of Ray Kurzweil as being like a "tech blogger" of the scientific community - someone who just likes to talk about future stuff, but not someone who's actually an inventor - which he is.


I don't know anything about the details of his contributions to those devices, but I can't help thinking that he is a charlatan whenever I see him speak; I would be interested in independent corroboration. He doesn't seem to have made any progress on anything in the past 10 years.


yeah but... what has he done for me lately? /jokes


It's important to note that none of these achievements are AI-related (apart from character recognition, but that's a terribly shallow form of AI).

What's more, none of these are particularly impressive from a technology point of view; what's interesting is just the product ideas.


What do you mean "from a technology point of view"? Perhaps it doesn't seem impressive to you because you've grown up in a world that takes these achievements for granted.

Your statement is like saying calculus isn't an impressive mathematical achievement because any moderately competent math major in college could recreate it.


Well, given that we had CCDs and photocopiers, which bit of the flat-bed scanner is that impressive?


Why are you focusing on the photocopier?

Also, yes, those things existed, but there was no solution available that was so affordable and reasonable for personal use.


> none of these are particularly impressive from a technology point of view

It's the "first" part that somewhat matters more and then it become's a question of whether he's more Edison or Tesla ( crazy marketing or crazy smart).

Also he's been put into a Director role, so his technical merits come second to his ability to find, understand, and cultivate the smart people needed for his department's goals.


Yeah, my impression is that the level of his title is fairly irrelevant. The whole thing is just to give him the ability to hire a team and have access to Google resources, from which they're hoping he'll produce something good.


At one point in my life I worked for a third-tier search engine provider with 160-210 million unqualified identities.

Basic search term aggregation was illuminating. If Ray K. can go another step beyond statistical analysis, the results could be fascinating... or it could just be a "People aren't wearing enough hats" kind of letdown.

edit: identities, not impressions


> none of these are particularly impressive from a technology point of view

Well, not now.


People who have the autonomy and backing to do "real AI" and have those kinds of achievements don't change jobs often. If someone has real accomplishments in AI, that person's going to be very hard to hire away, because quality jobs are rare.

For all the hullabaloo, Director isn't that high at Google. He's not being hired as a VP (there are hundreds of those) or Google Fellow. It's not a bad gig, and I'm sure he'll get to work on interesting stuff, but I would have imagined a guy of his caliber getting more.


Google scares me more than other companies because of their handle on AI. I wish they at least appeared to take existential risk around AI seriously.

Having Kurzweil near the top isn't exactly comforting, either.


Existential risk around AI? Really?

That's just multiplying through by -1 the typical nerd wet dream about the Singularity.

I can't even keep Rails, Apache, or git from occasionally shitting the bed; I am not worried about AI just yet.


Yes, existential risk around AI. Really.

You can look at it that way, but I think it is currently far more likely than the "typical nerd wet dream about the Singularity".

Your last sentence is a complete non sequitur to me. You are comparing the output of a large, AI focused, for profit organization to an individual of unknown skill using some very specific open source tools.

I am not losing sleep over it, but I definitely think about it from time to time.


mej10, you sound like someone associated with SingInst :]

Also, not being able to keep git etc. running is some evidence about skill level.


> Existential risk around AI? Really?

I don't think it unreasonable to brace for the case where some Skynet wannabe launches all nukes not because of a carefully considered decision motivated by hatred for humanity, but simply because it shat the bed.


While I completely and wholeheartedly agree with your sentiment, and think fear-mongering is just that, the interesting thought experiment is thinking what technology would look like for something that _didn't_ shit the bed. Or when it did, it fixed itself. That, plus a little bit of intelligence, could (would?) be scary.


> That's just multiplying through by -1 the typical nerd wet dream about the Singularity.

Well said. What is it that AI is supposed to do that's so scary, anyway?


What scary things do humans do, to each other and to lesser species? What amazing things do humans do? What concept lets humans do anything noteworthy at all? Here's a nice essay on the power of intelligence: http://yudkowsky.net/singularity/power


Since when does intelligence = human intelligence?

You would think an intelligent person would know this.


It doesn't. Hence the curiosities and concerns about general AI, non-human intelligence, hopefully created by us humans, the first intelligences intelligent enough to design their successor instead of letting a weaker non-intelligent design process such as evolution happen upon it by chance.

The site I linked to has some good introductory material to the related concepts. (http://yudkowsky.net/singularity/intro is short and to the point.) http://singularity.org/research/ lists some important texts to read, and following up on the authors will lead to more. (An author I will also recommend following for his practical pursuits is Ben Goertzel. Lastly, I think this is a good criticism of the Singularity Institute's general mode of operating: http://kruel.co/2012/11/03/what-i-would-like-the-singularity... But you should probably read that one last, if you intend to read anything at all beyond the simple introduction.)


Depending on how you define AI, anything between nothing and everything.


Are there any indications that they are not taking AI risks seriously?


To my knowledge they haven't published anything around AI risk or released any statements/positions publicly.

That doesn't mean that they aren't taking it seriously, but it is at least a strong signal to me.


Best I can find: http://www.pbs.org/newshour/bb/business/july-dec02/google_11...

SPENCER MICHELS: Sergey Brin thinks the ultimate search engine would be something like the computer named Hal in the movie 2001: A Space Odyssey.

SERGEY BRIN: Hal could... had a lot of information, could piece it together, could rationalize it. Now, hopefully, it would never... it would never have a bug like Hal did where he killed the occupants of the space ship. But that's what we're striving for, and I think we've made it a part of the way there.


Funny, because Hal didn't have a bug. It did what it did by interpreting the orders that humans gave it. Just unintended consequences.

Which is kinda the point about AI safety. We don't really know what we want, we sure as hell don't know how to formalize it, and we're lucky that we don't know how to create a real AI because otherwise it could surprise us in a very ugly way, just lawfully following the plan we gave it.


"hopefully"

Oh. Not rules then? Well, what rules? How about giving it the rule "children must not die". Apparently we're big on that one. Except, wait, we mean "US children must not die", so we can still bomb foreign suburbs where children are known to exist. But then we'll need "children must not die on US soil", because it is still possible that the intersection of (isa US-minor-citizen ?x) and (isa-resident baghdad ?x) is non-empty and the AI would be unable to act. Of course then we need "US children must not die except in a car", because otherwise the AI would have to destroy cars.

OK, so let's abandon rule-based AI altogether and model our AI on biological processes, like humans. Well, humans would never cause mass suffering of other humans. We have laws against it. Oh.

"hopefully"

"Let's play global thermonuclear war!"


The only winning move is not to program AI?

Here's an interesting discussion about AI, rules, morality, etc. http://www.reddit.com/r/Futurology/comments/y9lm0/i_am_luke_...


You say things like "Google scares me" and "Kurzweil ... isn't exactly comforting". Is this a bad thing?


It's obviously bad from his perspective.


I think his perspective is entirely reasonable. Take one small example:

http://wiki.lesswrong.com/wiki/Paperclip_maximizer


That assumes that the maximizer has the basic abilities and tools to modify its intelligence and/or launch a paper-clip manufacturing campaign. If you just hook up its outputs to a web server, it's going to take a mighty impressive AI to destroy the world.


Oh the AI-in-a-box argument... Ever read this, by chance? http://yudkowsky.net/singularity/aibox


Yes, and the results are not encouraging. Frankly, even a 0.05% rate of box-release per incident would terrify me. Over a long enough timeframe and enough attempts, 0.05% (or any nonzero, easily typeable number) approaches a 100% release rate.
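A minimal sketch of that arithmetic (the function name and numbers are mine, purely for illustration, and it assumes independent incidents):

    # Chance of at least one "box release" after n independent incidents,
    # each with per-incident probability p.
    def cumulative_release_probability(p, n):
        return 1.0 - (1.0 - p) ** n

    p = 0.0005  # the 0.05% per-incident rate above
    for n in (100, 1_000, 10_000, 100_000):
        print(f"n={n:>7}: {cumulative_release_probability(p, n):.4f}")

At a 0.05% per-incident rate, the chance of at least one release already passes 99% within about 10,000 incidents.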


> If you just hook up its outputs to a web server, it's going to take a mighty impressive AI to destroy the world.

All it would take is for the AI to just be smarter than you. Human psychology is hackable too. Cf. AI box experiment.


Intelligence creates tools--that is its nature. Modifying intelligence is easy--just fork a process--now you have two minds working on the problem. I'm human, and I could hack into other machines to gain their processing power. Even if you don't put it on the web, it could probably talk you or someone in the lab into doing so by promising stock market riches, or by offering to cure the janitor's kid of cancer. All of these things would be behaviors that further the paperclip goal.


You can't directly modify your intelligence. You can only do so indirectly using tools made possible by the physics in which you are embedded. Why assume that the framework around the AI is more permissive? Forking a process, initially anyway, only means that two identical minds are working on the problem at half-speed.

And I was, in fact, considering "AI in a box"-style scenarios. My point remains: it would be a very impressive AI that could break out. Presumably it needs at least some significant basic resources (memory, and enough CPU to work in reasonable time) to do deep human-hacking. How can it convince us to give them to it if it doesn't have them to begin with?

If we're assuming we can control its goals at all (if we can't we're already doomed by virtue of trying), will it even occur to the thing to act beyond giving direct instructions to humans? Will it have enough concept of self to conceive of escaping, to say "I think therefore I am"? Can it even realize what kinds of computational resources it needs and how to ask for them?

There may be good answers to these questions, but I don't believe they're trivial.


That may just be the aluminum from his tinfoil hat seeping into his brain.


More people die at work from Artificial Stupidity risks: second-hand smoke, open-plan offices, depleted glucose leading to bad eating choices, et al.


I am not sure I understand your point.

More people, almost by definition, currently die from every other cause than x-risks.

Google is in the best position of anyone/any company to cause an AI disaster. That they don't _seem_ to care is what bothers me.


> Google is in the best position of anyone/any company to cause an AI disaster.

I imagine the CIA and/or defense contractors are in a better position- they have a ton of money, they work in AI... oh, and they also tend to equip their projects with a ton of destructive weaponry. Which gets deployed in foreign countries, sometimes without permission.


Yes, once I see a Google-powered Reaper drone, I will be more concerned. Until they actually create a weaponized product, it's all handwaving.


To save ericb a third relink: http://wiki.lesswrong.com/wiki/Paperclip_maximizer

The key quote is at the very top:

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." --Eliezer Yudkowsky

I wonder how EY feels when he reads threads like this. Maybe he just ignores them these days?


> Google is in the best position of anyone/any company to cause an AI disaster.

Are you serious? Let's not overstate Google's position in this world.

They have a search engine, mobile platform, advertising platform and a bunch of popular websites. That's it. All of which have been done before for a decade now. Nothing they do directly influences whether people live or die.

I would be far more concerned about something like IBM Watson used in health care situations.


> They have a search engine, mobile platform, advertising platform and a bunch of popular websites. That's it. All of which have been done before for a decade now.

Also the best and brightest engineers and AI researchers, a shitload of money, a culture of ambition, and a clear intent to go after AI tech. I haven't heard of any other company like that.

> I would be far more concerned about something like IBM Watson used in health care situations.

I'd actually welcome it with open arms. It's about time for some automated diagnosis.

EDIT: I'm sorry, but I do have to address this point.

> Nothing they do directly influences whether people live or die.

Shut down Google and see what happens. Huge sectors of the economy depend directly on the search ability they provide, not to mention how many people are now using Gmail. Moreover: do you know the addresses of the nearest hospitals? The phone numbers of medical specialists? How do you navigate around places? Without Google we'd all be back to the Yellow Pages.

If there's one company humanity has really grown dependent on, it's Google. Yes, you could probably replace most of the services if needed, given enough time, but the fact is they're the best out there right now, we're all using them, and they definitely influence our lives and deaths.


>Also best and brightest of engineers and AI researchers, shitload of money, culture of ambition and a clear intent to go after AI tech. I haven't heard of any other company like that.

IBM? Who have also been doing it for decades and are far ahead of Google by all indications.

>I'd actually welcome it with open hands. It's about time for some automated diagnosis.

My point was that IBM Watson, if it made wrong decisions, could directly influence health care outcomes, i.e., life or death. Nothing Google does is comparable.

> Shut down Google and see what happens. Huge sectors of economy depend directly on search ability they provide, not to mention how many people are now using GMail.

Google is not the first or last search or email company. We would simply switch to Bing and Yahoo and the world would move on. Or have you never heard of AltaVista, Excite, or Lycos?

> Without Google we'd be all back to Yellow Pages.

Hilarious that you mention that, since that's where Google gets its worldwide Local/Places search content from. So in fact we are already using the Yellow Pages.

>If there's one company humanity really grew dependent on, it's Google.

Humanity doesn't depend on Google. Get a grip, would you?


I would argue the risks involved in creating a "real" AI are present regardless. Not to re-link, but this example is even more apropos to your question than my other answer in this post.

http://wiki.lesswrong.com/wiki/Paperclip_maximizer


AI is just smarter software at the end of the day. The risks are still far greater for a simple control system used in a Boeing 777 or heart pacemaker than for an AI search engine.

It's all about Risk Management 101: Risk = Impact x Probability.
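To make the formula concrete, here is a toy calculation (all figures are invented for illustration; nothing here comes from real failure data):

    # Risk = Impact x Probability, with made-up numbers:
    # impact of a failure on an arbitrary 0-100 scale,
    # probability of a serious failure per year of operation.
    def risk(impact, probability):
        return impact * probability

    systems = {
        "777 flight control": (100, 1e-4),
        "heart pacemaker":    (90, 1e-3),
        "AI search engine":   (0.5, 1e-2),
    }
    for name, (impact, prob) in systems.items():
        print(f"{name}: risk = {risk(impact, prob):.4f}")

Of course, the disagreement in this thread is really over which numbers to plug in.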


Right, when the impact is calamity, and the probability is high, risk is astronomical. Which is why I'm bothering to make this point in the first place. See the other arguments in this thread as to why, with an intelligent AI, what it starts out controlling is unlikely to have any bearing on what it is capable of gaining control of... In risk management terms this is about as serious as it gets.


That Artificial Stupidity is more deadly than AI. (Perhaps this is because I don't see AI as an existential risk, unless it is used toward bad ends such as in war.)

We're not perfect as humans, but I don't think we're stupid enough to build something we can't control. We don't even know what consciousness is, much less whether artificial consciousness is possible.

I do know that it would save lives to liberate humanity from drudgery and stupidity.


I haven't seen any reasons around why an AI must be conscious to cause harm or existential risk. My opinion is that an x-risk causing AI would almost certainly not be conscious like humans.

As for human stupidity, maybe you just have more faith in humans than I do (but I am not entirely pessimistic). Let me ask a related question, to try and gauge this more deeply: do you think anyone will ever release an engineered pathogen into the wild?


The detractors are few when the discussion is about IBM's Watson. It "competed against" the best Jeopardy players and is now said to be "going to medical school."

http://bits.blogs.nytimes.com/2012/10/30/i-b-m-s-watson-goes...

But it's still a machine. Albeit with very sophisticated and very useful capabilities.

And as far as I know, it has not expressed self-awareness.

Yet given the fundamental nature of how Watson works it would seem logical to think that Google might one day have to compete with IBM in a number of areas where Watson excels -- medical diagnosis for example. Or worldwide financial analysis.

To do so, Google will need strong voice recognition and text-to-speech capabilities which Ray Kurzweil can help with immediately.


My hunch is that voice recognition and text-to-speech are the least of the concerns that Kurzweil will be dealing with. My guess is that his insights would be more applied to semantic reasoning using massive datasets (i.e., the Web), to build the engine that does the reasoning and answering rather than just the input/output layer.


The problem with giving truly gifted inventors all the resources they want is that they often end up inventing Segways.

The kind of desperate, burning passion that Wozniak had when inventing the Apple computer doesn't seem to be something that any established company has ever been able to intentionally recreate.


Segway and all that Bell Labs stuff...


That actually is a good point. The thought crossed my mind while I wrote the above, followed instantly by "well, Bell Labs, and to some extent IBM in the early days, was different." I can't quite put my finger on why. Some sort of constrained, institutionalized invention in which talented but not "rockstar" inventors were allowed to simply sit around and try to impress each other (this was probably somewhat accidental), but at the same time were expected to be generating innovations and filing patents (this was the intentional part).

There was a fairly lively discussion (either here or on Slashdot) a few years ago about whether or not Google, with its 20% rule, could be the next Bell Labs. Driverless cars and Google Glass are cool, but it wasn't quite the landslide of cool everyone was hoping for back then.

It might just come down to the difference between "create a world changing technology for us, cost is no object" and "go work on whatever you think is cool, and don't worry about the bills".


I can't upvote this hard enough


>Google already has goldmine of info about us; knows where we are, interests, friends, reads our mails before we do. What if it understands?

I do not think that I am alone when I say "please, for the love of god, don't let this happen."


No, you're not alone, that's why I try to put my eggs in several baskets, some of them being my own.


I appreciate the comments from sceptics, but I look forward to seeing what progress they make. I hope to see a lot of progress using deep neural networks, HMMs, etc.

I remember bringing a bumper sticker "AI, it's for Real" home from the 1982 AAAI conference and putting it on my car, so I may be a little biased.


The number of cores Google has in their server farms is such a large number you need to use exponential notation.

To say it's an AI researcher's paradise is putting it mildly.


His resources will grow exponentially!


I doubt that Kurzweil has truly been granted "unlimited" access to Google's resources. His title of "director of engineering" sounds impressive, but there's so far little indication he will be a very senior figure at the company.


Considering the resources given to something as standard as a team working on a better search algorithm, which is thousands of servers, I'm pretty sure he's going to have access to pretty much whatever he wants.

A petabyte of memory? Two hundred thousand cores? Done and done.


Yeah, is he allowed to hire absolutely anyone he wants?

Google has billions in the bank and can afford to be silly for quite some time, but probably not unlimited silly.


I'll enjoy the ensuing hilarity within google if "Unlimited Resources(tm)" leads to a request for a datacenter full of GPUs...

Just sayin'...


Not to mention if anyone will have the resources to create and host the singularity, it would be Google.


sorry to be off topic, but the article used this word 'breathless.' I've noticed this word constantly appearing since about 18 months ago. Prior to that, I don't think I encountered it much except in reference to The Corrs' pop hit Breathless (which drew heavily on an 80s Belinda Carlisle track). Am I the only one puzzled by its sudden surgence?

Sidenote: weird how 'surgence' isn't a word in the dictionary


This looks like a lot of "20% time", concentrated in one person.


This has got to be some kind of gag. Kurzweil is such a known kook that hiring him to work on AI is like hiring William Shatner to collaborate on your music project: the only possible way it could be legitimate is in an ironic sense.


Kurzweil did a lot of good work on AI a while back; he came by his fame legitimately, as an engineer and startup founder. If he had then kept his mouth shut about his predictions for the future, nobody would find his hiring at Google particularly surprising.

(Don't believe me? Google it.)


I hope he'll use some of those resources to upload existing intelligences such as, oh, me. Of course, then I might end up in Google Drive or maybe a low-rated app in the Google app store....


More proof that Google is SkyNet.


question for you: how does Kurzweil's role in Google compare to Regina Dugan's?


I think that the Kurzweil skeptics are focusing entirely on the wrong thing when they criticize Kurzweil's technical chops. He isn't being hired as an engineer; he's going to be Director of Engineering. If any of his ideas or proposals are fanciful, there are plenty of people, including Peter Norvig, to set him straight.

What is noteworthy about this hiring is that for the first time, a company with serious technical ability and resources is going to be tackling strong AI. No, there was nothing in the announcement that directly mentions strong AI, but I contend that they will have to tackle at least certain aspects of strong AI in order to make significant progress in speech recognition. You can only get so far using the standard tricks of traditional natural language understanding (NLU). At some point, the system is going to have to have an abstract model of the way the world works in order to mimic the assumptions that human intelligence requires in order to understand language.

This shouldn't be all that surprising given what they are trying to do with Google Now. Google Now is different from other products using speech recognition because it is active; it behaves more like an independent AI agent. Think about what Google will have to do to improve its performance: it will need to build a model of the user's behavior in order to tune the probability distributions that underlie the best interpretations of what someone is searching for.

You see, language understanding in humans sits upon a sizable foundation of innate assumptions about the way the world, including other humans, works. My claim is that Google will have to duplicate much of this in order to get better technical performance from Google Now as well as search. It will need prior knowledge about people in general and will have to modify that knowledge over time based on what it learns about a particular user. What I have just described is the missing component in efforts to develop strong AI -- the mechanism by which a distributed AI can learn to mimic aspects of human thinking via evolution. (I'm a functionalist, as you've probably gleaned already, so I would argue that the only way to improve NLU is to ground the processing in "real" understanding on some level.)
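As a toy sketch of that last idea (entirely my own construction, with invented names and numbers, not anything Google has described): start from a population-level prior over what an ambiguous query means, then shift probability mass toward the interpretations a particular user actually picks. This uses a simple linear blend rather than a full Bayesian update:

    from collections import Counter

    def update_intent_model(prior, observed_choices, weight=0.5):
        # Blend a population-level prior with this user's observed choice
        # frequencies, then renormalize into a probability distribution.
        counts = Counter(observed_choices)
        total = sum(counts.values())
        blended = {}
        for intent, p in prior.items():
            user_freq = counts[intent] / total if total else 0.0
            blended[intent] = (1 - weight) * p + weight * user_freq
        z = sum(blended.values())
        return {intent: v / z for intent, v in blended.items()}

    # Population prior says "jaguar" usually means the car...
    prior = {"jaguar (car)": 0.7, "jaguar (animal)": 0.3}
    # ...but this particular user almost always means the animal.
    history = ["jaguar (animal)"] * 8 + ["jaguar (car)"] * 2
    print(update_intent_model(prior, history))
    # -> the animal reading now outweighs the car reading for this user

A real system would condition on far more (location, time of day, session context), but the shape of the problem - a general prior specialized per user - is the one described above.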

Now, Canonical could build a Google Now type system for their Ubuntu phones and seed the learning algorithms with an open source, wiki-AI type project -- you don't need Google to create such a system. But Google has the monetary incentive, resources, and now the technical vision (Kurzweil) to justify pursuing this on a large scale, just as they did with their search engine.

What was the primary thing that Apple did to reach their current level of success? They tried things that other people poo-pooed. It wasn't that the iPhone and the iPad were technically that amazing -- that's why people are unimpressed with the quality of the patents Apple has asserted -- it's that they were the first to seriously try a lot of the features that are now standard. As is often said around here, 90% of success is showing up. The company I work for directly competes with Google in areas related to Google Now and I tell you I got a little nervous when I heard they had hired Kurzweil. Again, not because of his technical abilities but because now we have to worry that Google is going to come up with something totally crazy out of left field. The fact that Kurzweil is crazy is precisely why I worry.


I know of a cool way he could test the unlimitedness of his resources.


... he could buy a Nexus 4 on Google Play and try to get support for it from a real live person?


I bought a Nexus 4 and phoned up phone support to ask about the shipping. I was immediately connected to a live person and was told when it shipped and when to expect it.

Love the phone, totally worth it.


Just being able to buy a nexus 4 on google play would be impressive on its own...





