Google is in the best position of anyone/any company to cause an AI disaster.
I imagine the CIA and/or defense contractors are in a better position: they have a ton of money, they work in AI... oh, and they also tend to equip their projects with a ton of destructive weaponry. Which gets deployed in foreign countries, sometimes without permission.
> Google is in the best position of anyone/any company to cause an AI disaster.
Are you serious? Let's not overstate Google's position in this world.
They have a search engine, a mobile platform, an advertising platform and a bunch of popular websites. That's it. All of it has been done before, for a decade now. Nothing they do directly influences whether people live or die.
I would be far more concerned about something like IBM Watson used in health care situations.
> They have a search engine, a mobile platform, an advertising platform and a bunch of popular websites. That's it. All of it has been done before, for a decade now.
Also the best and brightest engineers and AI researchers, a shitload of money, a culture of ambition and a clear intent to go after AI tech. I haven't heard of any other company like that.
> I would be far more concerned about something like IBM Watson used in health care situations.
I'd actually welcome it with open arms. It's about time for some automated diagnosis.
EDIT: I'm sorry, but I do have to address this point.
> Nothing they do directly influences whether people live or die.
Shut down Google and see what happens. Huge sectors of the economy depend directly on the search they provide, not to mention how many people are now using GMail. Moreover: do you know the addresses of the nearest hospitals? The phone numbers of medical specialists? How do you navigate around unfamiliar places? Without Google we'd all be back to the Yellow Pages.
If there's one company humanity has really grown dependent on, it's Google. Yes, you could probably replace most of its services if needed, given enough time, but the fact is they're the best out there right now, we're all using them, and they definitely influence our lives and deaths.
> Also the best and brightest engineers and AI researchers, a shitload of money, a culture of ambition and a clear intent to go after AI tech. I haven't heard of any other company like that.
IBM? Who have also been doing it for decades and, by all indications, are far ahead of Google.
> I'd actually welcome it with open arms. It's about time for some automated diagnosis.
My point was that if IBM Watson made wrong decisions, it could directly influence health care outcomes, i.e. life or death. Nothing Google does is comparable.
> Shut down Google and see what happens. Huge sectors of the economy depend directly on the search they provide, not to mention how many people are now using GMail.
Google is not the first or last search or email company. We would simply switch to Bing and Yahoo and the world would move on. Or have you never heard of Altavista, Excite, Lycos?
> Without Google we'd all be back to the Yellow Pages.
Hilarious that you mention that, since that's where Google gets its worldwide Local/Places search content from. So in fact we are already using the Yellow Pages.
> If there's one company humanity has really grown dependent on, it's Google.
Humanity doesn't depend on Google. Get a grip, would you?
I would argue the risks involved in creating a "real" AI are present regardless. Not to re-link, but this example is even more apropos to your question than my other answer in this post.
AI is just smarter software at the end of the day. The risks are still far greater for a simple control system used in a Boeing 777 or heart pacemaker than for an AI search engine.
It's all about Risk Management 101: Risk = Impact x Probability.
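To make that concrete, here's a toy sketch of the formula in Python (every impact and probability figure below is a placeholder I made up; the whole disagreement in this thread is about what the real values would be):

    # Risk = Impact x Probability, as a toy expected-loss calculation.
    # Every figure here is an invented placeholder, not a claim.

    def risk(impact, probability):
        """Expected loss: impact (lives affected) x probability (per year)."""
        return impact * probability

    scenarios = {
        "pacemaker control bug": risk(impact=1e5, probability=1e-4),
        "777 flight control bug": risk(impact=5e2, probability=1e-5),
        "runaway general AI": risk(impact=1e9, probability=1e-7),
    }

    for name, r in sorted(scenarios.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{name}: ~{r:g} expected lives lost per year")

Plug in different assumptions and the ranking flips, which is exactly the argument being had here.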
Right: when the impact is calamitous and the probability is high, the risk is astronomical. Which is why I'm bothering to make this point in the first place. See the other arguments in this thread as to why, with an intelligent AI, what it starts out controlling is unlikely to have any bearing on what it is capable of gaining control of... In risk management terms this is about as serious as it gets.
That Artificial Stupidity is more deadly than AI. (Perhaps this is because I don't see AI as an existential risk, unless it is used toward bad ends such as in war.)
We're not perfect as humans, but I don't think we're stupid enough to build something we can't control. We don't even know what consciousness is, much less whether artificial consciousness is possible.
I do know that it would save lives to liberate humanity from drudgery and stupidity.
I haven't seen any reason why an AI must be conscious to cause harm or existential risk. My opinion is that an x-risk-causing AI would almost certainly not be conscious the way humans are.
As for human stupidity, maybe you just have more faith in humans than I do (but I am not entirely pessimistic). Let me ask a related question, to try and gauge this more deeply: do you think anyone will ever release an engineered pathogen into the wild?