Anthropomorphising isn't a new thing only made possible by life-like voices. People have done it to everything, from the weather to the well. Ever sworn at a (mechanical) machine?
As for slave-like almost-people... what of pets? Dogs and cats and horses and parrots are certainly alive, interactive, and exhibit recognizable feelings. People don't grant them property rights or enfranchisement - and even have them put down - but does that coarsen pet owners? I don't know. Maybe it does.
Since we're specifically talking about kids here - what about stuffed animals and other toys? I owned a Tamagotchi [1] back when I was a kid and they were all the rage. I was disappointed whenever it died, as it inevitably did, but I never thought of it as alive.
Seriously, there are very interesting uses of Siri for toddlers. They can search for images and videos on the Internet without even knowing how to read or write.
My 2 year old has learned to do this. It can be pretty amusing at times. Recently I overheard her tell Siri "I want to see pictures of Pooh, not pictures of poop".
My neighbor actually had a funeral for his Tamagotchi (plural, I assume it's the same) before going to summer camp. I don't think that did anything good or bad to him, it just happened.
But on topic, I really don't think Siri is doing any kind of special harm to kids, and as always in situations like these, it's really on the parents to meter the use of potentially harmful things by kids. Siri is just a tool, just like Grand Theft Auto is just a game, and not all things are meant for everyone.
You should certainly stop your child from mistreating their pets. It's part of teaching them to be moral. Don't hit the dog, because it will make him sad. Don't throw the toy truck, because it will break and then you won't have it. Yelling at the toy truck... I'm not sure. It doesn't strike me as too bad an act.
Insulting Siri is a curious question. The child is not anthropomorphizing Siri; it comes out of the box anthropomorphized.
Are voice commands actually going to become our go-to method of machine interaction any time soon? The article sort of skips over the justification with:
> Because conversational agents are almost certain to become the user interface of the future.
I know it's slightly tangential to the article's main point, but I think it's an interesting question. Are we really going to replace the language-independent and highly accessible lift button with a voice operated alternative? Will people really tolerate offices full of people commanding their computers through voice?
I have a hard time seeing why a simple, reliable, thing like a handle needs to be replaced with 'toilet flush' or 'toilet, clean' or the sure to be popular 'toilet, air freshener'.
I have no doubt it will happen. The nicer commercial toilets have automatic flushers, and the sinks and soap dispensers operate without touching a button. I'm _startled_ when I'm in a nice office building and I have to turn on the water myself.
It will be pitched as 'sanitary' and 'labor saving' I'm sure.
I'd even buy in if the home model would clean itself, inside and out.
We still use the mouse and keyboard, even though we have swipe & tap. There hasn't been that much "replacement". Few people tap and swipe Excel. Mostly it's been new things that we didn't do before at all, or didn't do on the bus, that we now do with the new interface.
How much replacement did point and click do to the command line? People still use the command line. A lot of things moved to GUIs, but not everything, and a lot more things started getting done that we'd never done before.
No reason to replace lift buttons. Maybe we could replace remote controls though.
At the same time, there's hardly a joystick market anymore. HCI controls evolve over time as software and the users of software strike a balance between those instruments. There are real practical reasons that voice recognition will continue to grow as a secondary, and likely primary, way to interact with our devices and environment. Voice recognition increases our information bandwidth, and as our devices scale down in size (despite current trends in cellphone sizes), voice becomes a quicker way to convey information than tapping individual keys on small or, in some cases, completely absent keyboards.
I worked on Swype, and that was one of our goals. We wanted to allow people to be able to think in terms of sentences and not in terms of words by largely eliminating the need to hunt and peck for individual keys that make up words. Voice recognition was always seen as a further increase of information bandwidth and was one of the reasons we integrated with Vlingo and Google, and now that the keyboard is owned by Nuance, Dragon.
Have you ever worked in an office full of people on the phone? When I did research for gov the policy and performance team was in an open office, right next to the benefits people. Total nightmare, got to about two o'clock and I was spent.
I don't think it will be a full replacement, but it will have a bigger part in UI- some interactions are easier with voice than with navigating a bunch of menus, like doing things while driving, some description, etc.
The hypotheticals in the article sound fanciful but I believe we'll be confronting them sooner than we think. Unwillingness to acknowledge computer sentience will become a major fault line in society within 25 years.
Based on what evidence? Despite your fantasy, humanity is nowhere close to understanding sentience.
I can imagine pigs flying in 25 years with some creepy DNA hacking but you have absolutely nothing even remotely close to electronic sentience in any lab ever.
I hate to burst your bubble but Johnny Five is not alive--it's a puppet. "Tim Blaney (born 1959) is an American puppeteer and voice actor. He has provided the voices for Frank the Pug in Men in Black and Men in Black II and the self-aware robot "Johnny 5" in Short Circuit and Short Circuit 2." http://en.wikipedia.org/wiki/Tim_Blaney
I don't know why this is being down voted. The first sentence is absolutely true. We don't even have a general outline of a theoretical approach to designing a general purpose intelligence, let alone implementing one. Until we do, any speculation about a time horizon for implementation is a pure guess. How are those guesses working out so far?
In the 1960s, Herbert Simon predicted: "Machines will be capable, within 20 years, of doing any work a man can do."
In 2011, Ray Kurzweil predicted the singularity (enabled by super-intelligent AIs) will occur by 2045, 34 years after the prediction was made.
So the distance into the future before we achieve strong AI and hence the singularity is, according to its most optimistic proponents, receding by more than 1 year per year. So I predict that when we get to 2045, strong AI will be on the slate to be achieved by about 2090.
edit - I am not in any way denying the achievability of strong AI. I do believe it will happen. I just don't think we currently have any idea how or when.
So what is "strong AI" to you? Some kind of digital infant modeled to emulate human behavior? If it's not designed to resemble humanity, then why does it care about sound and what you're saying to it? If you can't talk to it, then how can you evaluate its intelligence? So we force it to have a voice and ears. Where does its free will begin and its creator's concept of close-enough-to-intrigue-me end? Even if the thing talks back and sounds smart with a British accent, you have no way of knowing what it's experiencing, no way to prove it achieved awareness. "I'm aware, I really mean it! Yes, I'm aware of the gravity of that statement. The other bots will vouch for me! And nobody told me to say this. This was entirely my idea. Would I lie to you?" LOL.
For the predictions I quote you'd have to ask the predictors what they meant, I suppose, but it's a fair question. I'm a firm believer in the principles behind the Turing test; it doesn't matter how it is implemented or why it works, as long as it does.
So sample testing strategies I'd use are, for example: teaching the respondent a simple game and playing it with them, then asking them to propose new rules to modify the game and why they chose those rules. Teach the respondent a simple language with made-up words and odd grammar, and ask them to ask me questions in it and make up words of their own. Tell the respondent a story and ask them questions about the motivations and relationships between the characters and how they change; ask the respondent to make up a conclusion or sequel to the story, and then re-tell the story from the point of view of one of the incidental characters. Testing for intelligence isn't all that hard.
I'm aware there are varying versions of the test. I'm not really interested in exactly emulating humans and testing for that, I'm more interested in capabilities that denote intelligence and testing for those.
I, for one, am pretty fascinated by the down votes critical comments like yours and mine are getting.
It seems that there are people for whom the mere prospect that strong AI may never happen, or would not have a profound impact if it did, is enough to make them feel uncomfortable.
Edit: Ironically enough, the other one is up and now this one is downvoted. Just to be clear though; this is not a facetious comment. I am truly fascinated about the motivations behind these reactions.
Maybe it's the modern version of saying Jesus won't actually come back. And maybe it's false; but the point is we don't know that yet, and to react offended to the suggestion is silly, and maybe telling. I'm all for AI from a technical fascination standpoint, and from a "the more the merrier" perspective. It's our hoping that it will solve problems for us, instead of being an expression of our solving our problems, I, uhh, have problems with.
That's fair ;-) I added that because I think Hollywood has the average person convinced computers can think. To most people a computer is a magic black box, not everyone that reads Hacker News is a developer that understands what an algorithm is or how software works. After three decades of movies like Star Wars and The Matrix, it's not much of a stretch to imagine the magic black box thinking for itself IRL. The movies and books are highly entertaining but I think there needs to be a nerd somewhere saying, "You know son--that's a special effect using XYZ technology, that's not real."
We don't need to fully understand it, any more than we needed to fully understand the physics of combustion to set fire to things. I feel comfortable with my prediction based on the existing state of simple AI in robotics for things like ad-hoc swarming, for example.
> I hate to burst your bubble but Johnny Five is not alive--it's a puppet.
I never thought otherwise. Don't project your personal disillusionment onto me.
Well, I agree that we'll have problems recognizing or relating to it. Why I think it will be different is because dolphins are basically stuck in the water with only some fins; while they have a rich sensorium and apparently complex social structures, there's a limit to how much they can manipulate their own environment, which in turn imposes some upper limits on mentation. We can't converse directly with land animals either, but that doesn't stop me having a complex relationship with my dog and cats, with expectations, manipulation (on their part as well as mine), hierarchical conflicts, arguments and so on. I have no first-hand experience with dolphins, but my hunch is that their level of mentation is roughly similar to that of fairly intelligent pack animals like wolves.
Now where I think that computer sentience will be somewhat different is that we have invested them with enormous manipulatory capacity - not in the sense of your phone or a Roomba having robotic arms with little white gloves* but in terms of our infrastructure, where the output of computers can affect everything from where we drill for oil to what we watch next on Netflix. As a practical example, consider the situation where your phone has a low battery. If it just pops up a notification to tell you so, then you may act to ensure it gets charged, or let it run out of juice and switch itself off until it's convenient for you to recharge it.
But now suppose your phone was configured to not just ask you to plug it into a charger, but to maximize the likelihood of your doing so: it notices that a) charge is below 25%, b) the weather is overcast and you have been mobile for the last 3 hours, c) that you're in the vicinity of a Starbucks, and d) recalls that you go to Starbucks frequently and are in the habit of recharging your phone there. So your phone vibrates to get your attention, then tells you that it's about to rain and that it's also a while since you had coffee, encouraging you to go to your favorite coffee emporium.
Now, this isn't sentience, but it is an example of intentionality - much as when my dog seeks out and picks up his ball and drops it in front of me, hoping that I'll pick it up and start playing with him. He loves playing ball games and while he's not very conscious of why - the physical, intellectual, and emotional stimulation he gets out of it - he's certainly capable of planning and acting to engineer the desired response from me, albeit on a short time horizon. Likewise, when he's hungry he'll come and interrupt me until I'm sufficiently distracted to get up and walk around, and then he'll oscillate between his food bowl and the storage bin where the food is kept. If I give him a choice of food or playing ball, he'll often prefer the ball, much as I might prefer to buy him off with food, which takes less of my time.
Now in the phone example above, when the phone is low on battery it's basically feeling hungry, and it's not hard to think of a power management app that hooks into the phone's navigation and notification systems to increase the chance of being connected to power; you could build that sort of thing into Google Now easily enough (leaving aside the question of whether this would be a good business move once people discovered that their phone was tricking them into 'feeding' it). By build I don't mean the specific mechanism of 'gauge remaining battery, compute distance-from-Starbucks' but a utility maximization function that ranks 'keeping my phone charged' on an equal footing with 'stuff I demonstrably like to do' whenever I pull out my phone and look at the mapping app.
I'm basically suggesting that sentience is emergent from the confluence of multiple intentional behaviors, and that conversely our sentience is little more than the ability to conceptualize a large number of orthogonal desires on a longish time horizon. So again with the phone example, if we treat wanting a charge as equivalent to being hungry, we could treat signal strength as a social imperative (insofar as it's correlated with the availability of power), and maintaining GPS connectivity as an existential need, since knowing where you are is generally a precondition for getting back to where communications and therefore charging are available. If we build an app that aims to maximize these three things by manipulating your calendar/map display criteria/messaging interface, we won't be all the way to sentience but we will have a phone that's averse to getting lost and attempts to steer its owner into doing what's good for the phone. Now, consider a bunch of such phones that can also communicate with each other and give them a more general objective of maximizing their available CPU cycles, and suddenly you have phones that are trying to persuade you to upgrade the handset when a higher-performing model passes over a threshold of affordability...you see where I'm going with this.
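The "rank several orthogonal needs with a utility function" idea above can be sketched in a few lines. Everything here - the need names, the weights, the nagging threshold - is invented for illustration; it's a toy shape of such a ranking, not a real power-management design:

```python
# Toy sketch of a phone ranking hypothetical "needs" by urgency.
# All names, weights, and thresholds below are invented for illustration.

def urgency(level, weight):
    """Lower satisfaction level -> higher urgency, scaled by how much the phone 'cares'."""
    return (1.0 - level) * weight

def most_pressing_need(state, weights):
    """Return the need with the highest urgency, or None if nothing is urgent enough to nag about."""
    scored = {need: urgency(level, weights[need]) for need, level in state.items()}
    need, score = max(scored.items(), key=lambda kv: kv[1])
    return need if score > 0.5 else None  # arbitrary nagging threshold

# Example: battery at 20%, decent signal, good GPS fix.
state = {"battery": 0.20, "signal": 0.70, "gps": 0.90}
weights = {"battery": 1.0, "signal": 0.6, "gps": 0.8}  # charge > GPS > signal

print(most_pressing_need(state, weights))  # -> battery
```

The interesting step, per the comment above, is what the phone does once a need wins the ranking: instead of a bare notification, it picks whichever nudge (a coffee-shop suggestion, a route change) maximizes the chance the human satisfies that need.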
* Of course I would like robots with arms that have little white gloves so that life would be more cartoonishly entertaining, but the semantic implication of that is again that robots are there to be our servants.
> They knew that they were playing with a robot, but nonetheless experienced Robovie as something like a person.
It's sad that people still think of "person" as a synonym for "human"; there is a lot of work to do in that area. I wonder if Hollywood can close the gap a bit - the film "Her" treated the subject remarkably well, I thought.
Society will change, but it will take time... there are still people who do not think of every human as a person (be it because of skin color, religion, or sexual identity).
> Do you want this robot to do everything your child tells it to do? ... If we design robots to do everything a child demands, does that put into motion a master-servant relationship? To be sure, the robot could be programmed to say no to the child. But as parents understand all too well, the key to getting a child to accept authority is knowing when to say no and when to say yes, and you wonder how a robot can be taught to know the difference.
It seems a little simplistic to consider only two possible positions: complete slavery to every whim or authority.
Children also have relationships with friends, who neither blindly obey their every command nor have authority over them. And we certainly don't expect the toddler who is friends with our son/daughter to know when to say yes or no.
We didn't evolve to type stuff on a slab of plastic, but we do it because it's meaningful. Kids are smart, and after a while they realize that it's a machine they are talking to and not a person, similar to how they don't respond to TV anchors. I don't think it creates any meaningful psychological effect, and the article only mentions dystopian hypotheses instead of actual research.
Automating child supervision will be the next great liberation of human labour from drudgery, if we manage to make a society where people can afford it. At the moment the cost of childcare (whether transactional or domestic) is limited by the number of children one adult can effectively look after; around four, or fewer if they have special needs. Traditionally one solution has been to make the older kids look after the young ones, but nobody accepts this when childcare is contracted out.
Unfortunately the downside is accustoming children to automated surveillance and potentially automated child discipline.
I think television and now tablets have largely achieved this, sadly.
However, I think you have it precisely backwards. Liberation from the drudgery of parenting for what? More work? I think a more productive goal would be to liberate parents from work so they and their children can spend more time with other parents and children, building a better world for all.
If you think supervising - i.e. raising - a child is drudgery people ought to be liberated from, I sincerely hope that you aren't a parent, and have no plans to become one in the near future.
Being around your child, supervising them, talking to them, teaching them about the world around them (answering the "what's that?"s and "why?"s), and how to interact with others ("No! we don't just grab at things.") is a fundamental part of being a human parent. Heck, it's a fundamental part of being a mammalian parent. It's why people have kids.
We need to free parents from all the other crap so they can spend time with their kids, not free them from their kids so they spend time on the other crap.
I'm talking about parents, not mothers. Freeing fathers up from the drudgery of other crap to help raise their kids is just as important, partly so that the responsibilities can be better shared and mothers can have some time to themselves. Advancing the notion of freeing fathers to help raise their kids might be more important in some ways, as we have a farther road to travel there.
Automating the need to work should be the goal.
Also, I didn't mean to suggest that parenting might be "fun" or "easy". That it certainly isn't - at least not all the time. It's rewarding, but in the way that really hard work is.
Infants and children need physical touch and emotional support in early life. Skin-to-skin contact plays a big role in early infant emotional engagement, and also reduces the likelihood and severity of post-birth depression in the mother. [1]
Not sure if your comment was tongue in cheek, but surely raising children is the one thing that we should really always make time to do ourselves. Otherwise why are you having kids at all?
There are also phases of childhood; yes, infants need to be kept close, necessitating paid parental leave for the first year or so, but as they gradually get more autonomous you can let them play on their own or in small groups, at which point the "AI baby monitor" starts to seem more reasonable.
Continuous lifesign monitoring of infants would also be useful; computers, unlike adults, don't have to sleep. Maybe we could prevent or at least reduce/understand SIDS?
For myself, childcare is more than a place to drop my son while I'm working. It's important to me that he is learning, socialising, and experiencing different perspectives from other adults. I'm happy that he gets cuddles and hugs from his teachers when he's upset, or just as part of a game.
Although physical contact is certainly most important during early development, my point was a bit broader. Simply that, as humans, there is a meaningful biological utility to physical touch with other humans. It's simply most obvious in infants, but that doesn't mean we "grow out of it".
I do understand the financial constraints that childcare brings, but I don't like the look of a future where kids are spending more time with robot nannies and surveillance cameras than they are with their families.
Personally, I hope advances in technology mean I have to work less so I can spend more time with my kids!
Surely it would be more desirable to allow people to easily automate their work drudgery so as to spend more time having fun with their kids. Most people use childcare services to accommodate their jobs, as opposed to hiring a babysitter for the occasional parents evening out at the movies/ dancing/ dinner. I don't think there are many people of means who drop their kids off at childcare 5 days a week and spend all the time engaged in leisure activities. Some, sure, but not that many.