I think you're mixing up concerns from different contexts. For AI as a generalized goal, where there are entities we recognize as "like us" in quality of experience, yes, we would expect them to have something like our emotions. For AI as a tool, like this Bing search, we want it to just do its job.
Really, though, this is the same standard that we apply to fellow humans. An acquaintance who expresses no emotion is "robotic" and maybe even "inhuman". But the person at the ticket counter going on about their feelings instead of answering your queries would also (rightly) be criticized.
It's all the same thing: choosing appropriate behavior for the circumstance is the expectation for a mature intelligent being.
Well, that's exactly the point: we went from "AIs aren't even intelligent beings" to "AIs aren't even mature" without recognizing the monumental shift in capability. We just keep insisting that they aren't "good enough" while the goalposts for "enough" keep moving.
I'm glad to see this comment. I'm reading through all the nay-saying in this post, mystified. Six months ago these complaints would have read like science fiction, because what chatbots could do at the time was absolutely nothing like what we see today.
No, the goalposts are different according to the task. For example, Microsoft themselves set the goalposts for Bing at "helpfully responds to web search queries".
Who is "we"? I suspect that you're looking at different groups of people with different concerns and thinking that they're all one group of people who can't decide what their concerns are.