"Makes stuff up." And it's us, the users, who have to realize this. I mean, I wouldn't blame OpenAI for this, at least not at this point, and the company will have to live with it, look how it can turn it into something useful instead, since there's no one to complain to.
Now, humans could very well also be statistical inference machines. But they have way more semantic-level tricks up their sleeves than ChatGPT circa 2023 does.
Markov chains are great for modeling human language and human decision making. ChatGPT demonstrates this, and the results are not trivial. I don't see it being glorified beyond what it plainly does.
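To make the comparison concrete: a Markov chain language model just learns, from a corpus, which word tends to follow which, then samples its way forward. A minimal bigram sketch (toy corpus and function names are my own, and real LLMs condition on far more context than one previous word):

```python
import random
from collections import defaultdict

def train_bigram_chain(text):
    """Build a bigram transition table: word -> list of observed successors."""
    words = text.split()
    chain = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def generate(chain, start, length=10, seed=None):
    """Walk the chain: repeatedly sample a successor of the current word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break  # dead end: this word was never followed by anything
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
chain = train_bigram_chain(corpus)
print(generate(chain, "the", length=6, seed=0))
```

Duplicates in the successor lists are what make frequent continuations more probable, which is the whole "statistical inference machine" trick in miniature.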