Well, it demonstrates that ability. You could imagine a more refined version of this being a better version of Google's knowledge boxes (which also have the overconfidence issue). Similarly, knowing when it's not confident enough to give a good answer would be a key skill for a bot with superhuman research/summary abilities.
Imagine a bot that searches ancient texts to answer your research questions. You ask, "was Plato nice?"
If everyone wrote about how nice Plato was, you want it to say "Plato was nice."
If it doesn't know, you want it to say, "There is no evidence one way or the other."
It may be impossible for you to verify the answer yourself, since you don't read ancient Greek, so you need a system that has shown a robust ability to know when it doesn't know something.
I used GPT-3 to help write a formulaic job negotiation email that I was stressing over. I would have spent an hour trying to get the language right; instead it took 30 seconds to write the prompt.