Well, of course. GPT-3 has no underlying model of meaning. It's just autocomplete with a bigger dataset. Used on natural language, it produces text that looks reasonable for about three paragraphs. Then you realize it's just blithering and has nothing to communicate. (Like too many bloggers, but that's another issue.)