Worth noting that this part of the docs is from a while ago, so it wasn't generated by the new model. While I do think this is hilarious, I wish GPT came with a "just make shit up" parameter I could adjust to my needs (i.e., making my friends laugh vs. trying to get work done).
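(For what it's worth, the closest real knob is the sampling temperature: it doesn't control truthfulness, only randomness, but in practice low values give focused, conservative answers and high values give looser, more inventive output. A rough sketch using the OpenAI Python SDK; the model name and the ask() helper are just my own illustrative choices:

    from openai import OpenAI  # pip install openai

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(question: str, creativity: float) -> str:
        # temperature near 0.0 -> focused, repeatable answers ("trying to get work done")
        # temperature near 2.0 -> looser, more inventive output ("making my friends laugh")
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # stand-in model name; use whatever you have access to
            messages=[{"role": "user", "content": question}],
            temperature=creativity,  # valid range is 0 to 2
        )
        return response.choices[0].message.content

    print(ask("Summarize the history of the stapler.", creativity=0.0))  # work mode
    print(ask("Summarize the history of the stapler.", creativity=1.8))  # comedy mode

Note that even at temperature 0 it can still confabulate; the dial changes how wild the output is, not how honest.)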
I once had GPT-3 generate an in-depth interview that never took place, but which was highly plausible (the company existed, the author existed, the company published the author's books in real life, and the person conducting the interview was one of its high-level employees in real life...).
I was sure it was a case of "overfitting", i.e. that it had seen the interview somewhere, but as far as I could tell no such interview ever took place. So while GPT-3 can be much better than Google for "asking questions", and gives genuinely good advice most of the time, it sometimes responds with complete nonsense delivered with a very high degree of confidence, which seems like a dangerous combination. (Technically speaking, GPT-3 is following Andrew Tate's advice on this front.)
I might have failed to get my point across in my rambling... my main interest is in using GPT as an "intelligent Google" to answer real questions based on the billions of pages of text it's read.
Unfortunately, while 95% of the time you do get real, accurate, helpful results, 5% of the time it just pulls some shit out of its ass and hands it to you—and does so with extreme confidence and eloquence.
I find this combination extremely dangerous: it does exactly the right thing almost all of the time, and then slips in little landmines here and there for you to discover.