There is a possibility that everything we output is merely a regurgitation of thousands of human answers, ideas, and thoughts we have encountered before. Including this comment of yours. And mine.
There's this new trend of what I'll dub techno-nihilism, which is essentially a counterargument to the stochastic parrot argument. The former goes: "Well, what if WE are stochastic parrots? After all, that's how we learn, right?" Well, yes, but actually no.
It's trivially false because ChatGPT was trained on something (in this case, Stack Overflow), which, in turn, was trained on something else (maybe a book), and so on. So knowledge, imagination, and genuine creativity must exist somewhere down that chain. Everything can't just be repeating what was learned prior ad infinitum, or we'd have nothing new. Ironically, even the development of large language models is an exercise in creativity.
Oh, it's not an "interesting idea" that we're teetering on the edge of not being able to differentiate between machine-generated and human-generated content. It does seem that our AI learning models simply mimic our own thinking processes, absorbing and combining experiences to create results that sometimes outshine their origins.
The real question that'll soon dominate is, "How can we even tell the difference?"
At the same time, I think you may be overvaluing novel answers and undervaluing the importance of restating known answers in a way that the user asking the question understands.
For example, there is no intrinsic value in something new. If I take a new solution to a problem and lock it in a box, it has zero value. It is not improving anything.
Now, if I take a solution and present it to you in a manner that you can understand, that has an inherent value to the end user.
By this analogy, claiming that LLMs are useless because they only know what already exists is far too harsh a measure, because the vast volume of human output is rehashing what we already know.
I hate this sort of retort since it's fundamentally meaningless. All atoms that will ever exist were in the singularity and exploded during the Big Bang to encompass all of the universe, and so we are all just moving along. OK, so what?
The point of the retort is precisely to point out the meaninglessness when "regurgitation" is extended to include systems with some degree of generalization, extrapolation, or discovery capability. The expectation set on AI seems to be to break information theory: "it's not original because it can only draw a horse (more accurately than a random guess) due to having trained on information about a horse," and so on.
Is there a possibility that every idea necessary to put the James Webb Space Telescope in place and analyse the data it collects from the universe's earliest and most distant galaxies was already present thousands of years ago, in fact must have somehow been built into the first humans? I don't think there is.
Which version are you currently using? I used to have the same setup back in the days when 1Password was only a standalone app, before it became a SaaS as it is now.
Can you please explain how you have organized your current setup?
This makes me ponder how predictable we humans can be.
We tend to act similarly when placed in similar circumstances.
We think alike when presented with the same context.
We find patterns in things.
Etc, etc, etc.
And I'm sure I'm not the first person in this thread to have had this exact thought.
Will people coming in from Twitter and co. specifically to check in on Pelosi's plane have any interest in converting to subscribers for general flight tracking? I'd suspect the conversion rates would be (/are) extremely low with this kind of inorganic traffic.
It’s an interesting idea, and I have a feeling I’ve read somewhere about Google doing something similar, but these sorts of heuristics are a rabbit hole I’m trying not to go down at the moment—currently I have far more serious reasons why my ranking isn’t great, so I’m trying to prioritize those rather than getting distracted by whatever interesting algorithm I happen to bump into. (Definitely on my list to investigate in the future, though!)