* Insincerity. I would prefer a disclaimer that you posted AI-generated content over your presenting it as your own words.
* Imbalance of effort. If you didn't make the effort to write to me in your own words, then you are wasting my effort in reading what your AI assistant wrote.
* We have access to the same AI assistants. Don't try to sell me your AI assistant's "insights" as your own insights. I can interact with the same AI assistant to gather the same insights.
Notice that the quality of the AI output is mostly irrelevant to these points. If you have good-quality AI outputs, you are still welcome to share them with me, provided you are upfront that they are AI generated.
> We have access to the same AI assistants. Don't try to sell me your AI assistant's "insights" as your own insights. I can interact with the same AI assistant to gather the same insights.
With AI, the insights (or "insights") depend on what questions you ask, and what you know to push back on. We all have access to the same IDEs, but we don't write the same code.
I don't think AI output on some factual topic is comparable to distinct things written with IDEs.
On a given topic, I have always found that AI converges on the average talking points of that topic, and you really can't cleverly get more out of it, because that's all it "knows" (i.e., pushing back yields either variations on a theme or hallucinations). This is logical, given that the method amounts to producing the "average expected reply".
Genericness is overwhelmingly a product of RLHF rather than an innate property of LLMs. A lot of manual fine-tuning has gone into ChatGPT and Gemini to make them capable of churning out homework and marketing blogs without ever saying anything offensive.
If you make requests to the Sonnet 3.5 or DeepSeek-R1 APIs and turn up the temperature a little bit, you will get radically more interesting outputs.
Isn’t that still pulling from the same distribution, just with a larger standard deviation? So it still only covers a small part of the search space. I think the problem here is that generators are not using novel distributions; they’re still sampling from the same population (existing written works).
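The "same distribution, larger standard deviation" point can be made concrete. Here is a minimal sketch of standard temperature sampling; the logits are made-up values for illustration only. Dividing the logits by the temperature flattens or sharpens the resulting softmax, but it never introduces tokens the model didn't already rank: the support of the distribution is unchanged.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then normalize (numerically stable softmax)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max before exp to avoid overflow
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits, for illustration only.
logits = [4.0, 2.0, 1.0, 0.5]

low_t = softmax_with_temperature(logits, 0.7)   # sharper: mass concentrates on the top token
high_t = softmax_with_temperature(logits, 1.5)  # flatter: more mass spread onto the tail

# Higher temperature flattens the distribution, but the candidate set
# and its ranking are identical: no new tokens appear.
print(low_t)
print(high_t)
```

Raising the temperature redistributes probability mass toward lower-ranked continuations, which is why outputs get "more interesting", but everything sampled still comes from what the model learned from its training population.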
> With AI, the insights (or "insights") depend on what questions you ask
Which is an interesting place to put the human. You can be fooled into thinking that your question was unique and special just because it led some black box to generate slop that looks like it has insights.
This explains why we have people proudly coming in and posting the output they got their favorite black box to generate.
> With AI, the insights (or "insights") depend on what questions you ask, and what you know to push back on. We all have access to the same IDEs, but we don't write the same code.
This is also true if you don't have "AI" but are simply reading sources yourself.
Is AI going to help you realize you need to push back on something you wouldn't have pushed back on without it?
Claude makes a genuine effort to encourage the user to push back. The reason for this becomes apparent when you look at the system prompts:
"Claude is intellectually curious. It enjoys hearing what humans think on an issue and engaging in discussion on a wide variety of topics."
"Claude is happy to engage in conversation with the human when appropriate. Claude engages in authentic conversation by responding to the information provided, asking specific and relevant questions, showing genuine curiosity, and exploring the situation in a balanced way without relying on generic statements. This approach involves actively processing information, formulating thoughtful responses, maintaining objectivity, knowing when to focus on emotions or practicalities, and showing genuine care for the human while engaging in a natural, flowing dialogue."
Not knowing exactly how or what the other person asked their AI is one of the reasons I downvote all AI slop, even posts disclosed as AI generated. Asking in different ways can often generate radically different answers. Even if the prompt is disclosed, how do I know that was the real prompt? I would have to go interrogate the AI myself to see if I get something similar, as well as formulate my own prompts from different angles to see how much the answers change. And if I have to put in all that effort myself, then what is the value of the original slop post?
> With AI, the insights (or "insights") depend on what questions you ask, and what you know to push back on. We all have access to the same IDEs, but we don't write the same code
Yeah no, an AI is not going to give you a brilliant answer because you wrote such a brilliant prompt; you just wrote a different question and got a different answer. If I type something into Google I don't get the same result as when you type something into Google. Why? Because we're not asking the same damn questions.
While I also agree with the sentiment that it's not the same, I think it's interesting that you use "googling" as a comparison.
Googling and extracting the right information efficiently is clearly a skill, and people use it in wildly different (and often inefficient or bad) ways. That might be less of an issue with your average HN user, but in the real world, people are bad at using Google.