> What is stopping the LLM from fooling me in every field that I don't know anything about?

What's stopping any source from doing this? In the case of encyclopedias or news websites, I guess you could say reputation. But that's hardly reliable either.

So I guess that's my big pushback with regard to correctness complaints:

1) "Official" sources have always lied, or at least bent the truth, or at least pushed their subjective viewpoint. It has always been the readers job to think critically and cross-reference. On this point, LLM don't fundementally change anything, they just bring this long-running tension closer to our attention.

2) "LLM are inaccurate". Ok, inaccurate compared to what? Academic journals (see the reproducibility crisis), encylopedias (they tend to be accurate by nature of leaving out contentious facts), journalists (big laugh from me)?

I think outright hallucinations are a valid concern to bring up, but I would refer to my point about cross-referencing. I'd also point out that a person can read a perfectly accurate encyclopedia article and still, through motivated reasoning, come away remembering things it never said. Is the end result (what the person thinks and remembers) really different between LLMs and traditional sources? This seems like more of a human problem than an LLM problem.

Do we have any factual evidence showing that people who learn via LLMs plus traditional methods are actually less informed than people who learn from traditional methods alone? Right now there's a lot of fear mongering and charged rhetoric, and not a lot of facts and studies. The burden of proof needs to fall on the people who want to regulate and restrict access to these models.

Finally, what I think this is really about is the continued transition from the "industrial age" to the "information age". 100 years ago, your average person wasn't expected, or really even able, to "do their own research", and instead relied on top-down, elite-driven institutions to disseminate their version of the information. De-industrialization, then the internet, then social media, and now LLMs are cracking this (now) outdated social order, and our old elites are understandably threatened.

I think this is another reason why we need to make this debate more rigorous and fact-based: are LLMs actually dangerous, or is this just an example of elite preference?




LLMs hallucinate so often when I ask them about software development issues that I don't even bother to ask them about areas I'm unfamiliar with.

I think the major difference that makes this not just "elite preference" is that LLM output is just confidently wrong. If I read a report from an elite institution, I'm at least reasonably confident that they're not going to base their entire argument on the premise that 2+2=5. Someone is going to call them on it, there may be reputational damage, and there may even be legal repercussions in certain cases.

LLMs have no such protections. GPT recently, when asked to "write a function using the Elixir programming language that...", wrote a function in Elixir syntax using Python libraries and function names. That's a class of error that makes it actually dangerous if you're asking about anything you can't fully check on your own, and there are no checks and balances for content that's only ever visible to a single user.
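To make that concrete, here's a rough sketch of the contrast. The file-reading task below is invented purely for illustration (the actual prompt isn't quoted in full above); the point is that the hallucinated version borrows Python's API wholesale:

    defmodule Example do
      # Idiomatic Elixir: read a file and upcase its contents using only the standard library.
      def shout(path) do
        path
        |> File.read!()
        |> String.upcase()
      end

      # The error class described above looks like Elixir syntax wrapped around
      # Python's API instead, e.g. `open(path).read().upper()` or `json.loads(body)`,
      # calls that simply don't exist in Elixir and won't compile.
    end

If you don't already know Elixir, the second kind of output can look perfectly plausible, which is exactly the problem.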

I agree with you that "official" sources can't be trusted and there are inaccuracies everywhere, but you have to admit that the craziest sources who say the craziest things (flat-earthers and such for example) get pretty easily dismissed and everything else they say becomes suspect. LLM's bypass that and can say the most outrageous things without ever getting caught or being forced to make a retraction/correction. That feels like a problem worth worrying about.



