
ChatGPT is already pretty good at "admitting" it's wrong when it's given the actual facts, so it seems likely that providing it with a way to, e.g., look up trusted sources and asking it to take those sources into consideration would improve things.
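To make that concrete, here's a minimal sketch of the idea, assuming an OpenAI-style chat API; fetch_trusted_source() and the model name are hypothetical placeholders, not anything from the thread:

    # Minimal sketch: ground the model's answer in a retrieved source,
    # then instruct it to defer to that source over its prior belief.
    # fetch_trusted_source() is a hypothetical stand-in for a real
    # lookup (search API, curated corpus, internal knowledge base).
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def fetch_trusted_source(question: str) -> str:
        # Placeholder: in practice, query whatever trusted reference
        # you actually have for this question.
        raise NotImplementedError

    def grounded_answer(question: str) -> str:
        source = fetch_trusted_source(question)
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption; any chat model works
            messages=[
                {"role": "system",
                 "content": "Answer using ONLY the source below. "
                            "If the source contradicts what you "
                            "previously believed, defer to the source "
                            "and say so.\n\nSource:\n" + source},
                {"role": "user", "content": question},
            ],
        )
        return resp.choices[0].message.content

Note that this only ties the answer to the retrieved source; if the source itself is wrong, the answer will be confidently wrong too, which is exactly the gap the reply below points at.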


I think that helps with "hallucination" but less so with "factuality". Re-reading the parent discussion, I see the conversation swerved a bit between those two, and I think that will be an increasingly important distinction going forward.

Confirming its output against a (potentially wrong) source helps with the former but not the latter.



