Hacker News

I was in the middle of typing the same question. This is the part that worries me about generative AI: far too many people seem to have forgotten that it's prone to confabulation and to telling the user what they want to hear.


Sure, but if the LLM tells you the jump from step 2 to 3 in a calculus problem is an application of l'Hôpital's rule, you should be able to figure out pretty quickly whether that's a red herring.
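(To make that concrete with a standard textbook example, not one from the thread: l'Hôpital's rule is easy to verify by hand. For a 0/0 indeterminate form you differentiate numerator and denominator separately:

    \lim_{x \to 0} \frac{\sin x}{x}
    \;\overset{0/0}{=}\;
    \lim_{x \to 0} \frac{\cos x}{1}
    \;=\; 1

If the step the LLM labels "l'Hôpital" isn't actually a 0/0 or ∞/∞ form, the label is a red herring, and checking that precondition takes seconds.)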




