Hacker News

We really need to work on popularizing better, non-anthropomorphic terms for LLMs, since they don't really have "thoughts" the way people do. Such terms make people more susceptible to magical thinking.
Very much in support of this. The use of anthropomorphic or even biological terms is entirely misguided. All it does is drive a narrative that very much belittles natural intelligence.

Could you argue why they don't? And could you also argue why we do?

When a car moves over the ground, we do not call that running, we call that driving, so as not to confuse the mechanism with the output.

Both running and driving are moving over the ground but with entirely different mechanisms.

I imagine saying the LLM has thoughts is like pretending the car has wheels for legs and is running over the ground. It is not completely wrong, but it is misleading and imprecise.


Planes fly, birds fly. They use related, but ultimately quite different mechanisms to do so. Yet we call both flying.

To fly means "to soar through air; move through the air with wings" (etymonline)

That is a pretty accurate description of what both planes and birds do.

To plan means "to reason with intent".

That is very much not what LLMs do, and the paper does not provide evidence to the contrary. Yet it uses the term to lend credence to its rather speculative interpretation of observed correlation as causation.

Interestingly enough, the paper offers no definition of the term, which would at least help clarify what the authors actually mean.

I would be more inclined to take a positive stance toward the paper if it used more appropriate terms, such as calling observed correlations just that. Granted, that would possibly make for a much less fancy title.


Yes. Simply and well put.


