
I think LLMs have revealed that we have at least two ways of thinking.

One is analogous to how LLMs operate and is essentially next token prediction. Think of your mental state when you're very engrossed in conversation. It's not a truly conscious act. Sometimes you even surprise yourself. If you're a child, or childish, you may just confabulate things in the moment, maybe without even realizing you're doing it.

Now think of some very difficult problem you’ve had to solve. It’s not the same, right? It’s a very conscious act, directing your focus here and there, trying to reason on how everything fits together and what you might change to fix your problem. Odds are good that you’re not even using language to model the problem in your head.

LLMs are doing the first thing, and are exceptionally good at it even at this early stage. The surprising thing to me is how far this can get you. If you have an inhuman level of knowledge to work from, then in conversation mode you can actually solve some moderately difficult problems.
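
Since "next token prediction" keeps coming up, here is a deliberately toy sketch of what that loop looks like. It uses a made-up bigram table instead of a real model (actual LLMs condition a neural network on the whole context), but the generate-one-token-and-feed-it-back shape is the same; all names and the tiny corpus are invented for illustration.

    # Toy illustration of next-token prediction: pick the most likely next word
    # given the previous one, then feed the output back in. Real LLMs use a
    # learned network over the full context, but the loop has the same shape.
    from collections import defaultdict

    bigram_counts = defaultdict(lambda: defaultdict(int))
    corpus = "the cat sat on the mat and the cat slept".split()
    for prev, nxt in zip(corpus, corpus[1:]):
        bigram_counts[prev][nxt] += 1

    def next_token(prev: str) -> str:
        """Greedily return the most frequent continuation of `prev`."""
        candidates = bigram_counts[prev]
        return max(candidates, key=candidates.get) if candidates else "<eos>"

    token = "the"
    output = [token]
    for _ in range(5):                    # generate five more tokens
        token = next_token(token)
        if token == "<eos>":
            break
        output.append(token)

    print(" ".join(output))               # -> "the cat sat on the cat"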

I think that maps to our own experiences as well. For the things that you have deep knowledge on you will sometimes find yourself solving a problem just by constructing sentences.




Nice analogy between the output of LLMs and stream-of-thought conversation, wherein statements are made off the cuff and possibly confabulated, as opposed to correlating statements with knowledge structures.

Knowledge structures as we construct them are symbolic, with symbols representing abstractions (i.e., classes), along with relations between these symbols. Human ingenuity consists of coming up with new symbols or new relations between existing symbols (a process of abduction) based on new perceptual inputs (either our senses or our instruments). Such knowledge structures are powerful because they allow us to build giant towers on solid foundations.
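
To make the "symbols plus relations" picture concrete, here is a minimal sketch under my own assumptions: symbols as node names, relations as labeled edges, with a new relation added the way new perceptual inputs are described above. The names and facts are invented purely for illustration, not a claim about how such structures are actually built.

    # Symbols are node names; relations are labeled edges between them.
    # Adding a new symbol or a new edge is the "new relation" step above.
    from collections import defaultdict

    relations = defaultdict(set)          # (subject, relation) -> set of objects

    def assert_fact(subject: str, relation: str, obj: str) -> None:
        relations[(subject, relation)].add(obj)

    # an existing "tower" of abstractions
    assert_fact("whale", "is_a", "mammal")
    assert_fact("mammal", "is_a", "animal")

    # a hypothetical new relation introduced after a new observation
    assert_fact("whale", "lives_in", "ocean")

    def is_a_chain(symbol: str, target: str) -> bool:
        """Follow is_a edges upward to see whether `symbol` inherits from `target`."""
        if target in relations[(symbol, "is_a")]:
            return True
        return any(is_a_chain(parent, target) for parent in relations[(symbol, "is_a")])

    print(is_a_chain("whale", "animal"))  # True: whale -> mammal -> animal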

> For the things that you have deep knowledge on you will sometimes find yourself solving a problem just by constructing sentences.

This is another way of saying that you have clarity in that subject, so your stream of thought aligns with your knowledge structures - which is another way of saying that you really understand something. However, in my experience very few people are able to stay within their lanes (competence), and most of us tend to babble on topics we really don't have knowledge structures for. Also, few people have the self-awareness to recognize what they really have knowledge structures for (i.e., to know what they don't know).


This is the idea of the book "Thinking, Fast and Slow" by Daniel Kahneman. LLMs follow what he describes as System 1, the intuitive, heuristic thinking that gets you through most of the day, versus System 2, the rigorous, algorithmic thinking that you reserve for harder situations.


> For the things that you have deep knowledge on you will sometimes find yourself solving a problem just by constructing sentences.

That's what rubber duck debugging is, no?

LLMs are rubber ducks that can talk back, for better or worse.


> The surprising thing to me is how far this can get you.

There was that study a while ago that triggered the NPC meme. The common interpretation was that most people only think in the first way you described.


I dug around to find the study you're referring to (after a now-deleted comment asked about it). Is it this one?

https://hurlburt.faculty.unlv.edu/heavey-hurlburt-2008.pdf

If so, the interpretation you describe seems pretty far off base.

Also, my impression is that the meme was not sparked by a psychological study; rather, a few people drew on this study afterward to justify it.


Possibly, but that's further back than I thought it was. The trend I'm referring to started somewhere around when this was posted: https://www.cbc.ca/news/canada/saskatchewan/inner-monologue-...

And I thought what kicked it off was from only a few years before at most.


> The common interpretation was that most people only think in the first way you described.

Haha, from my own experience, this actually rings true for some “simple” people I know.



