It seems that the only barrier between brain states and thought contents is a proper measurement tool and decoder, no?
We can already do this at an extremely basic level, mapping brain states to thoughts: the paraplegic patient using their thoughts to move a mouse cursor, or the neuroscientist mapping stress to brain patterns.
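To make that concrete, here is a minimal sketch of what such a decoder looks like, using scikit-learn on synthetic data. The channel count, effect sizes, and labels are all invented for illustration; real BCI decoders work on recorded spike rates or EEG band power, but the pipeline has the same shape: measure a brain state, fit a map from state to intention.

    # Toy brain-state decoder on synthetic data (illustration only).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Fake dataset: 400 trials over 64 "channels". Each intended cursor
    # direction (0 = left, 1 = right) shifts the mean activity of the
    # channels; noise swamps any single channel but not all 64 together.
    n_trials, n_channels = 400, 64
    labels = rng.integers(0, 2, size=n_trials)
    signal = np.outer(labels, np.linspace(0.5, 1.0, n_channels))
    features = signal + rng.normal(scale=1.0, size=(n_trials, n_channels))

    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.25, random_state=0
    )

    # A linear decoder: brain state in, inferred intention out.
    decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"decoding accuracy: {decoder.score(X_test, y_test):.2f}")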
If I am understanding your position correctly, it seems that the differentiation between thoughts and brain states is a practical problem, not a fundamental one. Ironically, LLMs have a very similar problem: it is very difficult to correlate model states with model outputs. [1]
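The crudest version of "correlating model states with model outputs" is a linear probe on hidden activations. The sketch below uses GPT-2 via Hugging Face transformers plus scikit-learn; the probed property ("does the sentence mention an animal?") and the six sentences are my own toy example, not the dictionary-learning method described in [1].

    # Toy linear probe on GPT-2 hidden states (illustration only).
    import torch
    from sklearn.linear_model import LogisticRegression
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModel.from_pretrained("gpt2")

    # Hand-labeled toy data: 1 = mentions an animal, 0 = does not.
    sentences = [
        ("The tiger prowled through the jungle.", 1),
        ("A lion slept in the shade.", 1),
        ("An elephant crossed the river.", 1),
        ("The committee approved the budget.", 0),
        ("Interest rates rose again today.", 0),
        ("The printer is out of toner.", 0),
    ]

    def last_token_state(text):
        # Use the final hidden state of the last token as a crude
        # summary of the model's internal state for this input.
        inputs = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            out = model(**inputs, output_hidden_states=True)
        return out.hidden_states[-1][0, -1].numpy()

    X = [last_token_state(s) for s, _ in sentences]
    y = [label for _, label in sentences]

    # Does the hidden state linearly encode "animal-ness"? With six
    # examples this trivially overfits; it shows the shape of the
    # technique, not evidence about what GPT-2 represents.
    probe = LogisticRegression(max_iter=1000).fit(X, y)
    print("training accuracy:", probe.score(X, y))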
There is undoubtedly correlation between neurological state and thought content. But they are not the same thing. Even if, theoretically, one could map them perfectly (which I doubt is possible, but it doesn't affect my point), they would remain entirely different things.
The thought that "2+2=4", or the thought "tiger", are not the same thing as the brain states that make them up. A tiger, or the thought of a tiger, is different from the neurological state of a brain that is thinking about a tiger. And as stated before, we can't say that "2+2=4" is correct by referring to the brain state associated with it. We need to refer to the thought itself to do this. It is not a practical problem of mapping; it is that brain states and thoughts are two entirely different things, however much they may correlate, and whatever causal links may exist between them.
This is not the case for LLMs. Whatever problems we may have in recording the state of the CPUs/GPUs are entirely practical. There is no 'thought' in an LLM, just a state (or plurality of states). An LLM can't think about a tiger. It can only switch on LEDs on a screen in such a way that we associate the image/word with a tiger.
> The thought that "2+2=4", or the thought "tiger", are not the same thing as the brain states that make them up.
Asserted without evidence. Yes, this does represent a long and occasionally distinguished line of thinking in cognitive science/philosophy of mind, but it is certainly not the only one, and some of the others categorically refute this.
Is it your contention that a tiger may be the same thing as a brain state?
It would seem to me that any coherent philosophy of mind must accept their being different as a datum; or conversely, any that implied their not being different would have to be false.
EDIT: my position has been held -- even taken as axiomatic -- by the vast majority of philosophers, from the pre-Socratics onwards, and into the 20th century. So it's not some idiosyncratic minority position.
No. One is paint on canvas, and the other is part of a causal chain that makes LEDs light up in a certain way. Neither the painting nor the computer has thoughts about a tiger in the way we do. It is the human mind that makes the link between picture and real tiger (whether on canvas or on a screen).
[1] https://www.anthropic.com/research/mapping-mind-language-mod...