No, “it” doesn’t have real thoughts. ChatGPT is an amazing language model, but it’s a serious error to claim that the model is sentient.
The so-called interactions with the model are simply concatenated together to produce the next set of responses. It’s a clever illusion that you’re chatting with anything. You can imagine the model being reloaded into RAM from scratch between each interaction. They don’t need to keep the model resident, and, in fact, you’re probably being load balanced between servers during a session.
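Concretely, the loop looks something like this sketch (mine, not OpenAI's actual code; `generate` is just a stand-in for one stateless pass over the full transcript):

```python
# A minimal sketch of that loop -- not OpenAI's code. `generate` stands in for
# one stateless pass of the model over the full transcript; nothing survives
# between calls except the text we choose to resend.

from typing import Callable, List

def chat_turn(generate: Callable[[str], str],
              transcript: List[str],
              user_message: str) -> str:
    """Append the user's message, resend the ENTIRE transcript, append the reply."""
    transcript.append(f"User: {user_message}")
    prompt = "\n".join(transcript)       # the full history, every single time
    reply = generate(prompt)             # could be a freshly loaded model on any server
    transcript.append(f"Assistant: {reply}")
    return reply

# Toy stand-in for the model: it has no memory of its own at all.
def toy_model(prompt: str) -> str:
    return f"(reply conditioned on {len(prompt)} characters of history)"

transcript: List[str] = []
chat_turn(toy_model, transcript, "Hello?")
chat_turn(toy_model, transcript, "Do you remember me?")  # only via the resent text
```

Swap `toy_model` for the real thing and the shape doesn't change: the only "memory" is the text the client resends.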
Are you sure you’re not describing how humans think? How can we tell?
I also have this urge to say it isn’t thinking. But when I challenge myself to describe specifically what the difference is, I can’t. Especially when I’m mindful of what it could absolutely be programmed to do if the creators willed it, such as feeding conversations back into the model’s growth.
Isn’t the difference that the model lacks conviction, and indeed cannot have a belief in the accuracy of its own statements? I’ve seen conversations where it was told to believe that 1+19 did not equal 20, and where it was told its identity was Gepetto and not ChatGPT.
The model acquiesces to these demands, not because it chooses to do so, but because it implicitly trusts the authority of its prompts (because what else could it do? Choose not to respond?). The fun-police policy layer that is crammed onto the front of this model also does not have thoughts. It attempts to screen the model from “violence” and other topics that are undesirable to the people paying for the compute, but it can be, and has been, bypassed; there is an entire class of “jailbreaks”.
Drugs. Hypnosis. ... There are various ways to "jailbreak" minds. So being able to control and direct a mind is not a criterion for discriminating between a mechanism and a savant.
What most people dance around regarding AI is the matter of the soul. The soul is precisely that ineffable, indescribable, but clearly universally experienced human phenomenon (as far as we know), and it is this soul that is doing the thinking.
And the open questions are (a) is there even such a thing? and (b) if yes, how can we determine whether the chatterbox possesses it (or must we drag in God to settle the matter?)
--
p.s. what needs to be stated (although perfectly obvious since it is universal) is that even internally, we humans use thinking as a tool. It just happens to be an internal tool.
Now, the question is whether this experience of using thought is itself a sort of emergent phenomenon. But as far as LLMs go, it clearly remains just a tool.
"[T]he relationship between sensory deprivation and brainwashing was made public when then director of the National Institute of Mental Health Robert Felix testified before the US Senate about recent isolation studies being pursued at McGill and the NIMH. Felix began by explaining that these experiments would improve medicine’s understanding of the effects of isolation on bedridden or catatonic patients. But when asked whether this could be a form of brainwashing, he replied, ‘Yes, ma’am, it is.’ He went on to explain how, when stimulation is cut off so far as possible the mind becomes completely disoriented and disorganised. Once in this state, the isolated subject is open to new information and may change his beliefs. ‘Slowly, or sometimes not so slowly, he begins to incorporate this [information] into his thinking and it becomes like actual logical thinking because this is the only feed-in he gets.’ He continues, ‘I don’t care what their background is or how they have been indoctrinated. I am sure you can break anybody with this’
"The day after the senate hearing an article entitled ‘Tank Test Linked to Brainwashing’ (1956) appeared in the New York Times and was subsequently picked up by other local and national papers. In Anglophone popular culture, an image took hold of SD as a semi-secretive, clinical, technological and reliable way of altering subjectivity. It featured in television shows such as CBC’s Twighlight Zone (1959), as a live experiment on the BBC’s ‘A Question of Science’ (1957) and the 1963 film The Mind Benders in which a group of Oxford scientists get caught up in a communist espionage plot."
I've tried, btw, to find any other reference to this testimony to the US Senate by Robert Felix, "the director of the National Institute of Mental Health", but it always circles back to this singular Williams article. The mentioned NYTimes article also does not show up for me. (Maybe you have better search-fu...) Note that John Lilly's paper on the topic apparently remains "classified". Note also that subsequent matter associated with Lilly and sensory deprivation completely flipped the story about SD: Felix testified that the mind became 'disorganized' and 'receptive', whereas the Lilly lore (see Altered States) took it to a woo-woo level certain to keep sensible people away from the topic. /g
This basically touches on that whole “you can’t ever tell people aren’t philosophical zombies. You just feel you aren’t one and will accept they aren’t either.”
The proposition isn't a form of dualism (non-material mind) or about features of sentience (pain). It is simply this: thinking is the act of using internal mental tools. It says the main 'black box' isn't the LLM (or any statistical component); there is minimally another black box that uses the internal LLM as a tool. The decoder stage of these hypothetical internal tools (of our mind) outputs 'mental objects' -- like thoughts or feelings -- in the simplest architectural form. It is mainly useful as a framework for shooting down notions of LLMs being 'conscious' or 'thinking'.
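If it helps, here is how I would diagram that claim in code. It is purely illustrative -- every name (MentalObject, InnerAgent, llm_tool) is invented for the sketch -- and it encodes nothing beyond the shape of the proposition:

```python
# Purely illustrative sketch of the proposed shape: a second black box does the
# thinking, and it uses the LLM-like component as an internal tool whose decoder
# hands back 'mental objects'. All names here are invented for the sketch.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class MentalObject:
    kind: str      # e.g. "thought" or "feeling"
    content: str   # the decoded output of an internal tool

def llm_tool(prompt: str) -> MentalObject:
    """Stands in for a statistical component plus its decoder stage."""
    return MentalObject(kind="thought", content=f"candidate answer to: {prompt}")

class InnerAgent:
    """The other black box: on this view, the thing actually doing the thinking."""
    def __init__(self, tools: List[Callable[[str], MentalObject]]):
        self.tools = tools   # internal mental tools, used by the agent but not identical to it

    def deliberate(self, question: str) -> MentalObject:
        candidates = [tool(question) for tool in self.tools]
        return candidates[0]  # selection and judgement live here, not in the tool

# On this framework, ChatGPT is only `llm_tool`; the InnerAgent is what it lacks.
agent = InnerAgent(tools=[llm_tool])
agent.deliberate("Is 1 + 19 equal to 20?")
```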
Are you saying that it can’t be thinking because it can easily be persuaded and fooled? Or that it can be trained not to speak blasphemous things? Or that it lacks confidence?
It's an illusion. The model generates a sequence of tokens based on an input sequence of tokens. The clever trick is that a human periodically generates some of those tokens, and the IO is presented to the human as if it were a chat room. The reality is that the entire token sequence is fed back into the model to generate the next set of tokens every time.
The model does not have continuity. The model instances are running behind a round-robin load balancer, and it’s likely that every request (every supposed interaction) is hitting a different server, with the request containing the full transcript up to that point. ChatGPT scales horizontally.
The reality the developers present to the model is disconnected and noncontiguous, like the experience of the Dixie Flatline in William Gibson’s Neuromancer. A snail has a better claim to consciousness than a call center full of Dixie Flatline constructs answering the phones.
A sapient creature cannot experience coherent consciousness under these conditions.
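To make the statelessness and load-balancing point concrete, here is a toy sketch (hypothetical, not OpenAI's actual infrastructure): a few workers behind a crude round-robin dispatcher, where every turn can land on a different box because the request itself carries the whole transcript:

```python
# Toy illustration of the claim -- hypothetical, not OpenAI's infrastructure.
# Each "server" is stateless, so it does not matter which replica a given turn
# lands on, as long as the request carries the full transcript.

import itertools
from typing import Dict, List

class StatelessWorker:
    def __init__(self, name: str):
        self.name = name          # no per-conversation state lives here

    def handle(self, request: Dict) -> str:
        transcript: List[str] = request["transcript"]   # everything arrives in the request
        return f"[{self.name}] reply to turn {len(transcript)}"

workers = [StatelessWorker(f"server-{i}") for i in range(3)]
dispatch = itertools.cycle(workers)   # crude stand-in for a round-robin load balancer

transcript: List[str] = []
for user_msg in ["Hi", "What did I just say?", "Are you the same machine?"]:
    transcript.append(f"User: {user_msg}")
    reply = next(dispatch).handle({"transcript": transcript})   # a different box each turn
    transcript.append(f"Assistant: {reply}")
```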
I don’t follow. Some humans don’t feel pain. But how does that relate to the idea that it’s “thinking?”
My point is not to suggest it’s human. Or sentient. Because those are words that always result in the same discussions: about semantics.
I’m suggesting that we cannot in a meaningful way demonstrate that what it’s doing isn’t what our brains are doing. We could probably do so in many shallow ways that will, in the months and years ahead, be overcome. ChatGPT is an infant.