I tend to find Searle's discussions of AI overly simplistic, and I find that scenario highly unlikely. Regardless of how you interpret the research on the neuroscience of where decisions to act originate, it is clear that the conscious mind is extremely good at taking full credit for events in which it had no part, or only a partial one.
Such an "AI parasite" would have plenty of machinery built into the brain to exploit in order to take full control without causing dissonance in its host. Not only would this likely be the path of least resistance; continued dissonance might also cause enough mental instability in the host to disrupt some emergent balance in the brain. That would be a net negative for the parasitic AI, motivating it not to make the extra effort required to leave the human suffering such a disconnect.
The question then is whether thinking of the AI as an other is valid, or whether we should instead expand the notion of personal agency to accommodate such symbiotes.
http://en.wikipedia.org/wiki/Neuroscience_of_free_will#Manip...
http://en.wikipedia.org/wiki/Neuroscience_of_free_will#The_p...