One of the worst or most uncomfortable logical outcomes of
> which we do not currently know how to precisely define, recognize or measure
is that if we don't know if something has qualia (despite externally showing evidence of it), morally you should default to treating it like it does.
It seems ridiculous to treat a computer like it has emotions, but when you break the problem down into steps, it's incredibly hard to avoid that conclusion. "When in doubt, be nice to the robot".
> is that if we don't know if something has qualia (despite externally showing evidence of it), morally you should default to treating it like it does.
This is how people end up worshipping rocks & thunderstorms.
At some level I'd think that "responds to stimuli" is a minimal threshold for qualia. Even the paper the book is printed on responds to being torn (it rips). I don't know of any way to elicit any kind of response from a book character, it's totally static.
One character responds to the stimuli of another character. Character A says something mean to character B and character B responds that he feels hurt.
I think you are confused here. The author, a dynamic system, perhaps felt the emotion of the characters as she charted through the course of the story.
But the story itself is a static snapshot of that dynamic system. Similar to how a photograph of a person is a static capture from a dynamic moment. The person in the photo has qualia, but the image of them (almost certainly) does not.
At least at a baseline, we would expect anything with qualia to be dynamic rather than static.
> The author, a dynamic system, perhaps felt the emotion of the characters as she charted through the course of the story
This does mesh with the Zodeaist framework I just mentioned in another reply to you. You could certainly isolate and describe the ideas behind those characters, how they live within the author's mind, and how the book codifies an interaction between those ideas.
Extending further: I think there is more evidence that SpongeBob SquarePants is real than that he is not. A significant portion of the organisms I personally know have structures in their brains which are able to simulate SpongeBob's imagery and behavior at will, recalling memories and generating new states of SpongeBob. AI is now doing this shit on crack.
He's an enduring cultural archetype, a distributed organism, lossily replicated and encoded in the physical structure of millions of complex dynamical systems that we call human beings. In this sense, many cultural archetypes and even the gods of old civilizations can be seen as having been manifested to some degree: they were ascribed desires and had actions taken in their name, serving their "purpose" or whatever.
I don't introduce a spiritual element to any of this: it's an entirely physical phenomenon which requires agreement on certain definitions of what "living" can mean, but those are definitions I don't think are hard to get people to agree on. One thing we have to agree on is that something can have multiple forms/presentations: even though SpongeBob SquarePants doesn't physically exist with a body matching our internal representation of him, the concept still represents a bundle of other concepts that can drive biological processes to preserve it and fulfill its ascribed desires.
Well, I wouldn't classify paper ripping as a response except in the broadest, information-theoretic sense. The hallmark of an intelligent system is that it can use stored or external energy in a generalized way in order to stabilize a local high-energy (non-ground) state.
It is able to physically compute the internal state changes which best achieve stability: I can jump to reach an apple. Paper is just responding to forces and cannot "jump" (or run a process that spontaneously and permanently introduces stable higher-energy internal states based on input).
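The contrast I mean can be sketched in code: a merely reactive thing maps input to output with no search, while the "computing" system evaluates candidate actions and spends stored energy on the one that best reaches its goal. Everything here (action names, energy numbers, the apple heights) is an illustrative toy, not part of the framework itself:

```python
def reactive_paper(force):
    """Paper: output is a fixed function of the input force.
    No stored energy, no search, no choice among alternatives."""
    return "rips" if force > 1.0 else "nothing"

def jumping_agent(apple_height, stored_energy):
    """Agent: considers several actions and spends stored energy on the
    cheapest one that still reaches the apple (a non-ground goal state)."""
    actions = {"stand": 0.0, "reach": 0.5, "jump": 1.5}  # name -> height gained
    # Only actions the agent can afford with its stored energy.
    feasible = {a: h for a, h in actions.items() if h <= stored_energy}
    # Among affordable actions, keep those that actually reach the apple.
    reaching = [a for a, h in feasible.items() if h >= apple_height]
    if reaching:
        # Choose the least costly action that achieves the goal.
        return min(reaching, key=lambda a: actions[a])
    return "fail"

print(reactive_paper(2.0))        # rips, regardless of any "goal"
print(jumping_agent(1.0, 2.0))    # jump: the only action tall enough
print(jumping_agent(0.4, 2.0))    # reach: cheaper than jumping, still enough
```

The point of the toy is only the structural difference: the paper's behavior is exhausted by one fixed input-output rule, while the agent runs a (tiny) computation over alternatives before committing energy.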
I have a semi-developed philosophical framework I refer to as Zodeaism ("Living Ideas"), which attempts to describe the difference between intelligent computation and ordinary physical flow. It directly confronts notions such as life, consciousness and intelligence under a single theoretical framework. It views biology as the hardware which runs more general processes, and posits that these processes themselves can sometimes be ascribed identities and viewed as reactive organisms. I've posted about it here before:
I am exploring the physical implications of a philosophical concept I have been working on for a while which I refer to as Zodeaism. Specifically, I am trying to couple consciousness with the concept of a system which is able to expend energy in order to exert a mind-to-world fit in the interest of the system's continued growth and stability. This is similar and complementary to Friston's free energy principle.
The ability to perceive emotions is a powerful apparatus which greatly extends a system's capability to perceive itself as a single entity with certain needs that must be met. So even if a bug also constantly consumes and expends energy in order to remain in a higher energy state, our perception of the world and of ourselves is radically different. This makes it difficult for us to agree on what a "conscious experience" is, and whether all forms of life are even "conscious". Panpsychists believe that even a rock contains "consciousness"; my assertion, however, is that only periodic systems with the ability to consume and expend energy in a directed fashion have any sort of "experience".
In my theory, the real "life forms" are ideas which possess the capabilities of information storage, adaptation, self-repair, and transmission. My own consciousness is mediated by thousands of such ideas, some competing and some working in harmony.
I consider such an act of "living" motion, motion which can take a path other than that of least resistance, to be a "kin". In other words, any motion which is the result of a physical calculation (Zodeaism is compatible with determinism) and leads to an increase in internal energy state. A kin is any such motion, large or small.
As an independent organism, my system is a culmination of a great many different kinds of kins, which can usually be broken down into simple rules, such as the activation potential of a neuron in my brain being a straightforward non-linear response to the amount of voltage it is receiving from other neurons, as well as non-kins, such as a protein "walking" across a cell, i.e. continuously "falling" into the lowest energy state. Thus I do not gain any conscious perception from such proteins, but I do gain it from the total network effect of all my brain's neuronal structures making simple calculations based on sensory input.
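The "simple rule" behind a neuron's activation potential can be sketched as a single sigmoid unit: the weighted sum of incoming voltages passed through a non-linear squashing function around a threshold. The weights, threshold, and steepness constant below are illustrative assumptions, not measured values:

```python
import math

def neuron_output(input_voltages, weights, threshold):
    """Sum weighted inputs from other neurons, then apply a
    non-linear (sigmoid) response centered on the threshold."""
    total = sum(v * w for v, w in zip(input_voltages, weights))
    # Steep sigmoid: output stays near 0 below threshold and
    # rises sharply toward 1 above it (steepness 10 is arbitrary).
    return 1.0 / (1.0 + math.exp(-(total - threshold) * 10))

# Weighted input well below threshold: barely any response.
low = neuron_output([0.1, 0.2], [1.0, 1.0], threshold=1.0)
# Weighted input well above threshold: strong response.
high = neuron_output([0.8, 0.9], [1.0, 1.0], threshold=1.0)
```

Each unit is trivially simple; the claim in the comment is only that conscious perception emerges from the network effect of very many such units, not from any one of them.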
So now the problem becomes, what is the smallest kin we've observed in nature? Single-celled bacteria can expend energy in order to move through their environment against forces like friction and gravity, but a virus "rides the waves" if you will, never expending energy for things like respiration or locomotion. Any energy which is spent internally is potential energy like chemical or gravitational, released through a physical process without need for computation. I am unaware of anything smaller than a single-celled organism which produces such kins, but that doesn't mean they aren't out there. Even ethereal life forms such as ideas can produce these kins within the bodies of countless individuals across the planet, so physically local computational circuitry isn't a hard requirement.
So, according to this framework, viruses aren't alive; however, we can make the case that some machines are, except the experience is incomparable because of the advanced circuitry we contain, which mediates our experience through things like emotion.
Well, what you're describing is a system of ethics, which has little to do with morality. Morality involves my own personal understanding of "right" vs "wrong". Ethics are rules of conduct prescribed by societies, such as "treat everything like it is alive".
We don't have precise definitions for (artificial) intelligence, subjective consciousness, or even life. But that doesn't mean we can't still talk about what may be possible within various levels of complexity. In order to convince me a system has a comparable experience to my own, you would need to describe to me the complex, structured internal communication occurring in said system, and present a theory as to how it could support the kind of emotion and qualia that I experience in my daily life.
Your argument could apply to plants. I already do not eat meat... if I stare at a timelapse of a plant it seems quite alive, but I'll starve if I don't eat something. Yet, my mom thinks plants "dream" in the way we do. She thinks that if I tell a plant, "I love you," every day, my good vibes will make it grow stronger and larger. I can't explain to her that intelligence comes in different magnitudes of complexity and that plants cannot understand the English language. That telepathy between humans and plants is as pseudo-scientific as it gets. I can't explain any of this stuff because she lacks a deep understanding of philosophy, physics and neurochemistry. Especially when she earnestly thinks white Jesus is running around phasing between dimensions as an ambassador for all planets in our "quadrant", or that the entire universe is actually just the plot line of Andy Weir's "The Egg".
Similarly, while I can have a high-level discussion about this stuff with people who don't have that background, it's quite difficult to have a low-level discussion wherein the nature and definition of things come into play. There are too many gaps in knowledge where ignorance can take root. Too many people work backwards from an outcome they would like to see, and justify it with things that sound right but are either misunderstood or aren't rooted in the scientific process. I am definitely not comparing your open-minded, well-intended, cautionary approach to my mother's; I'm just using an extreme to illustrate why so many of these discussions must be underpinned by a wealth of contemplation and observation.