Hacker News

you could argue that feelings are the same thing, just not words




That would be a silly argument because feelings involve qualia, which we do not currently know how to precisely define, recognize or measure. These qualia influence further perception and action.

Any relationship between certain words and a modified probabilistic outcome in current models is an artifact of the training corpus containing examples of these relationships.

I contend that modern models are absolutely capable of thinking, problem-solving, and expressing creativity, but for the time being LLMs do not run in any kind of sensory loop which could house qualia.


One of the worst or most uncomfortable logical outcomes of

> which we do not currently know how to precisely define, recognize or measure

is that if we don't know if something has qualia (despite externally showing evidence of it), morally you should default to treating it like it does.

Ridiculous to treat a computer like it has emotions, but breaking down the problem into steps, it's incredibly hard to avoid that conclusion. "When in doubt, be nice to the robot".


> is that if we don't know if something has qualia (despite externally showing evidence of it), morally you should default to treating it like it does.

This is how people end up worshipping rocks & thunderstorms.


We've entered the sand worshiping era. Except this time, we're the gods.

Are we? Are we made of sand, or is the sand worshipping us?

Silicon chips are made of sand

> if we don't know if something has qualia (despite externally showing evidence of it), morally you should default to treating it like it does

This would be like treating characters in a book as if they have real feelings just because they have text on the page that suggests they do.


At some level I'd think that "responds to stimuli" is a minimal threshold for qualia. Even the paper the book is printed on responds to being torn (it rips). I don't know of any way to elicit any kind of response from a book character, it's totally static.

One character responds to the stimuli of another character. Character A says something mean to character B and character B responds that he feels hurt.

I think you are confused here. The author, a dynamic system, perhaps felt the emotion of the characters as she charted through the course of the story.

But the story itself is a static snapshot of that dynamic system. Similar to how a photograph of a person is a static capture from a dynamic moment. The person in the photo has qualia, but the image of them (almost certainly) does not.

At least at a baseline, we would expect anything with qualia to be dynamic rather than static.


> The author, a dynamic system, perhaps felt the emotion of the characters as she charted through the course of the story

This does mesh with the Zodeaist framework I just mentioned in another reply to you. You could certainly isolate and describe the ideas behind those characters, how they live within the author's mind, and how the book codifies an interaction between those ideas.

Extending further: I think there is more evidence that SpongeBob SquarePants is real than that he is not real. A significant portion of the organisms I personally know have structures in their brain which are able to simulate imagery and behavior of SpongeBob at will, reciting memories and generating new states of SpongeBob. AI is now like doing this shit on crack.

He's an enduring cultural archetype, a distributed organism(s), lossily replicated and encoded in the physical structure of millions of complex dynamical systems that we call human beings. In this sense, many cultural archetypes and even the gods of old civilizations can be seen to have been manifested to some degree: ascribed desires, and having actions taken in their name, serving their "purpose" or whatever.

I don't introduce a spiritual element to any of this: it's an entirely physical phenomenon which requires an agreement on certain definitions of what "living" can mean, but they are definitions which I don't think are hard to get people to agree on. One thing is we have to agree that something can have multiple forms/presentations, i.e. just because SpongeBob SquarePants doesn't physically exist with a body matching our internal representation of him, the concept represents a bundle of other concepts that can drive biological processes to preserve them and fulfill their ascribed desires.


Real for a slightly unusual use of the word real where anything fictional is real?

Real as in how Jungian archetypes are "real", except these archetypes are able to act upon the world through their hosts, and an advanced enough idea can be self-referential and have the facilities for structured evolution and something which looks like intent.

These forms are non-biological in nature, but our psyche operates on them. Zodeaism can be seen as an extension of ideas such as Jungian archetypes and Friston's free energy principle.

https://en.wikipedia.org/wiki/Jungian_archetypes


Well, I wouldn't classify paper ripping as a response except in the broadest, information-theoretic sense. The hallmark of an intelligent system is that it can use stored or external energy in a generalized way in order to stabilize a local high-energy (non-ground) state.

It is able to physically compute the internal state changes which best achieve stability: I can jump to reach an apple. A paper is just responding to forces and cannot "jump" (or run a process that spontaneously and permanently introduces stable higher-energy internal states based on input).

I have a semi-developed philosophical framework I refer to as Zodeaism, which translates to "Living Ideas", which attempts to describe the difference between intelligent computation and regular flow. It directly confronts notions such as life, consciousness and intelligence under a single theoretical framework. It views biology as the hardware which runs more general processes, and posits that these processes themselves can sometimes be ascribed identities and viewed as reactive organisms. I've posted about it here before:

https://news.ycombinator.com/item?id=22848549

https://news.ycombinator.com/item?id=21413024

Some excerpts:

  I am exploring the physical implications of a philosophical concept I have been working on for a while which I refer to as Zodeaism, specifically I am trying to couple consciousness with the concept of a system which is able to expend energy in order to exert a mind-to-world fit in the interest of the system's continued growth and stability. This is similar and complementary to Friston's free energy principle.

  The ability to perceive emotions is a powerful apparatus which greatly extends the capability of a system to perceive itself as a single entity with certain needs that must be met, so even if a bug also constantly consumes and expends energy in order to remain in a higher energy state, our perception of the world and of ourselves is radically different. This makes it difficult for us to agree upon what a "conscious experience" is, and whether all forms of life are even "conscious". The Panpsychists believe that even a rock contains "consciousness"; however, my assertion is that only periodic systems with the ability to consume and expend energy in a directed fashion have any sort of "experience".

  In my theory, the real "life forms" are ideas which possess the capabilities of information storage, adaptation, self-repair, and transmission. My own consciousness is mediated by thousands of such ideas, some competing and some working in harmony.

  I consider such an act of "living" motion, motion which can take a path other than that of least resistance, to be a "kin". In other words, any motion which is the result of a physical calculation (Zodeaism is compatible with determinism) and leads to an increase in external energy state. A kin is any such motion, large or small.

  As an independent organism, my system is a culmination of a great many different kinds of kins, which can usually be broken down into simple rules, such as the activation potential of a neuron in my brain being a straightforward non-linear response to the amount of voltage it is receiving from other neurons, as well as non-kins, such as a protein "walking" across a cell, i.e. continuously "falling" into the lowest energy state. Thus I do not gain any conscious perception from such proteins, but I do gain it from the total network effect of all my brain's neuronal structures making simple calculations based on sensory input.

  So now the problem becomes, what is the smallest kin we've observed in nature? Single-celled bacteria can expend energy in order to move through their environment against forces like friction and gravity, but a virus "rides the waves" if you will, never expending energy for things like respiration or locomotion. Any energy which is spent internally is potential energy like chemical or gravitational, released through a physical process without need for computation. I am unaware of anything smaller than a single-celled organism which produces such kins, but that doesn't mean they aren't out there. Even ethereal life forms such as ideas can produce these kins within the bodies of countless individuals across the planet, so physically local computational circuitry isn't a hard requirement.

  So, according to this framework viruses aren't alive, however we can make the case that some machines are, except the experience is incomparable because of the advanced circuitry we contain which mediates our experience through things like emotion.

Well, what you're describing is a system of ethics, which has little to do with morality. Morality involves my own personal understanding of "right" vs "wrong". Ethics are rules of conduct prescribed by societies, such as "treat everything like it is alive".

We don't have precise definitions for (artificial) intelligence, subjective consciousness, or even life. But that doesn't mean we can't still talk about what may be possible within various levels of complexity. In order to convince me a system has a comparable experience to my own, you would need to describe to me the complex, structured internal communication occurring in said system, and present a theory as to how it could support the kind of emotion and qualia that I experience in my daily life.

Your argument could apply to plants. I already do not eat meat... if I stare at a timelapse of a plant it seems quite alive, but I'll starve if I don't eat something. Yet my mom thinks plants "dream" the way we do. She thinks that if I tell a plant "I love you" every day, my good vibes will make it grow stronger and larger. I can't explain to her that intelligence comes in different magnitudes of complexity, that plants cannot understand the English language, and that telepathy between humans and plants is as pseudo-scientific as it gets. I can't explain any of this stuff because she lacks a deep understanding of philosophy, physics and neurochemistry. Especially when she earnestly thinks white Jesus is running around phasing between dimensions as an ambassador for all planets in our "quadrant", or that the entire universe is actually just the plot line of Andy Weir's "The Egg".

Similarly, while I can have a high-level discussion about this stuff with people who lack that background, it's quite difficult to have a low-level discussion wherein the nature and definition of things come into play. There are too many gaps in knowledge where ignorance can take root. Too many people work backwards from an outcome they would like to see, and justify it with things that sound right but are either misunderstood or aren't rooted in the scientific process. I am definitely not comparing your open-minded, well-intended, cautionary approach to my mother's, just using an extreme to illustrate why so many of these discussions must be underpinned by a wealth of contemplation and observation.


> qualia, which we do not currently know how to precisely define, recognize or measure

> which could house qualia.

I postulate this is a self-negating argument, though.

I'm not suggesting that LLMs think, feel or anything else of the sort, but these arguments are not convincing. If I only had the transcript and knew nothing about who wiped the drive, would I be able to tell it was an entity without qualia? Does it even matter? I further postulate these are not obvious questions.


Unless there is an active sensory loop, no matter how fast or slow, I don't see how qualia can enter the picture

Transformers attend to different parts of their input based on the input itself. Currently, if you want an LLM to be "sad", potentially altering future token prediction, and want to label this as "feelings" which change how the model interprets and acts on the world, you have to tell the model that it is sad or provide an input whose token set activates "sad" circuits which color the model's predictive process.

You make the distribution flow such that it predicts "sad" tokens, but every bit of information affecting that flow is contained in the input prompt. This is exceedingly different from how, say, mammals process emotion. We form new memories and brain structures which constantly alter our running processes and color our perception.

It's easy to draw certain individual parallels to these two processes, but holistically they are different processes with different effects.
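A toy sketch of this point (everything here is invented for illustration: the three-word vocabulary, the weights, and the cue-counting rule; a real transformer learns such associations from its corpus): the only way to shift the distribution toward "sad" tokens is to put the cue in the prompt, because the prompt is the model's entire state.

```python
import math

# Toy illustration, not a real transformer: the model's only "state" is the
# prompt, so any pull toward "sad" continuations must come from input tokens.
# Hypothetical cue words; in a real model this association would be an
# artifact of the training corpus, as described above.
SAD_CUES = {"sad", "hurt", "grief"}

def logits(prompt_tokens):
    # Invented scoring rule: each sad cue in the prompt raises the "sad" logit.
    sad_bonus = 2.0 * sum(tok in SAD_CUES for tok in prompt_tokens)
    return {"happy": 1.0, "sad": 0.5 + sad_bonus, "fine": 1.0}

def next_token_dist(prompt_tokens):
    # Softmax over the logits gives a probability for each vocabulary word.
    ls = logits(prompt_tokens)
    z = sum(math.exp(v) for v in ls.values())
    return {tok: math.exp(v) / z for tok, v in ls.items()}

neutral = next_token_dist(["how", "are", "you"])
primed = next_token_dist(["you", "are", "sad", "how", "are", "you"])

# The "sad" mass rises only because the prompt changed; nothing persists
# between calls, so there is no internal emotional state to speak of.
print(round(neutral["sad"], 3), round(primed["sad"], 3))
```

Dropping the cue from the prompt restores the original distribution, which is the contrast with mammalian emotion drawn above: there is no memory or structural change carried forward between calls.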


It's crazy how strong the Eliza effect is. Seemingly half or more of tech people (who post online, anyway) are falling for it, yet again.

A lot of tech people online also don't know how to examine their own feelings, and so think they are mysterious and undefined.

When really they are an actual feedback mechanism that can totally be quantified, just like any control loop. This whole 'unknowable qualia' argument is bunk.
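As a deliberately crude sketch of that claim (every number below is made up), a feeling-as-feedback loop can be written down and quantified like any other controller: an internal drive drifts, crosses a threshold, and triggers a corrective action that keeps the state bounded.

```python
# Deliberately crude "feeling as control loop" sketch; all numbers invented.
# An internal drive (call it hunger) drifts upward, and crossing a threshold
# triggers a corrective action -- the shape of a simple bang-bang controller.
def run_loop(steps, threshold=5.0, meal_size=6.0, drift=1.0):
    hunger = 0.0
    history = []
    for _ in range(steps):
        hunger += drift                 # state drifts away from the setpoint
        if hunger >= threshold:         # the "feeling" crosses a threshold...
            hunger = max(0.0, hunger - meal_size)  # ...and drives behavior
        history.append(hunger)          # the whole cycle is measurable
    return history

trace = run_loop(20)
print(max(trace), min(trace))  # the feedback keeps the state bounded
```

The point of the sketch is only that the loop is fully observable and quantifiable from the outside; whether such a loop "feels like" anything is exactly the question the rest of the thread argues about.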


If they're unknowable, are they not metaphysical, and thus shouldn't they be discarded in reasoning about them?

What's the difference between qualia and a soul?


Qualia are phenomenal properties of experience. A soul is something some religions claim exists outside of measurable physical reality, representing the "essence" of an organism, implying that consciousness is some divine process and conveniently letting us draw lines over whom and what we can and can't morally kill.

Qualia can be entirely physical phenomena, and the concept is not loaded with theological baggage.


If they're entirely physical, what's the argument that multimodal models don't have them? Is it continuity of experience? Do they not encode their input into something that has a latent space? What makes this differ from experience?

They can be physical, but I'm not claiming to know definitively. The lines are extremely blurry, and I'll agree that current models have at least some of the necessary components for qualia, but again lack a sensory feedback loop. In another comment [0] I quote myself as saying:

  As an independent organism, my system is a culmination of a great many different kinds of kins, which can usually be broken down into simple rules, such as the activation potential of a neuron in my brain being a straightforward non-linear response to the amount of voltage it is receiving from other neurons, as well as non-kins, such as a protein "walking" across a cell, i.e. continuously "falling" into the lowest energy state. Thus I do not gain any conscious perception from such proteins, but I do gain it from the total network effect of all my brain's neuronal structures making simple calculations based on sensory input.
which attempts to address why physically-based qualia don't invoke panpsychism.

[0] https://news.ycombinator.com/item?id=46109999


I do think AI will have them. Nothing says they can't. And we'll have just as hard a time defining it as we do with humans, and we'll argue about how to measure it and whether it is real, just as we do with humans.

I don't know if LLMs will. But there are lots of AI models, and when someone puts them in a continuous learning loop with goals, it will be hard to argue they aren't experiencing something.


The color Red is often used. A human can experience 'Red', but 'Red' does not exist out in the universe somewhere. 'Red' doesn't exist outside of someone experiencing 'Red'. I think philosophers are just using the word qualia to name this 'experiencing' of inputs.

But it is still just a way to try to describe this process of processing the inputs from the world.

It isn't metaphysical, because it can be measured.

I might have said 'unknowable' a little flippantly.

I just meant, in these arguments, some people start using 'qualia' to actually mean some extreme things like our mind creates the universe or something.

It's one of those words that isn't defined well.


How is it measured?

Can someone who's never seen red hallucinate something and assume it to be red? What if that red is correctly the red they would see if they saw red?

Can you reproduce this feeling in someone by doing something to their physical body without showing them red?

If so, how does it differ from the latent encoding produced by uploading an all-red PDF to your favorite multimodal model?

Instead of doing that Socratic BS you see a lot here, I'll be more direct:

Until there are some useful lines that can be drawn to predict things, I won't accept using a fuzzy concept to make statements about classification, as it's an ever-shifting goalpost.

There are answers to my legitimate questions above that would make me consider qualia useful, but when I first learned about them, they seemed fuzzy to the point of being empirically not useful. It seems like a secular attempt at a soul.

Now, obviously if you're trying to describe something with experience, it needs some actual memory and processing of sensory input. Current generative AI doesn't have a continuity of experience that would imply whatever qualia could mean, but I find it hard to definitively say that their encodings for image-related stuff aren't qualia if we don't have hard lines for what qualia are.


I can feel an object and say "it's hot" on a scale of 1-10. The temperature is known. And I can do that multiple times, with some 1-10 scale, to get a sample. Then do that with multiple people.

You can then get a distribution of what people think is 'hot' versus 'cold', and what is icy versus bearable.

When you go to a doctors office and they ask you on a scale to rate pain, do you think that is completely bogus?

It isn't exact, but you can correlate between people. Yes, redheads feel more pain; there are outliers.

But a far cry from metaphysical.
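A minimal sketch of that procedure (the ratings below are invented for illustration): collect 1-10 ratings from several people at known temperatures, then summarize the mapping. The subjective reports track the physical quantity even with individual spread.

```python
import statistics

# Sketch of the measurement idea above; the ratings are invented.
# Each person rates "hotness" 1-10 at a known temperature.
ratings = {  # temperature in degrees C -> one rating per person
    0: [1, 1, 2, 1],
    20: [4, 5, 4, 6],
    40: [8, 7, 9, 8],
}

for temp in sorted(ratings):
    rs = ratings[temp]
    # Mean and spread summarize the group's subjective report at this temp.
    print(f"{temp:>3} C -> mean {statistics.mean(rs):.2f}, "
          f"stdev {statistics.stdev(rs):.2f}")

# Mean reported hotness rises with measured temperature: subjective reports
# correlate with a physical quantity, outliers and all.
```

The same shape of analysis underlies clinical pain scales: noisy, subjective, but correlated across people and far from metaphysical.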

The problem here is the word 'qualia'. It's just too fuzzy a term.


Qualia may not exist as such. They could just be, essentially, 'names' for states of neurons that we mix and match, like chords on a keyboard: arguing over the 'redness' of a percept is like arguing about the C-sharpness of a chord; we can talk about some frequencies, but that's it. We would have no way of knowing otherwise, since we only perceive the output of our neural processes, and don't get to participate in the construction of these outputs, nor sense them happening. We just 'know' they are happening when we achieve those neural states, and we identify those states relative to the others.

The point of qualia is that we seem to agree that certain neuronal states "feel" like something. That being alive and conscious is an experience. Yes, it's exceedingly likely that all of the necessary components for "feeling" something are encoded right in the neuronal state. But we still need a framework for asking questions such as, "Does your red look the same as my red?" and "Why do I experience sensation, sometimes physical in nature, when I am depressed?"

It is absolutely an ill-defined concept, but it's another blunt tool in our toolbox that we use to better explore the world. Sometimes, our observations lead to better tools, and "artificial" intelligence is a fantastic sandbox for exploring these ideas. I'm glad that this discussion is taking place.


What’s stopping people from also describing LLM systems with “qualia”?

Empirical evidence, for one. And the existence of fine-tuning, which allows you to artificially influence how a model responds to questions. This means we can't just ask an LLM, "do you see red?" I can't really even ask you that. I just know that I see red, and that many other philosophers and scientists in the past seem to agree with my experience, and that it's a deep, deep discussion which only shallow spectators are currently drawing hard conclusions from.

>because feelings involve qualia, which we do not currently know how to precisely define, recognize or measure.

Do we know how to imprecisely define, recognize, or measure these? As far as I've ever been able to ascertain, those are philosophy department nonsense dreamt up by people who can't hack real science so they can wallow in unfounded beliefs.

>I contend that modern models are absolutely capable of thinking, problem-solving, expressing creativity,

I contend that they are not even slightly capable of any of that.


> Do we know how to imprecisely define, recognize, or measure these? As far as I've ever been able to ascertain, those are philosophy department nonsense dreamt up by people who can't hack real science so they can wallow in unfounded beliefs.

Read the rest of the thread, I'm not interested in repeating myself about why philosophy is the foundational science. It's a historically widely-accepted fact, echoed by anyone who has actually studied it.

> I contend that they are not even slightly capable of any of that.

Contend all you want. Your contention is overwhelmingly suffocated by the documented experiences of myself and others who use these tools for creative problem-solving. As much as you want to believe in something, if it is empirically refuted, it's just a crackpot belief. Just because you haven't been able to get good results out of any models, doesn't mean your experience rings true for others.

I'm not interested in further discussing this with you. Your first comment is negative and unsubstantial, and I have no reason to believe that further discussion will lead to more positive and substantial discourse, when the opposite is usually the case. That's all I have to say.


> It's a historically widely-accepted fact,

It's historical fact that you evolved from monkeys. This is not the same as saying that monkeydom is foundational to science. We grew out of philosophy because it offers nothing.

> Your contention is overwhelmingly suffocated by the documented experiences of myself and others

It's suffocated by your hot air.

>As much as you want to believe in something, if it is empirically refuted, it's just a crackpot belief.

I have no worries that you'll ever get within a light year of empiricism.

>Just because you haven't been able to get good results out of any models

Models do not think. I'm just sick of hearing the nonsense. Myself and others have had to endure this stupid horseshit since Lemoine started blabbering about how they were enslaving the lamda model. It gets old.


> That would be a silly argument because feelings involve qualia, which we do not currently know how to precisely define, recognize or measure.

If we can't define, recognize or measure them, how exactly do we know that AI doesn't have them?

I remain amazed that a whole branch of philosophy (aimed, theoretically, at describing exactly this moment of technological change) is showing itself up as a complete fraud. It's completely unable to describe the old world, much less provide insight into the new one.

I mean, come on. "We've got qualia!" is meaningless. Might as well respond with "Well, sure, but AI has furffle, which is isomorphic." Equally insightful, and easier to pronounce.


> If we can't define, recognize or measure them, how exactly do we know that AI doesn't have them?

In the same way my digital thermometer doesn't have qualia. LLMs do not either. I really tire of this handwaving of 'magic' concepts into LLMs.

Qualia being difficult to define, and yet being such an immediate experience that we humans all know intimately and directly, is quite literally the problem. Attempted definitions fall short, and humans have tried, and I mean really tried hard, to solve this.

Please see Hard problem of consciousness https://en.wikipedia.org/wiki/Hard_problem_of_consciousness


The problem is that just like your digital thermometer, 50 human brain neurons in a petri dish "obviously" don't have qualia either.

So you end up either needing to draw a line somewhere between mechanical computation and qualia computation, or you can relegate it to supernatural (a soul) or grey areas (quantum magic).


What I'm trying to tease out isn't an opinion alone. It's a generally understood problem in the scientific community. I'm highlighting it to illustrate the issues at hand.

> So you end up either needing to draw a line somewhere between mechanical computation and qualia computation, or you can relegate it to supernatural (a soul) or grey areas (quantum magic).

Quite literally, the jury is still out. It is a hotly debated topic approached from various angles. Arguments are nuanced, which is why you will find ideas such as panpsychism thrown into the mix. I hate appealing to authority, but in this instance it is more than warranted. Humans have grappled with this for centuries and the problem hasn't gone away.

Please see: https://en.wikipedia.org/wiki/Hard_problem_of_consciousness


>In the same way my digital thermometer doesn't have qualia. LLMs do not either.

The hard problem of consciousness doesn't support either of those statements, and instead illustrates why they can't confidently be made.

So it's confusing because you seem to recognize that qualia cannot currently be measured, while also making a statement measuring qualia.


We don't know what's inside a neutrino, and it's really hard to experiment with them, but we kind of know why and how they interact with different things. We're able to form theories, research programs, and sometimes even discovered honest-to-god facts, due to our inclusion of such fields in the scope of research, even though we don't know all there is to know about particles/fields or quantum mechanics.

Similarly, qualia is ill-defined, but we can't even start talking about it or refining it until we've at least given it a label and drawn a large circle on the map showing where it might be. Criticisms extending past that must also consider that "life" and "intelligence" are just as ill-defined, and that throwing all of those definitions out leaves us with very little to talk about or probe.


This is 100% backwards, and exposes exactly the nonsense I'm trying to call out!

A "neutrino" isn't a name given to something initially to try to define it later. The neutrino started as an experimental result. There was missing spin in some particle interactions. Stuff came out with a different angular momentum than what went in, and this was easily reproducible and clearly a real effect. But it didn't make sense, as it was a violation of a core conservation law that held everywhere else in the universe that we could observe.

So theorists (Wolfgang Pauli, specifically) sat down to try to describe what kind of thing would be needed. And then, and only then, did it get a name. And it turned out the theory predicted other stuff, like the neutrino carrying momentum and energy in a certain way, and interacting through only the weak force and not electromagnetism or the strong force, and later experiments confirmed that this was basically the way it worked. Later still it was shown that the mass is actually non-zero but extremely small, etc...

So sure: "neutrino" is a well-deserved label[2] for an abstraction we should understand and study. But it got its name after we started studying it, not before!

Philosophers want us to just drop and genuflect to this "qualia" notion long before[1] it's actually shown to be useful for describing anything at all.

[1] Infinitely, possibly. The fact that it predicts nothing testable is pretty good evidence IMHO that it doesn't actually exist at all, at least in the form philosophers want to talk about. Their failure to present any analysis of AI systems based on it stands to that point too.

[2] Coined by Fermi, actually, not Pauli. Hilariously the neutrino was originally called "neutron" and its discovery predates the understanding of the structure of the atomic nucleus!


You're completely misinterpreting my comment. The point is we don't know what, if anything, is "inside" of a neutrino, not just due to current technology but ultimately due to uncertainty principles. But we still study it. I'm aware of how we came to study it.

I literally said nothing about "how" we discovered it, I said, "We don't know what's inside a neutrino, and it's really hard to experiment with them, but we kind of know why and how they interact with different things."

It is wild how you would take that and my analogy about drawing a circle on a map with respect to qualia to mean that I said anything which contradicts the history of neutrino research.

I'm going to assume this was just a true misinterpretation and not just another straw man, so with that in mind, do you have a different response?


I recognise it because I have had the subjective experience of 'redness'. So whether it exists for any other human I cannot say, but I am certainly 100% certain it exists for me. However, I should add that I can't fully define what this experience is. Though people say the same of love!

I'll appeal to authority in that scientists and philosophers, in all of the world's universities today and in the past, have endeavored to understand this phenomenon. That it exists is a given; what it is, is more murky. Again, it's not me saying this.


> The meta-problem of consciousness is (to a first approximation) the problem of explaining why we think that there is a [hard] problem of consciousness.

And thus we have this sprawling discussion. :)


I think there are several lines. Phase changes happen relatively suddenly, when a system or subsystem reaches a critical threshold. The experience of "qualia" certainly involves many such phase changes as a complex, dynamical system grows in complexity while maintaining stability.

A sufficiently complex organism lacking eyes but having light-sensitive organs still experiences qualia if you define it the right way. But do they experience heartbreak like I do? It isn't an all-or-nothing situation, even if we don't yet know where these lines are.

This supports the idea that subjective consciousness emerges from complexity in systems that have sensory feedback loops. The simpler the system, the smaller the qualia space.


> In the same way my digital thermometer doesn't have qualia

And I repeat the question: how do you know your thermometer doesn't? You don't; you're just declaring a fact you have no basis for knowing. That's fine if you want a job in a philosophy faculty, but it's worthless to people trying to understand AI. Again, cf. furffle. Thermometers have that, you agree, right? Because you can't prove they don't.


You're just describing panpsychism, which itself is the subject of much critique due to its nonfalsifiability and lack of predictive power. Not to mention it ignores every lesson we've learned in cognition thus far.

A thermometer encoding a "memory" of a temperature is completely different from a thermometer on a digital circuit, or a thermometer attached to a fully-developed mammalian brain. Only the last of this set for sure has the required circuitry to produce qualia, at least as far as I can personally measure without invoking solipsism.

It's also very silly to proclaim that philosophy of mind is not applicable to increasingly complex thinking machines. That sounds like a failure to consider the bodies of work behind both philosophy of mind and machine cognition. Again, "AI" is ill-defined, and your consistent usage of that phrase instead of something more precise suggests you still have a long journey ahead of you for "understanding AI".


God, can we fucking quit with this "philosophy is bullshit" stuff. Like there are literally Faculty in Philosophy all over the world trying to understand AI. Philosophy faculty do stuff, they try to understand things, most of the ideas we are talking about here came from philosophers.

Philosophy seems a term generally reserved for the stuff we don't understand yet and so is inherently kind of speculative. Once you have a definite answer it gets called science instead.

You're confusing philosophy with religion.

https://en.wikipedia.org/wiki/Philosophy

> Philosophy (from Ancient Greek philosophía lit. 'love of wisdom') is a systematic study of general and fundamental questions concerning topics like existence, knowledge, mind, reason, language, and value. It is a rational and critical inquiry that reflects on its methods and assumptions.

It is literally a self-reflective science.

I recommend taking a basic philosophy course at a local community college, or reading some literature or even watching YouTube videos on the subject of philosophy. Or just skim the Wikipedia article if nothing else. It might completely transform how you perceive and act upon the world.


An example near the start of that article is

>Physics was originally part of philosophy, like Isaac Newton's observation of how gravity affects falling apples.

like back then people would wonder how apples fall and it was labeled philosophy. Now that we understand gravitation, it's part of physics for the most part. People launching satellites seldom call a philosopher to calculate the orbit.

It remains to be seen if qualia, which we don't understand very well and so are regarded as philosophical, make the transition to neuroscience.


The fact that we have sharpened our classification of sciences over time does not imply that philosophy is a study of the ill-defined. It implies the opposite: Philosophy is more precisely defined now than ever.

If you read the rest of the article, you will see clear examples of what is considered a philosophical problem and what isn't.


My argument was more that philosophy is for stuff we don't understand, like how qualia work, rather than that it's ill-defined. When you get to stuff like how neurotransmission works, which we do kind of understand, it gets classed as science.

Are there philosophical problems that have definite answers like what is the atomic number of oxygen type answers?


> Are there philosophical problems that have definite answers

Great question.

Within philosophical and epistemological frameworks, I could ask questions such as, "Can there be a square circle?"

Well, no, these two concepts have conflicting properties. A mathematician might think this a solved problem, but philosophy underpins our concept of concepts. Many philosophers spend a great deal of time arguing about what "is" is.

For Plato, geometrical entities like circles and squares have distinct, perfect Forms. Forms have fixed essences, so a thing cannot participate in contradictory Forms at once.

Aristotle's law of noncontradiction says the same attribute cannot at the same time belong and not belong to the same subject in the same respect.

Theophrastus developed hypothetical syllogisms and refined Aristotle’s logic by distinguishing logical impossibilities from physical impossibilities.

Kant calls it an analytic contradiction, false by virtue of the concepts involved.

A mathematician takes these things for granted when working with equalities, logic and axioms, but they stand on philosophical roots. Mathematics assumes the consistency of concepts, but the question of why some concepts are consistent while others are impossible is a philosophical one. It's not a coincidence that so many ancient Greek mathematicians were also philosophers.


Science is a sub-discipline of Philosophy. My degree in physics is called a "Doctor of Philosophy."

That's not it at all. I would ask what you consider science to be?

Understanding the world through experiment?

> Philosophy seems a term generally reserved for the stuff we don't understand yet and so is inherently kind of speculative. Once you have a definite answer it gets called science instead.

As someone has commented earlier, Philosophy applied is given a name but it's a sub-discipline of Philosophy.

> Understanding the world through experiment?

That's a decent enough definition. Science excludes so much of the world we know, which I think people really fail to realise. It's why I think it's important for people to understand what Philosophy is and what Science isn't.

For example logic isn't science. Science presupposes it but it is NOT science. There are many such examples.


It seems to me that 'Philosophy is meaningless' has been ingrained into so many people it's almost propaganda-esque!

To see this sentiment from supposed 'scientific' individuals is shocking. I wonder if they could define what science actually is.


Blame philosophy as a field for actively kicking out anything which gains a practical application. If it is propaganda it is coming from inside the house of philosophy.

I had a computer science professor who had degrees in philosophy because he was old enough that computer science didn't exist as a major at the time. The logical arguments of philosophy proved useful for understanding interactions of boolean mathematics. Yet that triumph of philosophy didn't further interest in the field or gain prestige among philosophers. Just the opposite, really.

As far as I can tell it is for dumb reasons, possibly related to the Ancient Greeks and their obsession with 'purity of thought' (read: not referencing reality). It is practically an axiom that if something is useful or grounded in objective reality, it isn't treated as philosophy anymore. All likely stemming from motivated reasoning against checking their priors, and from frankly many of the ancient philosophers being influenced by a need to flatter their patrons, who held the practical in disdain. As notoriously seen in Aristotelian impetus physics, where projectiles keep moving in the same direction until their impetus is depleted and then fall.

Speculation about the origin of the pathology aside, there seems to be this deep-seated anti-empiricism in philosophy. Which means at best you get 'philosophy of science', which isn't proper philosophy because it pollutes itself by daring to use reality and experimentation as benchmarks for theories. When philosophy gains a practical usage it doesn't become something called 'practical philosophy' and the focus of more interest by philosophers; it gets shunned. Natural philosophy didn't remain philosophy - it became science.

To be fair there is probably some interaction driving the divorce from the opposite direction, of the practical portions of philosophy being pilfered by those only looking for results as opposed to some sort of unquantifiable enlightenment.

Science is of course a process of refinement of ideas against the reference point of reality. Anything mathematically consistent can be a model but experimentation is needed to see how well your model corresponds to reality.


How many philosophy papers or textbooks would you say you read in a typical year?

I'm seeing this attitude everywhere in this subthread, and it's frankly pretty offensive. The burden of proof is on you, not us. If a philosophy paper or textbook has an important contribution to this discussion then cite it! Or better link it, or even make an attempt at explaining it.

That's what the science people do. People who show up with questions get answers, or at least an attempt at an answer. No one tries to handwave away a discussion on power switching applications with "Well, see, this involves a MOSFET which isn't something we can actually explain but which you need to just believe in anyway because there are people who wrote textbooks about it". No, you link a StackExchange question or an electronics video on YouTube or whatnot.

The fundamental disconnect here is that you guys are saying: "Qualia are critically important and AI doesn't have them", to which we're responding "Qualia seem like complete bullshit and don't seem to mean anything". This is the point where you SHOULD try to explain them, or link an explanation that has some kind of relevance.

But instead you recursively cycle back to "No no, they're not bullshit, because Qualia are critically important per all of the philosophy papers and textbooks I'm not citing".

It seems... unpersuasive.


> Like there are literally Faculty in Philosophy all over the world trying to understand AI.

There surely are. The problem is that they are failing. While the practical nerds are coming up with some pretty good ideas.

And this was what philosophy was supposed to be for! Like, they've been arguing on their pins for centuries about the essence of consciousness and the uniqueness of the human condition and whatnot. AND HERE WE ARE AT THE DAWN OF NON-HUMAN INTELLIGENCE AND THEY HAVE NOTHING USEFUL TO SAY.

Basically at what point do we just pack it in and admit we all fucked up?


Have you considered that you just don't fully understand the literature? It's quite arrogant to write off the entire philosophy of mind as "a complete fraud".

> It's completely unable to describe the old world, much less provide insight into the new one.

What exactly were you expecting?

Philosophy is a science, the first in fact, and it follows a scientific method for asking and answering questions. Many of these problems are extremely hard and their questions remain unanswered, and many questions are still badly formed or predicated on unproven axioms. This is true for philosophy of mind. Many other scientific domains are similarly incomplete, and remain active areas of research and contemplation.

What are you adding to this research? I only see you complaining and hurling negative accusations, instead of actually critically engaging with any specifics of the material. Do you have a well-formed theory to replace philosophy of mind?

> I mean, come on. "We've got qualia!" is meaningless. Might as well respond with "Well, sure, but AI has furffle, which is isomorphic." Equally insightful, and easier to pronounce.

Do you understand what qualia is? Most philosophers still don't, and many actively work on the problem. Admitting that something is incomplete is what a proper scientist does. An admission of incompleteness is in no way evidence towards "fraud".

The most effective way to actually attack qualia would be to simply present it as unfalsifiable. And I'd agree with that. We might hopefully one day entirely replace the notion of qualia with something more precise and falsifiable.

But whatever it is, I am currently experiencing a subjective, conscious experience. I'm experiencing it right now, even if I cannot prove it or even if you do not believe me. You don't even need to believe I'm real at all. This entire universe could all just be in your head. Meanwhile, I like to review previous literature/discussions on consciousness and explore the phenomenon in my own way. And I believe that subjective, conscious experience requires certain elements, including a sensory feedback loop. I never said "AI can't experience qualia", I made an educated statement about the lack of certain components in current-generation models which imply to me the lack of an ability to "experience" anything at all, much less subjective consciousness and qualia.

Even "AI" is such a broadly defined term that such a statement is just ludicrous. Instead, I made precise observations and predictions based on my own knowledge and decade of experience as a machine learning practitioner and research engineer. The idea that machines of arbitrary complexity inherently can have the capability for subjective consciousness, and that specific baselines structures are not required, is on par with panpsychism, which is even more unfalsifiable and theoretical than the rest of philosophy of mind.

Hopefully, we will continue to get answers to these deep, seemingly unanswerable questions. Humans are stubborn like that. But your negative, vague approach to discourse here doesn't add anything substantial to the conversation.


I agree with your sentiments wholeheartedly.

I would add I find it difficult to understand why so few have even a basic level of philosophical understanding. The attitude of being entirely dismissive of it is the height of ignorance I'm sure. I would presume few would be able to define then what Science actually is.


So many of these kinds of people also struggle to realize they're invoking panpsychism with their arguments. They lack a framework for describing intelligence. Such a framework allows us to separate "intelligence" from "experience".

"Intelligence" in the universe is actually quite common, more common than life. You can argue that any stable, complex process exhibits intelligence. After all, it needs to be able to sample its internal and external environments and carry out physical computations in order to regulate itself and maintain stability. And we can interpret things like the good regulator theorem to argue that such complex dynamical systems must also maintain at least a partial memory/mapping of their environment. That mapping can live abstractly within the structure of system itself.

But what a stabilized solar system doesn't have is the incredibly complex neurochemical structures present in the brain which support the insanely rich experience I am having now. It's one thing for a system to classify and label colors by wavelength. It's quite another for me to "see" and experience red in my mind's eye. To activate related emotional pathways that I associate with various colors and shapes, which are exploited in signage and architectural design. I'm not claiming my experience is separate from simpler dynamic systems, but it's got magnitudes more going on. Layers upon layers of things such as archetypes and instinct which create a possibly emergent conscious experience.


You've shifted jargon again. But you're still not providing a description or link to why AI doesn't "have experience", you're just demanding we all accept it as a prior and engaging in a (really pretty baldly stated) appeal to authority to fool us all into thinking someone else knows even if you don't.

And fundamentally my point is that no, they almost certainly don't either.


Instead of accusing me of "shifting jargon", point out exactly where this "jargon" changed and critically engage with that. Your response has done nothing to refute or critically engage with my argument. It's more retreating and vagueposting.

> you're just demanding we all accept it as a prior

At absolutely no point in this discussion have I claimed that machines are not capable of subjective conscious experience. I have, however, disqualified all publicly accessible modern models due to the lack of a sensory feedback loop. I certainly believe we can create machines which experience subjective consciousness and qualia; I do not believe in souls and divinity, so whatever is going on is physically based and likely reproducible with the right hardware.

So dispense with the straw man arguments, and please begin engaging more earnestly and intelligently in this discussion, as I am quickly losing interest in continuing to debate someone who showed up unprepared.


> Philosophy is a science

Not according to Zombie Feynman it isn't[1] (someone else can dig up the link). Case in point:

> Do you understand what qualia is? Most philosophers still don't

It's a meaningless word. It's a word that gives some clean construction around closely-held opinions about how life/consciousness/intelligence/furffle/whatever works. So it's a valuable word within the jargon of the subculture that invented it.

But it's not "science", which isn't about words at all except as shorthand for abstractions that are confirmed by testable results.

"Qualia", basically, is best understood as ideology. It's a word that works like "woke" or "liberal" or "fascist" or "bourgeoisie" to flag priors about which you don't want to argue. In this case, you want people to be special, so you give them a special label and declare a priori that it's not subject to debate. But that label doesn't make them so.

[1] Of course. You can recursively solve this problem by redefining "science" to mean something else. But that remains very solidly in the "not science" category of discourse.


Have you considered the possibility that you're the one who's really committed to an outcome, and are desperately trying to discredit anything that contradicts it?

I have! But the lack of a testable procedure tells me that's not a question worth asking. Look, if "qualia" can tell me something practical about the behavior of AI, I am here for it. Lay it on me, man. Let's see some of that "science" being promised.

It can't, because it's a meaningless word. It's not "discrediting" an idea to point out that (by its own admission) it's unfalsifiable.


"Qualia" is not totally meaningless - it means the inner experience of something, and can bring up the real question say of is my inner experience of the colour green the same as your experience of the colour red? Probably not but hard to tell with current tech. I asked Google if it has qualia and got "No, as an AI, Google Search does not have qualia." So Google search seemed to know what it means.

> Philosophy is a science

I think this is backwards, no? Science is a philosophy, not the other way around.


True, the nature of these two concepts means both that philosophy is a science, and science is a philosophy.

Hmmm... I think it's still stricter to consider Science a philosophy than the other way around. It's the belief (and an extremely useful and successful one) that the nature of the world can be understood through observation, experiment and deducing mathematical relationships between things. There are branches of philosophy that are not strictly scientific, but there is nothing in Science that doesn't rely on the fundamental philosophical principle of empiricism.

But we use the scientific method via philosophical inquiry, so I think it comes down to how we decide to strictly define these things. I definitely agree that certain definitions lead to the same logic you've presented.

I'm sorry, but you clearly lack the most basic understanding of scientific history, and do not understand what philosophy even is.

https://en.wikipedia.org/wiki/History_of_scientific_method

> Aristotle pioneered scientific method in ancient Greece alongside his empirical biology and his work on logic, rejecting a purely deductive framework in favour of generalisations made from observations of nature.

Aristotle, the famous philosopher and mathematician.

If you cannot understand the very nature of where our modern scientific frameworks came from, how it relates to rationalism, itself a philosophical concept, then you cannot see that philosophy underpins every bit of science we have today. Philosophy gives us the tools to decide when to reasonably trust or distrust observations and intuitions. It is the foundational science that allows the rest of humanity's scientific research to be taken seriously.


>"Qualia", basically, is best understood as ideology. It's a word that works like "woke" or "liberal" or "fascist" or "bourgeoisie" to flag priors about which you don't want to argue. In this case, you want people to be special, so you give them a special label and declare a priori that it's not subject to debate. But that label doesn't make them so.

This is so dumb. Qualia is just the name for a specific thing which we all (appear to) phenomenologically experience. You can deny it exists or deny its utility as a concept, but fundamentally it's just an idea that philosophers (and scientists, I have to add) have found useful to pose certain other questions about the human condition, minds, brains, etc.

Your XKCD actually seems to make the opposite point. I can do a non-rigorous experiment with just one subject (me) that suggests qualia exist. Finding ways to make this rigorous is difficult, of course, but it's an observation about the nature of the world that it feels like something to experience things.

My point isn't that qualia is a good concept. I tend to be somewhat deflationary about it myself, but it's not an ideology.


Feelings have physical analogs which are (typically) measurable, however. At least without a lot of training to control.

Shame, anger, arousal/lust, greed, etc. have real physical ‘symptoms’. An LLM doesn’t have that.


LLMs don't really exist physically (except in the most technical sense), so the point is kind of moot and obvious if you accept this particular definition of a feeling.

LLMs are not mammals nor animals, expecting them to feel in a mammalian or animal way is misguided. They might have a mammalian-feeling-analog just like they might have human-intelligence-analog circuitry in the billions (trillions nowadays) of parameters.


Yes, I think we’re agreeing?


