> I think this is wrong, because in general, when analogy is good, it is typically good because of the tendency toward allowing for reflex responses. It can't be good and bad for the same reason. It needs to be for a different reason or there isn't logical consistency.
That's some weird reasoning. Human emotions are crucial to human existence, but we know they can also have bad results. When emotions are useful to us, it's because we know other people will react similarly to us in a consistent manner. When they're bad, it's generally because someone understands a reaction and is using it to get something unrelated to our personal needs and desires.
>> ...programmed to maximize the profits of Microsoft
> This isn't the objective function of the model. That it might be an objective for people who worked on it does not mean that its responses are congruent with actually doing this.
It will be. You can observe the evolution of Google's search system: it has converged to its current state of pushing things to sell before everything else. The charter of a public company is maximizing returns to shareholders. That is the task of the entire organization.
--> Your fix of my argument is OK, but it's pretty easy to imagine it and others from the initial argument, imo.
> It will be. You can observe the evolution of Google's search system: it has converged to its current state of pushing things to sell before everything else. The charter of a public company is maximizing returns to shareholders. That is the task of the entire organization.
Yeah, probably it will evolve in that direction. I could imagine that happening.
> That's some weird reasoning.
In the AI textbooks I've read, reflex is defined in the context of a reflex agent. You would have sentences like "a reflex agent reacts without thinking" and then an example of that might be "a human who puts their hand on a stove yanks it away without thinking about it", and this is rational because the decision problem doesn't call for correct cognition - it calls for minimizing response time so that the hand isn't burned. To me, when you say reflex decision making is the reason for the danger, that seems like an inconsistent reason, because for other decision-making problems reflex is a help, not a hindrance. I do not consider it wrong or weird reasoning to use definitions sourced from AI research. I think, given your confusion at my post, you probably weren't intending to argue that being faster means being wrong, but the structure of your reply read that way to me because of the strong association I have between that word and reflex as it relates to optimal decision making by an AI under time constraints. I also think that is what you actually said, even if you didn't intend to, but I don't doubt you if you say you meant it another way, because language is imprecise enough that we have to arrive at shared definitions in order to understand each other, and it is by no means certain that we start with shared definitions.
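Just to pin down the definition I'm working from, here's a toy sketch - the names and the rule are made up by me, it's only the textbook condition-action idea, not anyone's actual implementation:

# A simple reflex agent maps the current percept directly to an action
# through a fixed condition-action rule, with no deliberation in the loop.
def simple_reflex_agent(percept):
    if percept == "hand_on_hot_stove":
        return "yank_hand_away"  # minimize response time, don't reason about it
    return "do_nothing"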
I'm also kind of way too literal sometimes. Side effect of being a programmer, I suppose. And I take this subject way too seriously, because I agree with Paul Graham that the surface area of a general idea multiplies its impact potential. So I'm trying really, really hard to think well - for example, I've been thinking about this almost continuously, whenever I reasonably could, ever since my first reply, unable to stop.
It is 1:32 AM for me. I've spent multiple continuous hours thinking about this, writing about this, and trying to be clear in my thinking, because I find it so important. So hopefully that gets across how I am as a person - even if it makes me seem really weird.
> Your fix of my argument is OK, but it's pretty easy to imagine it and others from the initial argument, imo.
I'm really trying to drive at the deeper fundamental truths. I feel like logic and analogy are really important and profound, worthy of countless hours of thought, and that the effort will ultimately be rewarded.
> You would have sentences like "a reflex agent reacts without thinking" and then an example of that might be "a human who puts their hand on a stove yanks it away without thinking about it", and this is rational because the decision problem doesn't call for correct cognition - it calls for minimizing response time so that the hand isn't burned.
We have to be specific about what we're discussing. The human reflex to pull away from a hot stove serves the human; the human gets a benefit from the reflex in the context of a world that has hot stoves but doesn't have, say, traps intended to harm people when they manifest the hot-stove reaction.
Some broad optimization algorithm, if it trained or designed actors, might add a heat reflex to the actors in the hot-stove world context, and these actors might also benefit from it. The action of the optimization algorithm would qualify as rational. A person who trained their reflexes could similarly be considered rational. However, the reflex itself is not "rational" or "good" but simply a method or tool.
Which is to say, you seem to be implicitly stuck on a fallacious argument: "since reflexes are 'good', any reflex reaction is 'good' and 'rational'". That is certainly not the case. In particular, the modern world we both live in often presents people with communication intended to leverage their reflexes to the benefit of the communicator, and often against the interests of those targeted. Much of it is advertising and some of it is "social engineering". The social engineering example is something like a message from a Facebook friend saying "is this you?" with a link, where if you click the link, it will hack your browser and use it to send more such links, as well as take other harmful-to-you actions.
It seems like your arguments suffer from failing to make "fine" distinctions between categories like "good", "rational", and "useful-in-a-situation". They are all valid things, but they aren't the same. Analogies can be useful, but they aren't automatically rational or good. This began with me saying "this isn't inherently good or rational, though it can be useful-in-a-situation", and you taking me to be saying that analogies aren't good, or are bad, which I'm not saying either.
You seem to have thought I was talking about the utilities of `f` but I wasn't. I not only see the distinction you are talking about, but I'm making still further distinctions. To make it easier to avoid confusion, I'm just going to write some code to explain the distinction rather than trying to use just language to do so.
# Analogy is basically saying things are similar. For example, a good analogy to a function is that same function, but cached.
analogy = memoized(f)
# This is a good analogy because of the strong congruence
[f(x) for x in domain(f)] == [analogy(x) for x in domain(f)]
# But the thing that makes us want to use the analogy is that there are differences
benchmark(f, somePropertyToMeasure) != benchmark(analogy, somePropertyToMeasure)
# For example, with caches in particular, we often resort to them for the time advantage of doing so
benchmark(f, timeMetric) > benchmark(analogy, timeMetric)
# The danger of an analogy breaking down comes when the analogy doesn't actually hold
bad_analogy = memoized(impure_f)
# Because the congruence doesn't hold
[impure_f(x) for x in domain(impure_f)] != [bad_analogy(x) for x in domain(impure_f)]
# All of this matters to the discussion of anthropomorphism because
isinstance(anthropomorphism, Analogy)
isinstance(analogy, Analogy)
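If it helps to see it run, here's the same structure as actual Python. I'm assuming functools.lru_cache as the memoization, and `f` and `impure_f` here are toy stand-ins I made up:

from functools import lru_cache

def f(x):
    return x * x  # pure: output depends only on the input

state = {"n": 0}
def impure_f(x):
    state["n"] += 1
    return x + state["n"]  # impure: output also depends on hidden state

analogy = lru_cache(maxsize=None)(f)
bad_analogy = lru_cache(maxsize=None)(impure_f)
domain = range(10)

# The congruence holds for the pure function...
assert [f(x) for x in domain] == [analogy(x) for x in domain]

# ...but the cached version of the impure function just replays its first
# answers, while the real function keeps moving.
first_pass = [bad_analogy(x) for x in domain]
second_pass = [bad_analogy(x) for x in domain]
assert first_pass == second_pass
assert [impure_f(x) for x in domain] != first_pass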
Okay, now that you see the structure I'm looking at, let's go back to your comment. You said "because reflex considerations" and I took you to be talking about speed.
Imagine you were watching someone be interviewed about caches. They get tossed the question "when cache lookups are done, what is the typical danger?" and they hit it back with "because they are fast". If you then commented that this isn't true, because typically when we use caches we do it because of the performance benefit of doing so, that would be a valid point. Now, since caches are analogies and since anthropomorphism is an analogy, they are going to have similar properties. So the reasonableness of this logic with respect to caches says something about the reasonableness of this logic with respect to anthropomorphism.
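To make the "typical danger" part concrete - the database dict and lookup function below are made up purely for illustration:

from functools import lru_cache

database = {"alice": "alice@old-domain.example"}

@lru_cache(maxsize=None)
def cached_lookup(user):
    return database[user]

cached_lookup("alice")                            # fills the cache
database["alice"] = "alice@new-domain.example"    # the underlying data changes
# The typical danger is staleness - the cached answer is now wrong -
# not the fact that the lookup came back quickly.
assert cached_lookup("alice") == "alice@old-domain.example"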
Hopefully you can see why I think my reasoning is not weird now and hopefully you agree with me? I've tried to be more specific to avoid confusion, but I'm assuming you are familiar with programming terms like memoization and mathematical terms like domain.