
I believe a lot of odd results that come out of hypotheticals lie with the fact that they're hypotheticals.

The trolley problem makes an inherent assumption that value can be summed by adding up people (i.e., more people > fewer people), and therefore that there is only one truly correct answer. But that is not how decisions are actually made in the real world, and there are very good reasons for that.

For instance: why is there a group of people on one track and a separate individual on the other? Does the group have something in common? Is there something special about the individual? Someone made a judgment call when they created the situation. The end result of either choice is unknown to the person at the lever, so my answer would be that the decision doesn't matter.

Choosing to do something would require performing another judgment call, which is actually very problematic. This is demonstrated fairly well by one of the reframings of the problem: the one where a surgeon kills a healthy person to save 5 dying patients. A world in which such things happen would be considered quite terrible by many, yet this seems to be a valid implementation of the trolley problem. The surgeon could then take his 5 patients and the one healthy person, recreate a trolley scenario, and let someone else deal with it. The correct answer, according to a person who doesn't want healthy people sacrificed for those who need organs, would be to not touch the lever. Yet the person at the lever does not have access to this information, so they do not know what the end result of their choice really is. So the decision doesn't matter.

The responsibility is indeed with whoever created the situation. Another party getting involved may make it better or worse by sheer luck, but "more people > fewer people" is, in my opinion, a horrible philosophy, and this implicit assumption should be thrown out of the problem. Once you do that, though, the problem disintegrates.

I would generally be wary of critique of intuition based on failure in hypotheticals. Intuition wasn't designed around hypotheticals and doesn't believe in them.




Everything both you and the parent poster said boils down to "run away from the question because I don't like either of the answers." Which is a very human response, but it throws the baby out with the bath water.

To be really useful, these thought experiments need to be considered in the least convenient possible world - assume that there is explicitly no way to avoid the central point (see http://lesswrong.com/lw/2k/the_least_convenient_possible_wor... for further reading). So in this case, it doesn't matter whose fault the predicament is. You don't know anything about the individuals involved except the number killed by each of the two choices. "Optimise net lives saved" is indeed a horrible strategy, but under the least convenient possible circumstances, can you come up with a better one?

Your comment on intuition failing on hypothetical scenarios is on the mark, and that's the point of this kind of scenario - to see in what way intuition fails, and how it fails in different directions based on presentation (e.g., we'll passively let 5 people die rather than actively kill one, but we'll save 5 people even if it involves indirectly killing one).



