Hacker News

I have been following someone's chapter-by-chapter critique of HPMOR for some time now, which might be illustrative. It is admittedly quite long: http://su3su2u1.tumblr.com/tagged/Hariezer-Yudotter/chrono

Among other things, I've often seen people try to sell HPMOR as, "Harry Potter fanfiction in which Harry applies the scientific method to the magical world of Harry Potter!" And that shows up a little [1] but is by and large not the content of the work. Harry does very few experiments to verify his hypotheses and is actually a broadly incurious character. For example: Harry is nominally interested in eliminating death, but never once investigates the many magical mechanisms which seem to eliminate death, preferring to just talk about it instead. Similarly, he comes up with hypotheses as to how magic works, but never bothers to investigate them beyond speculation.

Additionally, a lot of the "solutions" to problems Harry has are relatively unsatisfying: Harry often circumvents a problem by "clever" applications of the rules, but because the rules of magic as given both in Rowling and in Yudkowsky are ill-defined and even self-contradictory, this seems less clever and more like arbitrary author fiat. Harry is also given a small time machine early on in the plot, and so more often than not, the solution to his problems is, "Use the time-turner again."

Finally, I personally found the writing style to be generally in need of editing—not terrible but certainly not polished—but I consider that to be a smaller problem than the above plot-related issues. EDIT: All this doesn't mean that you shouldn't like it! These are just problems that I had with the work.

[1] The bit where Harry tries to experimentally prove that P=NP is probably my favorite part of the entire thing.



While I think the line you were fed about its premise was incorrect, I don't think you read as far as chapter 28, did you?

The thing is... HPMOR isn't about the scientific method. It's about rationality. It's not a story about distilling a universal theory of magic or triumphing over death; it's a story about a kid who's had a rationalist education fumbling his way through a world that refuses to actually make sense.


I read well through chapter 70. I did imply that scientific investigation shows up a little, after all! I agree that the line people gave me wasn't correct, but I've heard that line enough that I felt it was appropriate to dispel that particular notion.

Perhaps also germane is that I am not personally a believer in Less Wrong-style rationality, and so the intellectual content of the work was not relevant to me except in the detached way that philosophical or political or religious schools can be interesting to non-adherents (which is one reason I read as much as I did). Whether the story accomplishes its actual goal, then, is something I can't judge, but I can say that, as a non-rationalist (irrationalist?), I didn't find it to be a good or engaging story.

But as I said in my previous comment, this is my own reaction, and I include it for informative reasons, not because I think others should necessarily share it!


It wasn't apparent in my own comment, but I don't put much stock in the "intellectual content of the work" myself. I dislike LessWrong and self-described rationalists, including Yudkowsky as far as I know him. I agree with su3su2u1 that, insofar as Yudkowsky intended HPMOR as a pedagogical vehicle, it was not really well done.

That said, it is rare that a work of fanfiction manages to be a legitimate deviation: a self-contained work that usefully comments on and reflects its originator. Generally speaking, I prefer to give all fiction the benefit of the doubt: whatever its failures, it is hard to ignore that it was successfully written, and as someone who has tried my hand at it, I know firsthand the challenge involved. There's a "Man in the Arena" quote that belongs here. I did find it to be a good and engaging story, though, if painfully in need of an editor, which is not a unique remark in the land of fanfiction.

If you want to really have fun, compare HPMOR with the Left Behind series, which qualifies as fanfiction as much as anything. There are probably some fascinating parallels to be found with their relationship to their original works and author intentions and the like. You can find a much higher quality counterpart to su3su2u1 in Fred Clark's Slacktivist blog, which dissects it on a page-by-hilarious-page basis.


May I ask what about "Less Wrong-style rationality" you disagree with?


This is the kind of thing which could very easily turn into a flame war, and is much further off-topic. I'm not interested in that kind of argument right now, so all I will say in this thread is this:

My personal experience with Less Wrong-style rationalism, to simplify the situation aggressively, is that it has a core of good, useful tools (Bayesian reasoning, strict positivism, utilitarianism) that I have no problem with. However, when pushed too far, those tools tend to break down—but the rationalist answer to that breakdown is all too often to embrace the model and discount the reality. This general refusal to regard their core tools with suspicion results in beliefs which are paradoxically irrational: when faced with, e.g., utilitarianism condoning torture to prevent mild discomfort, the rationalist response is not, "Perhaps human experience does not map straightforwardly to integers—we should re-examine our tools," but rather, "As our mathematical tools are of course correct, we must believe this conclusion." This belief in math-over-matter plays a major part (though it is not the only factor) in my skepticism towards the kind of rationalism promoted by Less Wrong.


And yet no one has really made a convincing argument for why the conclusions of utilitarianism contradict the axioms of utilitarianism. And no one seems to offer the rather simple solution that utility is non-linear (so dust in one's eye is only a very small fraction as bad as a century of torture), and people who mock Eliezer's utilitarianism hypocritically do so on sweatshop-made computers, clothes, and food.

The strongest argument against rationalist utilitarianism seems to be that people don't like the cognitive dissonance it imposes on hypocrites.


The linearity of utility is irrelevant to the dust-specks-versus-torture problem, because you can always increase the number of people receiving dust specks until their aggregate disutility exceeds that of torturing a single individual for 50 years.
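To make the scaling argument concrete, here is a toy sketch (all numbers are invented for illustration; nothing here comes from the thread): however tiny the per-person harm of a dust speck is made, additive aggregation lets a large enough population outweigh the torture.

```python
# Toy illustration of the dust-specks-vs-torture scaling argument.
# All numbers are invented; only the orders of magnitude matter.
SPECK_DISUTILITY = 1e-12    # assume a speck is astronomically milder...
TORTURE_DISUTILITY = 1e9    # ...than 50 years of torture

def additive_total(per_person: float, n: int) -> float:
    """Classic utilitarian aggregation: per-person harms simply add up."""
    return per_person * n

# However small the per-person term, some population size flips the comparison.
n = 10**24  # a stand-in for the 3^^^3 of the original thought experiment
assert additive_total(SPECK_DISUTILITY, n) > TORTURE_DISUTILITY
# Under additive aggregation, non-linearity of the *individual* harm alone
# cannot rescue the intuitive "specks are fine" answer; you would have to
# change the aggregation rule itself.
```

The point being that the objection targets the aggregation step, not the shape of the per-person utility function.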

http://lesswrong.com/lw/n3/circular_altruism/

And calling people hypocrites because things like involuntary organ donation give them "cognitive dissonance" is absurd. Also regarding hypocrisy, note that Yudkowsky works for an organization whose goal is to protect humanity from Skynet, while people in Africa are starving. (I'm not personally attacking him; he can work on whatever he likes, and I think governments of wealthy nations have both the resources and the responsibility to alleviate starvation, while individual efforts are mostly pissing in the wind. But the parent poster brought up sweatshops &c., so I went there.)


The really funny part of HPMOR is that, while it was intended to demonstrate rationalism, what I got out of it was that rationalism doesn't actually work.

I'm unwilling to post spoilers, which makes this unfairly undiscussable, but the meaning of the original prophecy did not--could not--come down to rationalism, for reasons that were repeated a few times through the fic. I can't tell if this was intentional on Yudkowsky's part, and I don't really care, since it changes nothing about everything else.

For the record, though, when I disparage utilitarianism, I promise it isn't just "Eliezer's".


>And yet no one has really made a convincing argument for why the conclusions of utilitarianism contradict the axioms of utilitarianism.

No, they've said "this is a reductio ad absurdum, perhaps reality is a bit more complicated than that." You don't get to assert your assumption - that simplistic utilitarianism works as advertised - as evidence.

Consider, for instance, minmaxing as an alternative. (Like real-life AI work tends to.) What answer does that give?
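As a sketch of what that alternative looks like (my own illustration, with invented numbers): a maximin-style rule compares outcomes by their worst-off individual rather than by the sum, and so picks the dust specks no matter how many people receive them.

```python
# Invented numbers; each outcome is a list of per-person disutilities.
SPECK, TORTURE = 1e-12, 1e9

def maximin_pick(outcome_a, outcome_b):
    """Prefer the outcome whose worst-off person suffers the least."""
    return outcome_a if max(outcome_a) < max(outcome_b) else outcome_b

many_specks = [SPECK] * 1_000_000   # population size doesn't change the answer
one_torture = [TORTURE]

chosen = maximin_pick(many_specks, one_torture)
print(chosen is many_specks)  # True: maximin chooses specks regardless of N
```

Whether that rule is defensible in general is a separate question; it has pathologies of its own.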



Thanks for explaining your views. My intent was not to spark a flame war; I've heard quite a few critiques of Less Wrong, but most of the authors expressed their disagreement without explaining what they disagreed with (perhaps it was obvious to them).



