It seems likely, but the whole point of censoring extreme comments is to make those commenters an out-group. Comment filtering would have edge cases, just like spam and porn filtering do. Do the dangers outweigh the value of pushing people who speak like this into an out-group?
I don't know how you interpreted Torgo's comment as worrying that inappropriate/extreme commenters would be treated as an out-group. What they said is that they suspect human rating of comments would be based more on (dis)agreement with the popular view (particularly on controversial topics) than on the extremeness/inappropriateness of the content. There's a massive difference between a comment that is 100% civil in its dissent, as in "I'm not convinced that you can attribute those causes to that effect, given XYZ", and "DIE YOU ASSHOLE YOU'RE WRONG I'LL KILL YOU". Torgo's suspicion is that filtering would be done based on disagreement rather than on objectively offensive/uncivil content.
My point is that a comment filtering system is consciously based on the identification and exclusion of a group, which we'll call the out-group. Of course its limits will be debatable, just as the definition of spam is debatable.
If what you're excluding is political speech of a sort, then someone is going to be able to accuse the system of censorship, no matter where the line is drawn. It's inherent in the system's goals.
Personally, I think that's a point where any system is going to be rightfully challenged and examined critically, but I think it's still worth doing on a greater-good basis. Spam filtering works; it makes my inbox a more tolerable place. It's a good idea to extend it to other types of crappy speech that make other people's lives worse.
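To make the analogy concrete, here's a minimal sketch of what extending the spam-filtering recipe to comments could look like (Python with scikit-learn). It's purely illustrative: the training comments and labels below are invented, and a real system would need far more data plus human review of the borderline cases.

    # Toy sketch: a Naive Bayes "crappy speech" filter, built the same way
    # as a classic bag-of-words spam filter. All training examples are invented.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    train_comments = [
        "I'm not convinced that you can attribute those causes to that effect",
        "Interesting point, but the data seems to say otherwise",
        "DIE YOU ASSHOLE YOU'RE WRONG I'LL KILL YOU",
        "die in a fire you fucking asshole",
    ]
    train_labels = ["ok", "ok", "abusive", "abusive"]

    # Word counts + Naive Bayes: the same recipe early spam filters used.
    clf = make_pipeline(CountVectorizer(), MultinomialNB())
    clf.fit(train_comments, train_labels)

    # Flag new comments whose predicted probability of abuse crosses a threshold.
    new_comments = ["you're wrong, and here's why", "go die you asshole"]
    for comment, probs in zip(new_comments, clf.predict_proba(new_comments)):
        p_abusive = dict(zip(clf.classes_, probs))["abusive"]
        flag = "FILTER" if p_abusive > 0.8 else "keep"
        print(f"{flag:6}  p(abusive)={p_abusive:.2f}  {comment}")

The edge cases mentioned above fall out of the threshold choice: set it low and civil dissent starts getting caught, set it high and plenty of abuse gets through.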
I have just noticed that in many online communities, you couldn't easily filter on extremism because, by the community's values, expressing the wrong opinion is "trolling" while responding to that opinion with "die in a fire you fucking asshole" is a top-rated comment. I don't even think these are outliers; a lot of communities are run like this.
Also, this comment itself might be a false positive in some places, because I included extreme text as an example.
Your point is very well taken. The choice of whose community values to use is an important one. If your goal is to exclude some actually popular but perhaps embarrassing opinions, then you will likely need to incorporate some of the values of other communities where this sort of speech would be filtered.
Applying r/SRS standards to r/RedPill would be an extreme example of this, and a good thought experiment. Could you model what kind of karma score a RedPill post would get on SRS? How would it change what a discussion page looked like?
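A rough sketch of how you might actually run that thought experiment, again in Python with scikit-learn. The voting history and the "foreign" post below are placeholders (you'd use real comment/karma pairs from one community and real posts from the other); the point is the shape of the exercise, not the numbers.

    # Toy sketch of the cross-community thought experiment: fit a karma
    # predictor on one community's (comment, score) history, then apply it
    # to a post written for a very different community. All data is invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline

    # Placeholder history standing in for scraped r/SRS comments and karma.
    srs_comments = [
        "wow, people actually upvoted this elsewhere",
        "this is exactly the kind of post we collect here",
        "can't believe this needs saying, but that take is awful",
        "another day, another terrible thread",
    ]
    srs_karma = [40, 35, 25, 30]  # made-up scores

    model = make_pipeline(TfidfVectorizer(), Ridge())
    model.fit(srs_comments, srs_karma)

    # Placeholder standing in for the text of a top-rated r/RedPill post.
    foreign_post = ["text of a typical top-rated r/RedPill post would go here"]
    print(f"predicted karma under SRS-trained model: {model.predict(foreign_post)[0]:.0f}")

With almost no shared vocabulary between the two communities, the model mostly falls back toward its intercept, which is itself a useful answer: a scoring model trained on one community's values has very little to say about the other's posts, and a discussion page reranked by it would look close to arbitrary.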
China has developed a lot of technology and expends a lot of man-hours censoring what it considers to be offensive opinions, and yet, every day, people find new and interesting ways to express those same offensive opinions.
It's as much of a lost cause as DRM. You can't silence people until and unless you're willing to kill them, and possibly not even then.