
I guess you could argue Facebook should be shut down as Parler was. Although Parler was only used for hate, Facebook didn't do anything to stop it either.

I think a big issue is how Facebook's algorithm (and others') is built for maximum profit, which at the same time radicalizes people in their filter bubbles.




> Although Parler was only used for hate

I encourage everyone to keep an eye out for these casual little calumnies, presented without evidence.


Right. I used Parler to do such things as follow EWTN. I stopped using Twitter for this sort of thing because I figured that -- with EWTN's stance on the trans issue -- it was only a matter of time before they were banned (there had been talk of banning JK Rowling for similar tweets at the time, so it's a reasonable fear). I joined Parler to not have to worry about this. This constant belittling of anyone who does not completely toe the line of whatever ethical system has taken over Twitter -- classifying people like me, a brown man with immigrant parents, as a white supremacist -- is scary, dystopian, and insane.


As a counterexample, you claim to have joined Parler because Twitter wouldn't let you hate transsexuals openly? This is supposed to support the premise that Parler had a use for something other than to amplify hate speech?


This is where it gets interesting and the devil really is in the detail on this one. If you define hate speech according to this:

The International Covenant on Civil and Political Rights (ICCPR) states that "any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility, or violence shall be prohibited by law".

You are saying that this group cannot broadcast a message if that message tells transsexual people that they cannot be transsexual and be approved of by this group's god. Yet broadcasting such messages is a core part of their religion. So are you now discriminating against that religion in an effort to avoid discriminating against transsexual people?

If that is so, then your comment could be classed as hate speech. I don't think it should be, but I do think this highlights the risks and the need to limit what we consider hate speech, and that erring on the side of free speech is the safer option.


People have used their religions to give a pass to discrimination for a long time[1]. Today's scapegoat is trans people, but a few decades ago it was miscegenation. People discriminated against mixed-race couples and said it was a core tenet of their religion[1]:

> Almighty God created the races white, black, yellow, malay and red, and he placed them on separate continents. And but for the interference with his arrangement there would be no cause for such marriages. The fact that he separated the races shows that he did not intend for the races to mix.

Surely if we could handle situations like the above that were leveled against mixed-race couples half a century ago, we can handle the same being leveled at trans people today.

When it comes to free speech and freedom of religion, the ACLU[2] has this to say:

> Instances of institutions and individuals claiming a right to discriminate in the name of religion are not new. In the 1960s, we saw objections to laws requiring integration in restaurants because of sincerely held beliefs that God wanted the races to be separate. We saw religiously affiliated universities refuse to admit students who engaged in interracial dating. In those cases, we recognized that requiring integration was not about violating religious liberty; it was about ensuring fairness. It is no different today.

[1] https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=55...

[2] https://www.aclu.org/issues/religious-liberty/using-religion...


This is the paradox of tolerance: unlimited tolerance leads to the disappearance of tolerance. A tolerant society must draw lines or end up overrun. Taleb has an interesting perspective on it as well: https://medium.com/incerto/the-most-intolerant-wins-the-dict...


Great article; the 3% rule explains a lot of the dysfunction in US politics, with fringe actors on the right (Tea Party, QAnon) and left (AOC, Antifa). The need for solidarity in either party allows for the embrace of ideas/ideals the majority would reject if not for the need to "band together on party lines".


This is a bad faith argument, or at least just a bad argument in general. Denouncing transphobia has absolutely nothing to do with said transphobe's religion and everything to do with denouncing hateful rhetoric. It's a leap in logic to assume otherwise.

What's that common Christian refrain? Love the sinner, hate the sin? By your same logic, "love the sinner, hate the sin" is hate speech despite not attacking the person, but their harmful views and actions.


Not to be tHaT gUy, but Christians aren't supposed to hate the sin in others. It's not our business, but God's.

>Do not judge, and you will not be judged. Do not condemn, and you will not be condemned. Forgive, and you will be forgiven.

https://www.biblegateway.com/passage/?search=Luke+6%3A37-42

Edited to add: I don't think there should be a religious exemption to hate speech.


So, what's the end game here? Can I not ever talk about my opinion to anybody? Can I share it with my wife? Can my wife look at what's going on in the world and disapprove, or is that not allowed? Are people who believe as I do simply to not exist? Should we be exterminated? Should our lives simply be made so miserable that we'd rather we were killed in Gulags? Exactly what are you attempting to accomplish here? Because right now, I am not allowed to talk with people on Facebook (group bans); I can't talk on Reddit (subreddit bans); I can't talk in church (lockdowns). As we saw with Parler (which we were promised was safe, because now the central authority was not someone who didn't like me), I also can't make my own app or service with my tech skills, because they'll be banned too. I can talk with my wife and family, so I guess I'll just keep on breeding to make more people to talk to, until you decide that the school system ought to be used to take my children, nieces, and nephews away, as we've seen in other western countries already, or until we're not allowed to have kids, as has been proposed by some journals and implemented in other countries (to much applause, mind you).

That sounds like a really great world you have planned for us. I am so glad my parents fled a third world country where our ethnic group made us be treated like trash to escape to this! This is exactly what they had in mind when they left :)

I guess I just don't really get it. I've accepted that my ideology has lost; all I ask is to be left alone to work, garden, own my home, have my children, raise my children, and let them do the same. I have already unregistered to vote, and have no more interest in politics, other than to complain about it online. Am I allowed to do this? What more must I do to not be considered an evil monster? Honestly, my treatment on Hacker News over the past year has been worse than the combined effect of racism growing up brown in a mostly white neighborhood during 9/11. This is insanity at every level.


Oh got it, the endgame is to downvote me into oblivion! How incredibly, incredibly childish. This country really needs to improve at the most basic human levels.


> Although Parler was only used for hate

> Right. I used Parler to do such things as <hate>. I stopped using Twitter for <hate> because I figured that ... it was only a matter of time before they were banned (for <hate>)

I know you don't think it's hate. I know you think trans rights are a subjective ethical issue.

But what if you're wrong? What if it's an objectively human rights issue? Like sexuality, and race?

I know there is still an ongoing "culture war" about these subjects in the more conservative western nations like the US and the UK. But on every socially progressive position, for the last few decades, countries like the Netherlands, Sweden, and Canada lead the way, and the US/UK eventually follows.

So my question is genuine - do you genuinely believe that you will come out ahead on this issue on a historical timeline? That the trans discussion will be nothing but a passing fad?


I am a brown man who has been subjected to many racist attacks. After 9/11 my elementary school classmates called me a terrorist. My father and I (when I was a young boy) were chased out of a car dealership by a mad salesman yelling "strawberry picker" at us. Walking through the downtown of my city with my white wife, I have been harassed by people who don't like that we are in an interracial marriage.

I still do not believe that racism or incidents of racism, including the ones I described above, are 'objectively' human rights issues. They are clearly quite subjective, since they only really exist in my experience. My religion (which most would also describe as subjective) tells me that these experiences have absolute moral character, but barring my reference to that, I cannot say for sure whether or not such discourse is objectively wrong.

I especially have trouble labeling the discourse as 'objectively' a human rights violation when none of the behavior I described above has come from anyone in any position to have power over my life or my rights.

Honestly, to continue to harp on those individuals that have mistreated me would be incredibly childish.


Do you think your personal subjective experience with tolerating racism is therefore broadly applicable to other people's subjective experience with racism?

I'm probably misunderstanding, but your post reads as "I don't call the personal racist discrimination I faced hatred or racism, therefore my own personal discrimination against transgender people is not hatred or <ism>"...


Sexual mores have been fluid throughout history. There's no particular reason to think one side is more on the "right side" than the other from an absolute sense, looking at the historical track record.

For that matter, the notion of human rights as we conceive of them is a relatively modern invention. Liberalism isn't even 400 years old yet (and the notion of universal rule of law as codified in the Magna Carta is itself only just past 800). By comparison, Ancient Egypt lasted for around 3,000 years, and Rome for 1,200. We've got a while yet before we can declare our way is any more successful than theirs.

Particularly if someone believes that it's a religious issue dictated by a deity who takes an active interest in the world, then they'll definitely believe they're on the right side of history.


Or you could argue that as long as no one is getting hurt what is ‘right’ or ‘wrong’ for other people is none of your business.


Transgenderism is not a sexual more, though.

Is the trend of human history on the scale of thousands of years relevant? Isn't that the same as saying "It's pointless to worry about climate change because the sun will explode and then there will be the heat death of the universe"?

Surely there is some evidence that society and humanity is "stabilizing" to a certain plateau of morality and ethics not driven by religion but driven by humanism?

Lastly, "Particularly if someone believes that it's a religious issue dictated by a deity who takes an active interest in the world, then they'll definitely believe they're on the right side of history." is exactly why I ask!

Because the perspective of what is acceptable from these religious institutions is rapidly changing!

50 years ago a Christian church that is "gay-friendly" would be inconceivable. Today there are many. Of course there are those that would dismiss those out of hand, but in practice the ethical positions of groups that supposedly represent eternal, immortal deities are shifting - from evolution, to birth control, to homosexuality.

I'm genuinely wondering if people believe that transgender acceptance is somehow unique.


> Is the trend of human history on the scale of thousands of years old relevant?

Yeah, because...

> Surely there is some evidence that society and humanity is "stabilizing" to a certain plateau of morality and ethics not driven by religion but driven by humanism?

My point in my previous post is that I don't think we can make that claim yet because we don't have enough data. Surely the ancient Romans and Egyptians would have believed the same, yet here we are.


Humanism is one world view. One that you can build a moral framework around. Much the same way religious people build a world view based on their religious beliefs.

It's hard to make an argument that one world view is better than another, because to do so you have to agree on how to measure the outcomes of world views. Often this is impossible, as the world views conflict.

That makes most assessments of society's progress subjective.


You realize trans people are going to go extinct, right?

It should be clear as day to anyone with even a passing interest in biology. Once we've identified exactly what genes influence transgenderism, and genetic editing of our embryos is commonplace for the populace, no one is going to allow their children to be born with a predilection for being transgender. You'd be literally putting your child at an enormous disadvantage for no discernible reason.

Look how long it's taken for gay people to be accepted by society. Not decades, but centuries. Transgender people don't have that long. In a century, we'll be engineering genetically superior superbabies. No parents in their right mind would give their children such a tremendous disadvantage in life when they could make them "normal". I suspect gay people will probably go extinct too, but one never knows with 100% certainty how these things play out.

I suspect a great many classes of people / traits will go extinct. Down's Syndrome babies will certainly be screened out of the population, because again... in a society of beautiful 200 IQ super athletes, why would you deliberately hinder your child?


You vastly underestimate the complexity of the human genome and also assume transsexuality is all genetics. Ontogenesis, epigenetics, environmental factors, ... there is a lot more on the sex-variant hardware front than genetics.

Sex is bimodal; by definition, parents can't define a "right body" in this regard.

And anyway, at that point of understanding and manipulating human biology, "people" will likely have moved beyond limiting categories such as 'sex'.


> You vastly underestimate the complexity of the human genome

We mapped it out 20+ years ago. We learn more about it each year, but what's more, the extent of our knowledge increases exponentially, not linearly.

> Ontogenesis, epigenetics, environmental factors, ... there is a lot more on sex variant hardware front than genetics.

You act like these are factors that can't be controlled, if not entirely, at least to some degree.

> at that point of understanding and manipulating human biology

We are very nearly at that point. Within anywhere from 20-30 years, maybe even less. If you think "people" will have moved beyond limiting categories in that time, you're not thinking clearly about the past, but what's worse, you're not thinking clearly about humanity.

If you think we'll end up in a Star Trek-like world before we have the ability to successfully manipulate the genome, you can just look to the fictional history of Trek for that... we were supposed to be in the middle of the Eugenics Wars right about now. Turns out they'll come later, I guess.


Mapping DNA is not understanding genes. We know very, very little about the genome. In fact we're still discovering new mechanisms of encoding and meta-encoding. We can't even say where "genes" begin and where they end.

We identified a few single genes in association with certain risks, but next to nothing is understood about the complex interactions, or about how environment and genetics make for the concrete phenotype.

Transsexuality is related to brain anatomy, which... is about the most obvious complex system in the body...

Have you had any introduction to bioinformatics at all? Your enthusiastic futurism is uncalled for.

We also thought we were at the edge of AI in the '50s, thought the mystery of the brain was about to be illuminated with the introduction of fMRI, and weather prediction is only marginally better than what farmers were capable of for thousands of years. Nobody can extensively model even a single cell today. Do you know how prostate cancer cells ultimately always evade androgen deprivation therapy? Me neither. Nobody does. All we are celebrating is "this transporter may play a role" types of "breakthrough", achieved by extensive experimentation.

Humans tend to underestimate chaotic complexity...

And again, what are you even getting at? To understand and manipulate transsexuality completely, you need to understand and manipulate sex expression completely. It's pretty telling that you assume "parents" will choose to eliminate transsexuality exclusively. Why wouldn't you rather have intersex humans? There is no inherent value to sexual dimorphism, and its evolution likely isn't related to survival but is rather costly (see birds).

Snails do fine.

Clown fish do fine.

Sex is overrated.

You just want to construct a reason to be edgy on transsexuality... don't you?


So what you're saying is that people are grooming young boys and girls to be transgender?

Sounds pretty bigoted, bro


Haven’t you seen Gattaca? There will be plenty of babies born to parents who can’t afford to genetically modify their children. We can’t even get insulin to the people who need it today.


> There will be plenty of babies born to parents who can’t afford to genetically modify their children.

Yes, and how far behind will they be left?? This is one of my primary concerns.

GATTACA tells the story of a single individual with Herculean willpower. That's simply not most of society. For every one Vincent Freeman, there are 999,999 Carls from The Breakfast Club.


Oh good, eugenics.

> Look how long its taken for gay people to be accepted by society. Not decades, but centuries. Transgender people don't have that long.

You might wanna read up on actual history before winging it. We've been around since time immemorial. Nothing new here; we've been variously celebrated, tolerated, and eradicated by societies throughout history. The last big purge was during the Holocaust. Is that the side of history you wanna be on? Oh right, the eugenics thing. I shouldn't presume.

https://en.m.wikipedia.org/wiki/Transgender_history


I've written before that there'll be a second eugenics movement. It'll likely happen when we have more knowledge about the human genome and a very easy way to edit it.

You're also assuming because I think a thing will happen, that I want it to happen. There are lots of things that I don't want to happen, that I'm almost certain will happen.


>> Although Parler was only used for hate

>I encourage everyone to keep an eye out for these casual little calumnies, presented without evidence.

Yeah. I had a Parler account. It's news to me it was used for hate!

I posted to Parler and Twitter at the same time manually. I was DMed by a Pakistani man on Parler whom I eventually hired for contract work, and he's been great; I had no idea he was a hateful person either.


It's impossible to believe that Facebook, Twitter, iPhones, and Google weren't in major ways used by the perpetrators and organizers of the assault on the Capitol to facilitate their criminal conspiracy and actions.

Selective enforcement of rules is being used against political positions and, let's face it, competing platforms.

>Although Parler was only used for hate,

I don't think that is in evidence.


Site A has a TOS that bans hate speech

Site B is just like Site A, but with less reach. However, their TOS allows hate speech as long as it's not tortious

So on Site A a comment spamming nig**r will get you banned, and on Site B it is a-ok as long as it doesn't mention anyone's name.

-

Site B should be expected to have more hate speech.

Additionally, even if Site A gets hate speech, its providers are more likely to give it leeway, since it's not by design that the hate speech appeared.

Meanwhile Site B, by the very virtue of its design, becomes a lightning rod for hate speech not allowed on Site A, and it is understandable that providers are not on board with it.


Parler does ban hate speech within the rule banning "Any direct and very personal insult with the intention of stirring and upsetting the recipient".

https://www.techdirt.com/articles/20200625/16303844790/just-...


Parler does not ban hate speech.

Parler bans "tortious" speech, meaning speech that would open someone up to a civil tort, think defamation.

In fact, Parler later switched to just using the simplified wording of "we won't be used as a tool in a civil tort"

-

My example is not tortious; it is not targeting a person in a direct and personal manner.

There is _a lot_ of hate speech that is not tortious, and does not fall under "direct and very personal"

Spamming nig**r in a public forum is upsetting to some readers, but it is not covered under the fighting words doctrine they tried to invoke, nor under their guidelines.

Saying "I hate these damn <insert race or religion> they should all go back home" is the same.

-

And like I mentioned above, your article is out of date; their latest guidelines before the site went down no longer even bothered with breaking this out, and all of what was quoted no longer existed.

Instead it was all replaced with

> Parler will not knowingly allow itself to be used as a tool for crime, civil torts, or other unlawful acts.

You can confirm this by googling this URL (which is obviously down if you try to visit it directly): "https://legal.parler.com/documents/guidelines.pdf"

The latest revision was on 12/04


Anytime you look at the trending hashtags on Twitter, they are always divisive and hateful. Even when "unify" was trending, the top tweets and top comments were the ones against unification, like "we don't unify with nazis".


What's considered a Nazi just keeps gradually expanding. Twitter seems at the point where any white male is a Nazi, and any woman who loses weight is enabling the white males and is therefore a Nazi as well.

I'm curious to see what happens when Biden refuses to meet with any of the BLM radicals and the more senior Democratic members of Congress rein in the young ones like AOC. These people have outlived their usefulness now that Trump's out, and I don't think they're going to like what comes next.


I think Facebook should be broken up, but I haven't heard any good ideas yet regarding how to somehow regulate social networks.

Beyond hate speech, I don't see any reason to limit what people can say. Somehow limiting how the network connects people, or how content propagates, seems like the key. Only FB has done the social experiments at scale to know how to engineer this, which is faintly ironic.


"Beyond hate speech" lol. That's kinda the problem, isn't it? Who gets to define "hate"? And why ban speech that is actually legal (even if highly offensive) anyway? And if it's worthy of banning on a major platform, why not just make it explicitly illegal? These are serious and important questions (among others) that seem to get conveniently glossed over.


Pick a point and draw a line in the sand. Then enforce it.

So many HN replies amount to "we all agree this is a problem, but we can't fix the entire problem perfectly, and it has some hypothetical drawbacks, so we shouldn't even try."

(Never mind that as a result of inaction in the face of disinformation and hate speech our societies are rotting from the inside, and many, many real-world atrocities are being carried out as a direct result.)

This is, by the way, a fundamentally conservative viewpoint. Cf. gun violence, homelessness, living wage, etc. Just because something is a complex issue with imperfect solutions doesn't mean we have permission to do nothing.


I tend to agree with this, but given we're discussing Myanmar here I think it's worth adding that knowing where to draw the line can get a lot more complex than deciding 'Hang Mike Pence' crosses it.

Myanmar's language and culture are completely alien to people drafting Facebook policies, driving forces behind intercommunity violence include things like [likely at least partially true] news reports of other intercommunity violence and official government statements, and then there are nuances like Burmese people seemingly accepting the false claim the ethnically-cleansed Rohingya actually were Bangladeshi regardless of where they stand on other things, and the outpouring of support for Aung San Suu Kyi after Western criticism that might have been signals that they believed the conflict was the generals' doing rather than hers or might have been mass endorsement of the government's violence. I suspect my Myanmar-based Facebook friends' one or two allusions to burning villages and politicians are probably calls for peace and meditation, but honestly, I don't know.


The other side is that Facebook shouldn't offer a service to a country/people it can't support.


Agreed. There would be a lot of benefit in countries having their own local services that understand their culture better.


> Burmese people seemingly accepting the false claim the ethnically-cleansed Rohingya actually were Bangladeshi regardless of where they stand on other things

That was largely a result of campaigning against giving rights to the Rohingya.

> the outpouring of support for Aung San Suu Kyi after Western criticism that might have been signals that they believed the conflict was the generals' doing rather than hers or might have been mass endorsement of the government's violence

Yeah, because Aung San Suu Kyi keeps denying, on live TV, that any problem exists, other than that the insurrectionists are responsible for everything that's happened thus far. The insurrectionists/terrorists, according to her, are composed of Muslim Rohingya financed by foreign "Muslim" powers.

The fact of the matter is that most power is held by the military, NOT Aung San Suu Kyi. Thus, Aung San Suu Kyi's stance on this issue is probably a result of the military's position. At any moment, the army can choose to remove her from power. Her position is that fragile.


But now you're back to square one - who defines "hate"? That's the line you're talking about. Keep in mind that in many cases, some speech you consider "hate" is totally vague, and opinions will inevitably just fall along convenient ideological lines. So, outside of some really explicit cases, it's really not definable at all.


Ideally, the definition of hate will result from a complex negotiation between stakeholders in society, just as we draw a line on who counts as an "adult", what counts as "self defense", and what counts as "libel".

The definition will be less than ideal, open to abuse, and problematic, but having it is better than not, just as having a definition of "adult", "self defense", and "libel" are better than not having them.


Why do we even need to define this at all???? We're never going to. Again, the US functioned just fine allowing "hate speech" to be legal. Would there be consequences to such speech? Sure, and deservedly. But the government couldn't do anything to you, that's the point. And I would strongly argue that Facebook, Twitter, et al have de facto replaced the government - they are a new governance for society whether we like it or not.

And why would we engage in a "complex negotiation" (that sounds to me like a euphemism for right/left extremists and massive unprecedented violence) to reach a "less than ideal" outcome over an issue that 250 years of history prove is not only unnecessary but in all likelihood extraordinarily dangerous??? Again, why was free speech the first one???? It f-ing works, that's why. It's the foundation for the best governance human beings have ever achieved. To have people now in 2021 just kinda shrug about its importance is mind-blowing to me.


Unnecessary? I think non-whites would take issue with that. Race hatred and violence certainly resulted in "extraordinarily dangerous" outcomes for them over the last 250 years.

In any event, the Internet changed things. Before, it was very difficult to light the entire country on fire.

With the Internet, and social networks in particular, that has changed. The reach is unprecedented; it is a difference of kind, not just magnitude. Anyone can reach very specific groups of people and incite hatred and violence through targeted propaganda campaigns. There's more than one reason rhetoric and mass violence have increased since the late '90s (on a national scale), but I believe this is one of the biggest contributors.


Why does there need to be a single definition? You can have a different definition of what hate is compared to me. For example, you can choose not to associate with someone because you think they're hateful, whereas I find them just fine and we're buddies.

Facebook is influential, yes, but they are still one private organization of many. Why do we need a consistent definition of hate speech between Facebook, Twitter, Reddit, etc.?


The big problem is that if you draw a line, everyone is going to toe it and try to push past it. Trump has shown that he is willing to push the boundaries of what is acceptable throughout his entire presidency.

That’s the “slippery slope” argument. If you define what’s allowed, people will ask for more, and others will push past it saying it’s not much different than previous.

And besides that, the line has been drawn many times by the Supreme Court. Hate speech is allowed by the First Amendment, but inciting violence may not be. There are "tests" for these sorts of issues that lower courts are supposed to apply.


Yes it’s a very difficult problem and one with bad solutions (see fictitious chatter below):

I like Indian curry.

Yes, Indian curry is the best curry.

Yes, Indian curry in Japan is not the same. They don’t know how to make it.

Yes, Indians are better at making curry. After all it was invented in India and we have the local spices.

Definitely, I would not buy curry in Japan unless it's made by an Indian who knows how to make it.

I mean you can see this potentially going in an unwanted direction. But is it hate?

To me hate is when you have immediate violent outcomes in mind.


"In mind" - what does this mean? Who exactly gets to define "in mind"? Like, you're now literally reading people's minds about their intentions? So I don't need to explicitly tell people to go do bad things, I can just say I'm upset about something and that's enough? I can make some bogus claim and that's enough? Because there aren't that many people spewing bullshit all day long on twitter? And let me take a wild guess on what your conclusion will be for people who say vague things on "your" side vs the "other" side...


That’s the difficulty, isn’t it? If I say, let’s go and burn that police station down now! Let’s be there at 9:15. It’s pretty clear. If you say, I wish that police station would get burned down. I don’t know. It’s not a nice thought, but you’re not actively working to have it burned down. On the other hand the wish does not have pure intentions.

This is why I think it’s impossible to “monitor” and purge or ban violent tweets or what have you.

One, what is the intention of the speaker? Two, who is responsible for the audience's reaction? Do you take the most extremely negative interpretation? It cannot work.

And that’s not even taking slang into account where violent words don’t mean violence and normal words can take on other meanings.


Context matters.

"Let’s go and burn that police station down now"

If anyone I knew said that I would treat it as a dry joke.


I guess this is the problem in identifying “shooters”.

Lots of people kid around about horrible things but don’t mean it. One day you find out that one of the kids was not joking and actually meant it. Even people who are friends of the perp have problems sussing this out and can only do it in retrospect.


What about songs like "kill the police" by ice tea? A direct call to action exists / police have been killed, but he is still free to act in police dramas?


If we took “progressive” politics out of it but retained that we need to remove “violence” from discourse, yes they would have to be banned from Twitter.

Progressive politics will say we have to consider the background of the singer and the audience as mitigating circumstance so they might let it slide.

Objectively it would have to be banned if we use Twitter’s new rubric.


(fyi the song is Cop Killer by Ice-T.)


Endlessly peppering people with questions is sometimes called Sealioning. How about engaging with the point(s) made by the person you're replying to, and offering your own suggestions for how you'd go about things. Also, the HN guidelines (link at the bottom of the page) encourage you to respond to the strongest possible interpretation of what was said, rather than the worst.


But that's the ENTIRE point here! This supposed itty-bitty exception - "Hey, just no bad stuff, okay?" - is actually EVERYTHING. You say something like "Just no hate speech" or "Only if they have violence in mind" and those statements inherently violate the very notion of free speech that people erroneously say they're in favor of, and those innocent-sounding qualifiers are why things are devolving so rapidly. We've taken a simple concept that worked brilliantly for 250 years (free speech) and in the blink of an eye, now that online life has (for better or worse) usurped the government's role in setting rules for society, we're just rearranging how the game is played. This is the broader issue: online life (for lack of a better phrase, but I hope you get my point) has become so ubiquitous that it's like some sort of alternate society with a new governance. We wouldn't/couldn't insert these qualifiers into the Constitution, but here we are giddily doing it for our new alternate world.


By all means offer your own proposals, ideally taking account of widely known priors such as the events of last Wednesday.


That's not what sealioning is.


http://wondermark.com/1k62/

Reasonable people can disagree


From Wikipedia

> Sealioning (also spelled sea-lioning and sea lioning) is a type of trolling or harassment that consists of pursuing people with persistent requests for evidence or repeated questions, while maintaining a pretense of civility and sincerity. It may take the form of "incessant, bad-faith invitations to engage in debate".

One post is not it.


It's not just one post. The same person posed something like 15 different questions in the thread. Soliciting clarity on another's views via questioning is a valid kind of discourse, but Socrates also took time to listen to answers and offer his own views rather than throwing out 5 questions in a row.

Also, if you read that Wikipedia article to the end you'll realize that I pointed you to the original use of the term.


No he didn’t. He just made that one post with the questions and you randomly accused him of sealioning. Just asking questions is not sealioning. I know where the term originated.


That's simply false. Anyone who looks back up the subthread can see for themselves that there are 9 posts from the same person and most of them are rhetorical questions. I don't know who you think you're going to convince by making such counterfactual statements, but I see no point in further discussion. Bye.


For real though, if you haven't tried Japanese curry, try to find some. It's really tasty (sweeter and not spicy) and not at all like Indian curry.

(Indian curry is good too but most people have access to try it)


Ha! I like Japanese curry too. But I meant Indian curry prepared in Japan. I did not mean Japanese curry.

My point was that people can have discussions about who or what is better, and it can diverge into other areas that could be considered hate by very sensitive people. Japan notoriously claimed, for example, that their snow is unique[1] and thus made it difficult to import European skis some time ago. Today this would be viewed as xenophobic or something, when all it was was protectionism.

[1] kinda true. It's 'wetter' than snow in some other places, but that doesn't mean Rossignol should not be sold in Japan.


What's your point? I think the line is drawn when people are put at risk of life or liberty, or have died/suffered. At that point, warning and then excising those users and groups, and reporting them to authorities (if crimes have been committed), is what's needed. And it needs to be timely.

As I understand it, Facebook, if it did anything at all, did it months/years after the fact, essentially doing CYA rather than anything meaningful.


>Who gets to define "hate"?

I really dislike this argument. A lot of democratic countries have defined hate speech. In the US, individual companies define it and moderate it as they see fit. In other countries, their legislators and courts define it. The US has already defined lots of difficult-to-define terms.

>And why ban speech that is actually legal (even if highly offensive) anyway?

I mean if we're talking about why companies should ban it, I'm sure they have a variety of reasons that range from bad PR and loss of revenue, to the founders/owners/employees don't want to build a product that's used for violence.

If we're talking about the society level, it's because it threatens democracy, peace, and individuals' safety.

>And if it's worthy of banning on a major platform, why not just make it explicitly illegal?

A lot of democratic countries already have. UK, Canada, France, Germany, New Zealand, Sweden, Switzerland (and yes, they managed to define it somehow): https://en.wikipedia.org/wiki/Hate_speech_laws_by_country


>A lot of democratic countries have defined hate speech.

They have very poor, subjective definitions that boil down to anything any group considers offensive, which is a moving target, and that make things like satire and certain forms of comedy illegal and have a chilling effect on valid criticism. It's also compelled speech, and in some cases leaves violence as the only resort, as opposed to a conversation and de-conversion from extremist beliefs. We've seen abuse of it in several cases: if no one is offended, they'll create a group or pay someone to be offended. There's no burden of proof beyond someone's emotional state upon hearing the words. You can say something to two people of a group; one might think it's funny and laugh, another might report you and call the cops.

Also, it turns out you can yell fire in a theater, especially if you at least believe it to be true, and that's something courts can't determine; lots of people say things that they believe to be true that turn out to be false. Likewise, if speech is dangerous but true, it still should be protected.

In either case, best this be settled in courts and legislation, not corporate meeting rooms that are echo chambers of opinion.

>If we're talking about at the society level, because it threatens democracy, peace, individuals safety

We already have laws against that; it's why hate speech laws are usually redundant and likely to be abused, and to scope-creep into silencing valid criticism of an individual or group of individuals, who can then be offended and have you arrested, at the very least putting you through months or years of legal trouble before you're acquitted, and that's only if you can afford a proper defense.

You people need to look at history. It boggles me how uneducated people are today on the context of this issue.

Even the person that did the Parler leak is a Meiklejohnian absolutist.


>In either case, best this be settled in courts and legislation, not corporate meeting rooms that are echo chambers of opinion.

I'm really confused by what your point is. You spend most of your post talking about how laws against hate speech are ineffectual because courts can't determine a person's emotions and beliefs, and then say it's best handled by laws and courts.


> You people need to look at history.

Yeah, Weimar Germany could teach you some lessons, apparently.


You talk a lot about theoretical outcomes, but isn’t it reasonable to look at what different systems result in empirically? Which of the aforementioned countries had a violent assault on their seat of government in recent times? Does that support your argument, or maybe there are virtues to those alternative legal frameworks?


Uh, yes? The yellow vest protests: https://www.theguardian.com/world/2018/nov/24/french-gilets-...

Some of the BLM counter-protests in the UK also became rather heated: https://www.politico.eu/article/johnson-condemns-racist-thug...

Of course the Belgium bombing was a political attack: https://www.nytimes.com/2020/11/27/world/europe/iran-disside...

Are you suggesting only the US has political violence?


If this speech threatens democracy, peace, and safety, why hasn't it been made illegal by the government, and how has the US managed to do so well for centuries with it being perfectly legal? Odd, the speech you're talking about being legal hasn't seemed to affect things much at all. Actually, we've done nothing but thrive despite hate speech being legal. Why haven't people been clamoring for decades to change the constitution because of all the mayhem caused by hate speech?


>If this speech threatens democracy, peace, and safety, why hasn't it been made illegal by the government

Because of the first amendment, and how the courts have very consistently interpreted to allow hate speech.

>and how has the US managed to do so well for centuries with it being perfectly legal?

If by "do so well for centuries" you mean the US's economic output and world status over the centuries, I would argue that profiting off of Europe rebuilding itself after two world wars probably outweighed the detrimental effects of hate speech (among probably several dozen other reasons).

If you mean "how has the US done so well handling the negative effects of hate speech for centuries without making it illegal", I would argue that hate speech has contributed to some of the most shameful and barbaric social dynamics over the centuries, and the US is historically well behind other modern countries on this front.


I've noticed a high correlation between people who callously want to ban all sorts of speech and people who just seem completely miserable and think the US is the most awful place on earth.


Are you referring to me? I don't think any of that reflects my opinions.


I would assume this is because you are a WASP with both high income and privilege levels and have never really experienced the fallout of hate speech directed towards you.


`White Anglo-Saxon Protestants`

Maliciously categorizing someone based on skin color, race, income, social status and religion in an attempt to denounce hate speech.

Poetic


You couldn't help illustrate my point any better, thanks. My opinion is invalidated by my racial/economic identity. Actually, it's worse: the opposite of what I say must be true. Brave new world.


>If this speech threatens democracy, peace, and safety, why hasn't it been made illegal by the government, and how has the US managed to do so well for centuries with it being perfectly legal?

Because we've generally dealt with that by deplatforming that sort of speech socially and/or in the private sector.

The history of the civil rights movement is filled with boycotts and other sorts of social pressure campaigns.


>how has the US managed to do so well for centuries

If you were a minority, it really has not done well at all.


You don't get to enter into debate and get away with 'disliking' others' argumentation!


That's typically the reason why I get into a debate.


Go pass the megaphone to your friends then


How would you define calling for the genocide of the Rohingya people? Would you define that as hate speech? This is an important question for right wingers to answer. Should we be tolerant of your intolerance?


Lots of democratic countries have come up with workable definitions. I propose to make hate speech illegal, so banning it won't be in any way contradictory. I am not glossing over anything.


Do you have an example of a workable definition?


> why not just make it explicitly illegal

That violates freedom of speech


Wow, it's almost like the founding fathers had a good idea when they drafted the Bill of Rights and the literal first thing that came to mind was freedom of speech.

I'm sure this experiment of ripping up the constitution in our de facto new online lives that usurp what the government is capable of doing will go just fine...


> I think Facebook should be broken up, but I haven't heard any good ideas yet regarding how to somehow regulate social networks.

I think a better idea is to force them to provide an API for interoperability, alternative clients and exporting the user data. Together with the federated social networks, it will change everything: https://the-federation.info.
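
As a rough sketch of what a mandated export/interoperability format might look like (the field names and structure here are purely hypothetical, not anything Facebook actually offers), something like a portable JSON account dump that alternative clients or federated servers could consume:

    # Hypothetical sketch of a portable user-data export: a common, documented
    # format that alternative clients and federated servers could consume.
    # All field names are invented for illustration.
    import json
    from dataclasses import dataclass, asdict, field
    from typing import List

    @dataclass
    class Post:
        id: str
        created_at: str   # ISO 8601 timestamp
        text: str
        visibility: str   # e.g. "public", "friends"

    @dataclass
    class UserExport:
        user_id: str
        display_name: str
        contacts: List[str] = field(default_factory=list)  # portable contact IDs
        posts: List[Post] = field(default_factory=list)

    def export_user(user: UserExport) -> str:
        """Serialize the account to a portable JSON document."""
        return json.dumps(asdict(user), indent=2)

    if __name__ == "__main__":
        me = UserExport("u123", "Alice",
                        contacts=["u456@example-network.org"],
                        posts=[Post("p1", "2021-01-14T12:00:00Z", "hello", "public")])
        print(export_user(me))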


I think treating Facebook like a public utility and requiring federation is akin to breaking up Ma Bell (the telephone monopoly; see: https://www.investopedia.com/ask/answers/09/att-breakup-spin...). The cost to society of phone calls dropped dramatically. It took a long time to break up the monopoly, since the government benefited (according to https://talkingpointz.com/the-breakup-of-ma-bell/).


I don't know if this is the "right" thing to do, but I am certainly in favor of hearing more future-leaning solutions versus pre-internet solutions that merely introduce friction to slow growth.


Cambridge Analytica happened because of an API and users who will click anything to learn if they're a Samantha or a Carrie.


Does it mean we should keep technology backwards because a stupid minority will misuse it? In that case, should we also ban cash machines, since idiots may withdraw all their cash & throw it to the wind?


> stupid minority

It's a stupid majority.


No, it's not because of an API, it's because of the monopoly of Facebook. Without the monopoly, they would value the security of their users and would be afraid to lose them.


Cambridge Analytica happened because the same folks who fund Steve Bannon and Parler wanted it to happen.


The best argument against this is that it means your data will potentially end up splatted across even more different servers controlled by various unaccountable parties.


How? Your data will only be on server(s) which you choose. I did not suggest that everyone can download everyone's data.


If I want to chat and share posts with 3 friends who use 3 different federated nodes, my details end up on all 3 of those nodes.

Consider email as an example of a federated network: My name, email address, and text of my emails exist today on hundreds (thousands?) of different servers. If I decided I want to "delete" my name from the global email network, that's close to impossible.

The same would be true of a federated social network, but with a more intrusive and personal set of data. Users will not understand that they cannot delete their own Mastodon toots (or whatever it calls them).
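
For illustration, a minimal sketch of that fan-out, assuming a simplified ActivityPub-style delivery (the server names and helper functions are invented): once a post has been delivered, each recipient's home server holds its own copy, so deletion can only ever be a best-effort request.

    # Simplified model of federated delivery: a post is copied to every
    # recipient's home server, so "deleting" it later means asking each remote
    # server to honor a delete request. Not real ActivityPub code, just the data flow.
    from collections import defaultdict

    servers = defaultdict(dict)   # server_name -> {post_id: post_text}

    def deliver(post_id, text, author_server, recipient_servers):
        servers[author_server][post_id] = text
        for s in recipient_servers:
            servers[s][post_id] = text   # a copy now lives on each remote node

    def request_delete(post_id, reachable_servers):
        # Best effort: a remote server may ignore this or may have archived the post.
        for s in reachable_servers:
            servers[s].pop(post_id, None)

    deliver("p1", "hello fediverse", "home.example",
            ["friend-a.example", "friend-b.example", "friend-c.example"])
    request_delete("p1", ["home.example", "friend-a.example"])
    # friend-b and friend-c still hold copies:
    print({s: list(posts) for s, posts in servers.items()})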


> If I want to chat and share posts with 3 friends who use 3 different federated nodes, my details end up on all 3 of those nodes.

This is not how federation works. You can talk to any other node without having an account there.

Sure, your private messages may be delivered to your friends' nodes, but is this a big problem?

Toots are public by design. Everyone understands that and it has nothing to do with privacy.


> Sure, your private messages may be delivered to your friends' nodes, but is this a big problem?

It kinda depends on the content of those messages, doesn't it? In the context of this OP, if my messages are organising a genocide then yes it might be a pretty fricking big problem.

> Toots are public by design. Everyone understands that and it has nothing to do with privacy.

Says you. Most users will have been accustomed to centralized services where it is possible to delete a tweet and have it gone, unless somebody else has gone to special effort to archive it or screenshot or what-have-you.


Preventing them from promoting content for engagement's sake would be a good first step.

It's one thing to be a neutral platform and allow people to say what they want (even if it sometimes means things that shouldn't be said), but at least the platform here is neutral and doesn't directly profit off the bad content (or at least doesn't profit off it any more than it profits off "good" content).

It's a totally different thing where the platform starts acting as a publisher and starts promoting the bad content (since divisive/outrageous content generates the most "engagement"). Here, the platform directly profits off bad content and should be held liable for it; you shouldn't be able to have your cake and eat it too.
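
To make the neutral-platform vs. publisher distinction concrete, here is a toy sketch contrasting a chronological feed with an engagement-ranked one; the scoring weights are assumptions for illustration, not Facebook's actual ranking.

    # Toy contrast between a chronological feed and an engagement-ranked feed.
    # The weights are invented; real ranking systems are far more complex.
    posts = [
        {"id": 1, "ts": 100, "likes": 3,   "comments": 1,   "shares": 0},
        {"id": 2, "ts": 101, "likes": 250, "comments": 400, "shares": 900},  # outrage bait
        {"id": 3, "ts": 102, "likes": 12,  "comments": 2,   "shares": 1},
    ]

    def chronological(feed):
        # "Neutral platform" ordering: newest first, no editorial judgement.
        return sorted(feed, key=lambda p: p["ts"], reverse=True)

    def engagement_ranked(feed):
        # "Publisher-like" ordering: whatever drives reactions floats to the top.
        score = lambda p: p["likes"] + 3 * p["comments"] + 5 * p["shares"]
        return sorted(feed, key=score, reverse=True)

    print([p["id"] for p in chronological(posts)])      # [3, 2, 1]
    print([p["id"] for p in engagement_ranked(posts)])  # [2, 3, 1] - divisive post promoted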


At a certain level of MAU (say 100k?), every single post must be manually checked before becoming available to the public. They should be checked for a) child porn, b) violence, c) doxxing.

They are free to run algos to auto-reject or sort in whatever order.

This would likely be too burdensome, and the socials would almost certainly have to start charging for access, which in and of itself would probably remove much of the problem.
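
A bare-bones sketch of that hold-until-reviewed flow (the 100k threshold and the check categories come from the comment above; everything else is an assumption):

    # Sketch of hold-until-reviewed publishing above some user threshold.
    # Nothing becomes public until a human clears it; automated filters may reject early.
    REVIEW_THRESHOLD_MAU = 100_000

    pending, public = [], []

    def auto_filter(post):
        # Placeholder for the "algos to auto-reject" mentioned above.
        banned_markers = ("doxx:", "violence:")
        return not post.lower().startswith(banned_markers)

    def submit(post, site_mau):
        if site_mau < REVIEW_THRESHOLD_MAU:
            public.append(post)      # small sites publish immediately
        elif auto_filter(post):
            pending.append(post)     # held for a human moderator
        # else: silently rejected by the automated pass

    def human_approve(post):
        if post in pending:
            pending.remove(post)
            public.append(post)

    submit("hello world", site_mau=2_000_000)
    human_approve("hello world")
    print(public)   # ['hello world']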


> I havent heard any good ideas yet regarding how to somehow regulate social networks.

* Break them up, and keep them small enough so that no one site can do that much damage.

* Remove safe harbor provisions on algorithmic content feeds.

* Pass a transaction tax on ad auctions to discourage free-for-attention business models.


I recall when I was going through a bout of insomnia in college, being upset by the quality of the ads on late night TV. I had been taught about what was then known as targeted advertising, and I thought that meant that all those inane ads were being specifically targeted at insomniacs. Like they had decided I was some sort of moron who would fall for this stuff. Didn't take me too long to realize the low quality ads were simply targeting the low-quality time slots. Nothing more sinister than cheap people being cheap.

But with real-time auctions, though, there are whole swaths of people whom the quality ads have decided are a waste of time, and now the hyenas can circle in to prey on everything that is left.

There are some ad networks out there that try to avoid the real-time auction and cross-site tracking aspects of these conventional ad networks. I don't know how successful they're being, but maybe they're on to something. Something that should be encouraged by public policy.


Could you clarify a bit more on these proposals?

> * Break them up... How do you ensure that all companies in a vertical will remain small? What would the legislation look like? The internet certainly enables more 'winner takes all' type markets, so not sure how you consistently keep social networks small when everyone will naturally gravitate to those that have the best UX + network effects (there are surely other variables, just listing 2 as an example)

> * Remove safe harbor provisions... What does this mean? How are algorithmic content feeds currently treated relative to non-algorithmic content feeds? How would you target the legislation to not have externalities?

> * Pass a transaction tax... Never heard of this; what does this mean? Who is getting taxed? Advertisers? Publishers? Website visitors? How will your proposal change incentives? What's your take on how smaller news sites are dependent on the ad model, and that ad models promote websites that are open to more people (as opposed to subscription businesses, which incentivize optimizing for your paying customers and giving them exactly what they want)?


1. Not really sure I agree about "winner takes all" - Facebook bought IG, Whatsapp and dozens of other companies. That's not winning based off product superiority - it's winning based off access to financing. There are plenty of ways to stop companies from ever reaching that size - Limit M&A, limit the amount of hot money used to finance growth ahead of revenue, and impose stricter regulations on larger companies - I'm talking investigating violations and prosecuting civilly and criminally - no compliance BS.

2. I'm not going to explain the Safe Harbor provision to you. Look it up. We already have externalities - removing algorithmic feeds would take (e.g.) Facebook back to where it was in the late 2000s, where users were in control of the content they saw. The only negative externality is that Facebook and other companies would make less money off targeted ads.

3. You've never heard of a transaction tax? I find that hard to believe. Why don't you think about other times you've paid tax on a transaction, and then maybe you can come up with answers to the rest of your questions.


1. Facebook already had 1 billion MAU in 2012, the year it acquired Instagram. How would you limit M&A? Won't that really hurt the startup ecosystem if you use the state to close off one of the paths to an exit? How do you propose limiting 'hot money' to finance growth ahead of revenue? Are you proposing getting rid of venture capital?

2. I'm familiar with safe harbor provisions, but am asking you for more specifics. Many sites/app personalize content (aka algorithmic content feeds) for the user, based off of what will drive engagement. I understand you dislike Facebook, but if you attack Facebook for personalization, how do you not hurt everybody else that is trying to cater experiences to what their users are likely to want?

3. What's the transaction in your proposal? Who is getting taxed? There are multiple actors in the advertising system, from advertisers to ad networks to publishers to website visitors. Your glib response is a signal to me that perhaps you haven't had much in-the-weeds exposure to how advertising functions, and may be operating from more of a mainstream mood-affiliation perspective. Which is fine, but you offered an answer to someone's query for 'good ideas' for regulation.


1. Without serious competition we will never know if their waning mainstream popularity would have diminished their financial success.

Limit M&A by denying acquisitions that would consolidate markets, like we did for a long time in this country. You don’t have to hurt startups if you have a little bit of creativity. Create different tiers of review, etc.

You cut down on hot money by reinstating the firewall between commercial banking/insurance and investment banking. It’s not getting rid of venture capital, but it does restrict the amount of money that can be levered by them.

2. I'm not worried about hurting businesses that focus on algorithmic content. If their algorithms are feeding right wing propaganda to impressionable people, I want them to go out of business.

3. An ad auction is no different than any other financial contract that gets sold on an exchange. The platform sells a contract for placement, which an advertiser buys in exchange for a promise to pay. You levy a tax on this transaction to make the overall cost of doing it more expensive and to offset the cost of externalities created. I've thought through it plenty, but since it doesn't feel like you're asking questions in good faith or doing the least bit of research, I'm not going to bother taking you particularly seriously.
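
A back-of-the-envelope sketch of how such a levy could work, assuming a second-price auction and an arbitrary 5% rate (both are assumptions for illustration, not part of the proposal's specifics):

    # Toy second-price ad auction with a transaction tax added at clearing time.
    # The 5% rate is an arbitrary assumption for illustration.
    TAX_RATE = 0.05

    def clear_auction(bids):
        """bids: {advertiser: bid in dollars}. Winner pays the second price plus tax."""
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner, _ = ranked[0]
        clearing_price = ranked[1][1]        # second-price rule
        tax = clearing_price * TAX_RATE
        return winner, clearing_price, tax

    winner, price, tax = clear_auction({"acme": 2.40, "globex": 2.10, "initech": 0.95})
    print(winner, price, tax)   # globex's bid sets the price: acme pays 2.10 plus 0.105 tax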


2. You dislike algorithms that feed right wing propaganda to people. What about left wing propaganda? What about sites that don't feed right wing propaganda? How would you craft rules that address what you see as the problem? The devil is in the details. I totally get you want to attack Facebook and its ilk, but regulation can entrench the incumbents and reduce competition if not thought through.

3. Thanks for answering the question this time. So advertisers are getting taxed, which will increase the cost of advertising for companies seeking to drive awareness for their products. Some potential effects of this:

* It raises the cost of advertising. This means that you'll need higher margins for the increased cost of advertising to make sense as an advertiser. Given the higher cost/margin requirement, you may see smaller advertisers drop off, with the potential to reduce competition in different markets. This would be a good thing for big companies/incumbents.

* Depending on the size of the tax, it could put upward pressure on prices elsewhere, as businesses attempt to maintain their margins and account for the increased marketing costs.

* Tax collection will add a new operational cost for ad networks. Google and Facebook, given their size and profitability, will be able to absorb this cost more easily than most.

* If it reduces total ad revenue for publishers, you'll likely see a bunch of smaller companies that depend on ad revenue go out of business. Smaller and local news/content sites will be hurt, leaving fewer sources and further entrenching the major news companies. Ad revenue models by their nature promote strategies to appeal to the widest audience possible, in order to maximize impressions/engagement. The alternative model in news (just to use as an example) is subscription, which doesn't have that same incentive; the incentive is to maximize paying customers. In some niche areas you will likely see higher quality (which we already see today). But some of the downsides are: less privileged people won't have access to the same information they may have had previously, and increased polarization & bubbles as publishers cater to exactly what their paying customers want to hear (and thus continue paying for).
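To put some (entirely invented) numbers on the first bullet, here's a minimal Python sketch of how a per-transaction ad tax shifts an advertiser's break-even point. The function name, tax rate, and dollar figures are illustrative assumptions, not data from this thread:

  # Rough, illustrative numbers only -- not real market data.
  def breakeven(margin_per_sale, cost_per_conversion, tax_rate):
      """Profit per converted customer once an ad-auction tax is applied."""
      taxed_cost = cost_per_conversion * (1 + tax_rate)
      return margin_per_sale - taxed_cost

  # A small advertiser clearing $10 of gross margin per sale and paying
  # $8 per conversion today:
  print(round(breakeven(10.0, 8.0, 0.00), 2))   #  2.0 -> the campaign pays for itself
  print(round(breakeven(10.0, 8.0, 0.30), 2))   # -0.4 -> the same campaign now loses money

A bigger brand with fatter margins barely notices the same tax, which is the mechanism behind the smaller-advertisers-drop-off point above.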


> I totally get you want to attack Facebook and its ilk, but regulation can entrench the incumbents and reduce competition if not thought through.

They're already entrenched. What exactly are you worried about? You are just playing devil's advocate without actually having a point.

> It raises the cost of advertising.

Huge assumption that nothing better takes the place of algorithmic CPC-based advertising, or that a marketplace based entirely on ad-driven consumption is even desirable. It's not even worth discussing the rest of the speculation, because I reject the premise. The well-being of these ad networks is nowhere near the top of my priorities list.


1. How do you plan on keeping them small enough? With mandatory federation and compliance with open standards, would this even be a good idea?

2. Sounds like a great way to encourage site moderators to bring down the ban hammer hard and fast.


It is a tough nut to crack. Social media as we know it today brings some benefits. I thank some aspects of "woke" twitter for making me realize the reality of several societal ills, pulling me out of the "theory only" political alignment that is Libertarianism (IMO). Letting everyday people, including the poor, minorities, and targeted groups, have a direct platform to share their story with the world, is a positive thing. In other words, good-faith usage of the platform.

Clearly, there is a flip side. Private Facebook groups to share hate speech or otherwise echo awful thoughts. Legions of bots controlled by political organizations or nation-states trying to divide and conquer a population. It is a losing battle to moderate these platforms.

Facebook has no inherent right to exist as it does, or rather, to make a profit as it does - and I wonder if extreme measures, such as a partial repeal of Section 230, should be considered. As with any law, small-p-progressive measures should be taken. Small and medium websites should not necessarily be held liable for content posted by others. But as a network grows to have hundreds of millions of users, perhaps they should be?

I'm not sure either.


I am not on Twitter and have deactivated my FB accounts. Not a boycott, rather an attempt at quarantine. I want to remain friends with my friends at both ends of the political spectrum.

I feel that if we collectively understood social graph theory better, we could propose tweaks to make these networks more "fair" - and by fair I don't mean with respect to a point of view, but rather less likely to create emergent, undesirable social phenomena like ISIS, genocide, or white nationalist rebellion.

But I have never seen any papers that explore these topics; then again, my interests have been elsewhere.


As they say, Libertarianism is awesome in theory and a violent authoritarian dictatorship in practice.


Here's a good idea: Poland's "freedom of speech" bill.

https://polandin.com/51388314/justice-minister-announces-onl...

  Under its provisions, social media services will not be allowed to remove content or block accounts if the content on them does not break Polish law. In the event of removal or blockage, a complaint can be sent to the platform, which will have 24 hours to consider it. Within 48 hours of the decision, the user will be able to file a petition to the court for the return of access. The court will consider complaints within seven days of receipt and the entire process is to be electronic.


I've got a reflexive "why did they implement this?" reaction and look for the downside. The law does appear good, but Poland is on a dark path with its politics and leadership.

https://www.amnesty.org/en/countries/europe-and-central-asia...


It might be a matter of perspective. Poland has seen both Communism and Nazi occupation in recent history and can tell when suppression of free speech starts being harmful, on either side of the spectrum. We here in the West tend to be a bit biased.


Poland also has elected officials instituting 'gay-free zones', and arguing that whilst Poland was progressive enough to have legalised homosexuality in 1932, people waving rainbow flags and talking about homosexuality are as dangerous as Nazis and Communists. Poland is not free from bias.


If the goal is to avoid capricious removal of content then this is great. If the goal is to avoid the platform being used for harm then this seems insufficient. The law cannot move fast enough to stay current with social trends.


Forcing them to split off Instagram and WhatsApp would be a fine start.


> only used for hate

This reminds me of "BitTorrent is only used by illegal downloaders", "bitcoin is only used by criminals" etc.


Where's the lie.


>>> only used for hate

>> This reminds me of "BitTorrent is only used by illegal downloaders", "bitcoin is only used by criminals" etc.

> Where's the lie.

Right here: https://downloads.raspberrypi.org/raspios_full_armhf/images/...

There are all sorts of legitimate use cases for torrents, including downloading large linux distro install images.


Many games have used BitTorrent to distribute their installation files as well.


It's a fantastic download protocol for anything large - such as video game files - since downloads can be "paused" and resumed later, content integrity can be checked, and so on.
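The content-integrity part is just per-piece SHA-1 hashes stored in the .torrent metainfo. As a rough sketch of what a client does (Python, stdlib only; it assumes a single-file torrent, and the hand-rolled decoder and function names are mine, not taken from any particular client):

  import hashlib

  def bdecode(data, i=0):
      """Tiny bencode decoder: returns (value, next_index)."""
      c = data[i:i+1]
      if c == b'i':                          # integer: i<digits>e
          end = data.index(b'e', i)
          return int(data[i+1:end]), end + 1
      if c in (b'l', b'd'):                  # list or dict of nested values
          items, i = [], i + 1
          while data[i:i+1] != b'e':
              v, i = bdecode(data, i)
              items.append(v)
          i += 1
          return (items, i) if c == b'l' else (dict(zip(items[::2], items[1::2])), i)
      colon = data.index(b':', i)            # byte string: <length>:<bytes>
      n = int(data[i:colon])
      return data[colon+1:colon+1+n], colon + 1 + n

  def verify(torrent_path, payload_path):
      """Check a downloaded single-file payload against the piece hashes
      in its .torrent metainfo."""
      with open(torrent_path, 'rb') as f:
          meta, _ = bdecode(f.read())
      info = meta[b'info']
      piece_len = info[b'piece length']
      hashes = info[b'pieces']               # concatenated 20-byte SHA-1 digests
      with open(payload_path, 'rb') as f:
          for off in range(0, len(hashes), 20):
              if hashlib.sha1(f.read(piece_len)).digest() != hashes[off:off+20]:
                  return False
      return True

That per-piece checking is also what makes pause/resume cheap: on restart a client can re-hash what's already on disk and fetch only the pieces that fail.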


I think these days no games use BitTorrent anymore, since it's easier to distribute your own files now, and users do not like unexpected large amounts of data usage, which can slow down their other applications and use up their data cap.


I’m genuinely interested in seeing BitTorrent traffic stats showing usage across various categories.

I’d wager that the vast majority is illegal content but wouldn’t be surprised if software updaters were a dark horse.


Maybe "parlor was only used for hate" but Parler, no. There is questionable content on Parler as much as is on Twitter, Facebook, etc.


Parlor was created in response to hate moderation on other platforms.

It's not too far off to say "Parlor was created as a space for hate". "Only used for hate" is obviously wrong, but that doesn't change much.


> Parlor was created in response to hate moderation on other platforms.

Actually most people there seem to perceive it as an alternative to the biased nature of Twitter. They didn't want to get banned on Twitter for something that's being tolerated when done by left-leaning groups.


unfortunately, "hate speech" seems to be defined as "whatever makes me upset" where "me" is a group accepted by tge platform

e.g. there are prolife groups that say nothing negative about any people, but get banned for explaining the facts behind abortion and who does it


[flagged]


sure, they might be hate speech, but seems no worse than what i see on daily kos

however, i am talking about an entirely separate group that goes out of its way to not say anything hateful and not condone violence due to tragic occurrences in the past, yet gets banned due to pointing out inconvenient and very uncomfortable truths

if free speech cannot protect uncomfortable yet unhateful speech, it is worthless


Could you provide a couple examples of the types of groups you're referring to?


yes, prolife groups

in the past abortionists have been shot due to what prolifers have said, so now they go out of their way to avoid anything hateful or violence-inducing, yet they are getting deplatformed en masse in the fallout from 6 Jan, which is entirely unrelated to their messaging


Is there an example of an anti-abortion group being deplatformed that you can point me towards? I saw Reddit banned a number of subreddits on Jan 6, but I don't specifically recall any of them looking like anti-abortion subreddits.


I am wrong about deplatforming in response to 6 Jan. However, prolife groups are regularly deplatformed without presenting misinformation or hate speech.


Can you point to one?



Liveaction hasn't been deplatformed from what I can see.

https://twitter.com/LiveAction

https://www.facebook.com/liveaction

Looking through their feed they have a real bug up their butt about Planned Parenthood.

Well, I guess they were deplatformed a bit:

https://twitter.com/LiveAction/status/1347927726150324225


Yes, the wiki page states their Pinterest account was suspended, along with other media getting dropped from other platforms. Most pro-life groups experience the same thing.


> Parlor was created in response to hate moderation on other platforms.

Again, not true.

Parler was created in response to biased moderation of other platforms and was intended to be bipartisan.


> Although parler was only used for hate

I would love to see proof of that completely unsupportable statement. Obviously all social media sites are used for hate. Maybe they were used a higher percentage of the time, I don’t know. But your statement does nothing to add to the discourse and does a considerable amount to harm it.


Which is exactly the problem with trying to put a box around these things and classify tools as the problem. The problem is people, not platforms, and trying to sort platforms into "full of hate" and "not full of hate" - especially with some sort of coherent, crossable line that gets you shut down - is never going to work, let alone work equitably.


Even if you wanted to, Facebook can't be de-platformed. They are entrenched in the internet: they own their own data centres, act as their own domain registrar, and are members of ICANN.


Ultimately if someone with guns and tanks decides to deplatform you, it’s happening unless you can get your own guys with guns and tanks to protect you.


Anything can be de-platformed.


ISPs and other networks could in theory refuse to peer with them.

But from what we’ve seen in the past their behaviour would have to be especially egregious for this to be even contemplated.


Also, our economy might not be able to handle the spike in productivity if it were to go away.


Market entrenchment is an unfortunate side effect. The oligopolistic old guard can weather or ignore such criticism of themselves and lobby for their own Section 230 protections, while using those protections to eliminate small market entrants.


"Parlor [sic] was only used for hate"

What does this mean exactly? All the content on Parler was hateful? Or something else?


The distinction is that Facebook is failing in its moderation efforts while Parler prohibits itself from even trying. There's also the fact that Facebook has such an enormous footprint of positive or neutral content. It would be like shutting down air travel because some planes crash.


Is this not an absurd standard considering section 230 exists? Where the govt literally tells platforms to not moderate? From my understanding Parler has been taking down explicit calls for violence but the bar is just high.


> Is this not an absurd standard considering section 230 exists? Where the govt literally tells platforms to not moderate?

Section 230 does not "literally [tell] platforms to not moderate". It removes some degree of liability for content not produced by the host itself.

If I, on Twitter, make a false statement (libel) against someone, Twitter is not liable for it (I am). Now, Twitter could remove that false statement (on its own or directed by a court as part of or after a suit against me by the injured party). Whether they need to remove it in order to avoid liability would depend on various other circumstances. For instance, if my content is actually illegal (by some measure) then Twitter could actually be legally obligated under other rules. But they remain free of liability for the post itself so long as they (eventually) comply with the law and remove it. But if they permit the illegal content to remain, then they could eventually become liable for it (per other statutes).

Moderation is, as a consequence of the laws in place, both permitted and occasionally mandated.


> From my understanding Parler has been taking down explicit calls for violence but the bar is just high.

Amazon pulled the plug specifically because they were not doing so.

https://www.gadgetsnow.com/tech-news/read-amazons-letter-to-...


You can't really be using a letter from the company that dropped them as a source. Of course they are going to say all of this. Twitter just recently let "hang Mike Pence" trend for a while until it was taken down. Did they act fast enough? This is all subject to interpretation. We just had a summer of riots that caused quite a bit of destruction; did they act fast enough there? Again, there isn't a science to this - it's highly discretionary.


> You can't really be using a letter from the company that dropped them as a source.

Sure you can. That letter almost certainly was vetted by Amazon's legal department.

Not everything is a grand conspiracy.


To be fair though, Facebook is now a lot more aggressive at removing radical political groups/content.


Facebook is not co-operating with the genocide investigation: https://thediplomat.com/2020/08/how-facebook-is-complicit-in...


Should we shut down radio too, since radio is frequently used in guerilla warfare?

The technology isn't the problem - the people feeling violent are. How very American it is that we want to immediately stamp out dissenting voices rather than actually solve the problem these people have (whatever it may be).


I don't really care to try to "solve the problem" that white supremacists have.


And that’s how things escalate.


Some of the "problems" people have aren't solvable. There are people who are just racist or sexist etc. There are certainly a lot of problems these groups have which we can and should solve: poverty, drug addiction, lack of social mobility, education. But sometimes the core issue really is pure unadulterated hate. What then?


That’s a very simplistic view, and false reductionism.

You make no effort to understand their grievances or hear them out, so their problems never get solved.

By reducing people to simple labels, you might be able to deal with them in your own mind with a reduced cognitive load, but this leads to further alienation and eventually a full breakdown of coexistence.

These people are here. They have problems with how things are run. Deny them the soap box and you push them to the ammo box. I hope I never see this in my beloved country.


Yeah, we should just reinstate slavery already so the white supremacists won't be so angry all the time.

/s


How does this help the conversation? Are you engaging me in good faith? If not, why?


So what is your alternative? Do we need to bend over backwards to appease the guys in "six million wasn't enough" shirts? How much political legitimacy should we give to people who are all-in on ridiculous conspiracy theories? How much attention do Holocaust deniers deserve? Do flat-earthers deserve equal time in geography class?


So you are lumping several groups together who have nothing to do with one another except the fact that you disagree with them.

So the first step in deescalation would be to understand their grievances. What are the big issues they care about? Why do they care about them? Is there a common ground?

The more you call regular Americans - whose parents and grandparents fought and died fighting Germany - Nazis, the more you show that you not only don't understand them and their views, but that you have no interest in learning about their views.

Remember what happened last time there was taxation without representation?

The more you sum up large swaths of people with simple labels, the more you show your own bigotry and lack of empathy.

First try to understand, then find common ground. These people are here to stay; we can either cooperate with them or fight them. Is the best-armed group of people on the planet the one you want to corner into armed conflict?


I’m speaking more broadly. I don’t feel inclined to solve ISIS’ problems either but they’re not going to go away unless we sit down with them.

Hardline stances seem nice until people get hurt and progress isn’t made.


Shut down by whom? Parler's vendors dropped them; they weren't shut down by the feds. Facebook's vendors are free to take a stand and drop their biggest customer, sure. But Facebook rolls its own infrastructure, so it would take federal overreach to shut them down. This ain't China, however.


There has been a persistent campaign by people to get Facebook to take responsibility for the consequences of allowing this sort of thing - in this instance, they have served the same role that certain radio stations did during the genocide in Rwanda.

None of this is news to folks who have been grappling with the responsibilities of social media companies and the tension inherent in grass-roots organising: often positive, as we've seen in the likes of the Arab Spring or Hong Kongers agitating for their rights, but capable of pivoting into these kinds of actions.

The sudden interest in this mostly ends up looking like concern trolling from people who have been comfortable enjoying the dominance of "their side" in the political process for decades; "their side" in the overwhelming media presence directly controlled by, or underwritten by, the Mercers and Murdochs; the think tanks underwritten by the Kochs; and so on.

I find it hard to grant much credibility to folks who have, for example, seemed comfortable with the idea that bakers should be able to turn away wedding cakes because of the sexual orientation of the purchaser, or that employers ought to be able to refuse contraceptives to employees because of the religious opinions of the owners, but who are now suddenly affecting concern for a slow-motion genocide in a country they likely couldn't have located on a map yesterday.


The difference is that no single entity can realistically "shut down" Facebook, since it likely runs its own servers and has multiple back ends in case one goes down.


Also remember that Facebook is (partly?) to blame for the outcome of the 2016 US Presidential Election.


Wasn't Parlor created explicitly because Facebook and Twitter started fact-checking content on their platforms, and white supremacists were feeling unjustly targeted simply because their posts were being flagged as nonfactual?

It's not like they were going to start checking their sources or having any introspection. That's for wimps and losers.


Parler was used to sell t-shirts by fake trump accounts.


Parlor refused to moderate. Facebook at least makes an honest attempt to.

And in case you haven't noticed, governments across the world are going after Facebook. Zuckerberg has been questioned in Washington and Brussels.


> Although parlor was only used for hate,

There's your difference. Facebook is a network for a couple of billion people to communicate over, which incidentally serves to amplify some of their hate. Parler is a niche network providing a safe space for stuff too extreme for the likes of Facebook.

(and no, Parler shouldn't be forcibly 'shut down' either, but it's unsurprising businesses don't want to transact with it)



