I guess you could argue Facebook should be shut down as Parler was. Although Parler was supposedly only used for hate, Facebook didn't do anything to stop it either.
I think a big issue is how Facebook's algorithms (and others) are built for maximum profit, which at the same time radicalizes people inside their filter bubbles.
Right. I used Parler to do such things as follow EWTN. I stopped using Twitter for this sort of thing because I figured that -- with EWTN's stance on the trans issue -- it was only a matter of time before they were banned (there had been talk of banning JK Rowling for similar tweets at the time, so it's a reasonable fear). I joined Parler to not have to worry about this. This constant belittling of anyone who does not completely toe the line of whatever ethical system has taken over Twitter to classify people like me -- a brown man with immigrant parents -- as white supremacists is scary, dystopian, and insane.
As a counterexample you claim to have joined Parler because Twitter wouldn't let you hate transsexuals openly? This is supposed to support the premise that Parler had a use for something other than to amplify hate speech?
This is where it gets interesting and the devil really is in the detail on this one.
If you define hate speech according to this:
The International Covenant on Civil and Political Rights (ICCPR) states that "any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility, or violence shall be prohibited by law".
You are saying that this group cannot broadcast a message if that message tells transsexual people that they cannot be transsexual and be approved of by this group's god. Yet broadcasting such messages is a core part of their religion. So are you now discriminating against that religion in an effort to avoid discriminating against transsexual people?
If that is so, then your comment could be classed as hate speech. I don't think it should be, but I do think this highlights the risks and the need to limit what we consider hate speech, and that erring on the side of free speech is the safer option.
People have used their religions to give a pass to discrimination for a long time[1]. Today's scapegoat is trans people, but a few decades ago it was miscegenation. People discriminated against mixed-race couples and said it was a core tenet of their religion[1]:
> Almighty God created the races white, black, yellow, malay and red, and he placed them on separate continents. And but for the interference with his arrangement there would be no cause for such marriages. The fact that he separated the races shows that he did not intend for the races to mix.
Surely if we could handle situations like the above that were leveled against mixed-race couples half a century ago, we can handle the same being leveled at trans people today.
When it comes to free speech and freedom of religion, the ACLU[1] has this to say:
> Instances of institutions and individuals claiming a right to discriminate in the name of religion are not new. In the 1960s, we saw objections to laws requiring integration in restaurants because of sincerely held beliefs that God wanted the races to be separate. We saw religiously affiliated universities refuse to admit students who engaged in interracial dating. In those cases, we recognized that requiring integration was not about violating religious liberty; it was about ensuring fairness. It is no different today.
This is the paradox of tolerance: unlimited tolerance leads to the disappearance of tolerance. A tolerant society must draw lines or end up overrun. Taleb has an interesting perspective on it as well: https://medium.com/incerto/the-most-intolerant-wins-the-dict...
Great article; the 3% rule explains a lot of the dysfunction in US politics, with fringe actors on the right (Tea Party, QAnon) and left (AOC, Antifa). The need for solidarity in either party allows for the embrace of ideas/ideals the majority would reject if not for the need to "band together on party lines".
This is a bad faith argument, or at least just a bad argument in general. Denouncing transphobia has absolutely nothing to do with said transphobe's religion and everything to do with denouncing hateful rhetoric. It's a leap in logic to assume otherwise.
What's that common Christian refrain? Love the sinner, hate the sin? By your same logic, "love the sinner, hate the sin" is hate speech despite not attacking the person, but their harmful views and actions.
So, what's the end game here? Can I not ever talk about my opinion to anybody? Can I share it with my wife? Can my wife look at what's going on in the world and disapprove, or is that not allowed? Are people who believe as I do simply to not exist? Should we be exterminated? Should our lives simply be made so miserable that we'd rather be killed in gulags? Exactly what are you attempting to accomplish here? Because right now, I am not allowed to talk with people on Facebook (group bans); I can't talk on Reddit (subreddit bans); I can't talk in church (lockdowns). As we saw with Parler (which we were promised was safe, because now the central authority was not someone who didn't like me), I also can't make my own app or service with my tech skills, because they'll be banned too. I can talk with my wife and family, so I guess I'll just keep on breeding to make more people to talk to, until you decide that the school system ought to be used to take my children, nieces, and nephews away, as we've seen in other western countries already, or until we're not allowed to have kids, as has been proposed by some journals and implemented in other countries (to much applause, mind you).
That sounds like a really great world you have planned for us. I am so glad my parents fled a third world country where our ethnic group made us be treated like trash to escape to this! This is exactly what they had in mind when they left :)
I guess I just don't really get it. I've accepted that my ideology has lost; all I ask is to be left alone to work, garden, own my home, have my children, raise my children, and let them do the same. I have already unregistered to vote, and have no more interest in politics, other than to complain about it online. Am I allowed to do this? What more must I do to not be considered an evil monster? Honestly, my treatment on Hacker News over the past year has been worse than the combined effect of racism growing up brown in a mostly white neighborhood during 9/11. This is insanity at every level.
Oh got it, the endgame is to downvote me into oblivion! How incredibly, incredibly childish. This country really needs to improve at the most basic human levels.
> Right. I used parler to do such things as <hate>. I stopped using Twitter for <hate> because I figured that ... it was only a matter of time before they were banned (for <hate>)
I know you don't think it's hate. I know you think trans rights are a subjective ethical issue.
But what if you're wrong? What if it's an objectively human rights issue? Like sexuality, and race?
I know there is still an ongoing "culture war" about these subjects in the more conservative western nations like the US and the UK. But on every socially progressive position for the last few decades, countries like the Netherlands, Sweden, and Canada lead the way, and the US/UK eventually follow.
So my question is genuine - do you genuinely believe that you will come out ahead on this issue on a historical timeline? That the trans discussion will be nothing but a passing fad?
I am a brown man who has been subject to many racist attacks. After 9/11 my elementary school classmates called me a terrorist. My father and I (when I was a young boy) were chased out of a car dealership by a mad salesman yelling "strawberry picker" at us. Walking through the downtown of my city with my white wife, I have been harassed by people who don't like that we are in an interracial marriage.
I still do not believe that racism or incidents of racism, including the ones I described above, are 'objectively' human rights issues. They are clearly quite subjective, since they only really exist in my experience. My religion (which most would also describe as subjective) tells me that these experiences have absolute moral character, but barring my reference to that, I cannot say for sure whether or not such discourse is objectively wrong.
I especially have trouble labeling the discourse as 'objectively' a human rights violation when none of the behavior I described above has come from anyone in any position to have power over my life or my rights.
Honestly, to continue to harp on those individuals that have mistreated me would be incredibly childish.
Do you think your personal subjective experience with tolerating racism is therefore broadly applicable to other people's subjective experience with racism?
I'm probably misunderstanding but your post reads as "I don't call personal racist discrimination I faced hatred or racism, therefore my own personal discrimination of transgender people is not hatred or <ism>"...
Sexual mores have been fluid throughout history. There's no particular reason to think one side is more on the "right side" than the other from an absolute sense, looking at the historical track record.
For that matter, the notion of human rights as we conceive of them is a relatively modern invention. Liberalism isn't even 400 years old yet (and the notion of universal rule of law as codified in the Magna Carta is itself only just past 800). By comparison, Ancient Egypt lasted for around 3,000 years, and Rome for 1,200. We've got a while yet before we can declare our way is any more successful than theirs.
Particularly if someone believes that it's a religious issue dictated by a deity who takes an active interest in the world, then they'll definitely believe they're on the right side of history.
Is the trend of human history on the scale of thousands of years relevant? Is that any different from saying "It's pointless to worry about climate change because the sun will explode and then there will be the heat death of the universe"?
Surely there is some evidence that society and humanity is "stabilizing" to a certain plateau of morality and ethics not driven by religion but driven by humanism?
Lastly, "Particularly if someone believes that it's a religious issue dictated by a deity who takes an active interest in the world, then they'll definitely believe they're on the right side of history." is exactly why I ask!
Because the perspective of what is acceptable from these religious institutions is rapidly changing!
50 years ago, a Christian church that is "gay-friendly" would have been inconceivable. Today there are many. Of course there are those who would dismiss them out of hand, but by default the ethical positions of groups that supposedly represent eternal, immortal deities are shifting: from evolution, to birth control, to homosexuality.
I'm genuinely wondering if people believe that transgender acceptance is somehow unique.
> Is the trend of human history on the scale of thousands of years old relevant?
Yeah, because...
> Surely there is some evidence that society and humanity is "stabilizing" to a certain plateau of morality and ethics not driven by religion but driven by humanism?
My point in my previous post is that I don't think we can make that claim yet because we don't have enough data. Surely the ancient Romans and Egyptians would have believed the same, yet here we are.
Humanism is one world view. One that you can build a moral framework around. Much the same way religious people build a world view based on their religious beliefs.
It's hard to make an argument that one world view is better than another. Because to do so you have to agree how to measure the outcomes of world views. Often this is impossible as the world views conflict.
That makes most assessments of a society's progress subjective.
You realize trans people are going to go extinct right?
It should be clear as day to anyone with even a passing interest in biology. Once we've identified exactly what genes influence transgenderism, and genetic editing of our embryos is commonplace for the populace, no one is going to allow their children to be born with a predilection for being transgender. You'd be literally putting your child at an enormous disadvantage for no discernible reason.
Look how long it's taken for gay people to be accepted by society. Not decades, but centuries. Transgender people don't have that long. In a century, we'll be editing genetically superior superbabies. No parent in their right mind would give their children such a tremendous disadvantage in life when they could make them "normal". I suspect gay people will probably go extinct too, but one never knows with 100% certainty how these things play out.
I suspect a great many classes of people / traits will go extinct. Down's Syndrome babies will certainly be screened out of the population, because again... in a society of beautiful 200 IQ super athletes, why would you deliberately hinder your child?
You vastly underestimate the complexity of the human genome and also assume transsexuality is *all* genetics. Ontogenesis, epigenetics, environmental factors... there is a lot more on the sex-variant hardware front than genetics.
Sex is bimodal, by definition parents can't define a "right body" in this regard.
And anyway, at that point of understanding and manipulating human biology, "people" will likely have moved beyond limiting categories such as 'sex'.
> You vastly underestimate the complexity of the human genome
We mapped it out 20+ years ago. We learn more about it each year, but what's more, the extent of our knowledge increases exponentially, not linearly.
> Ontogenesis, epigenetics, environmental factors, ... there is a lot more on sex variant hardware front than genetics.
You act like these are factors that can't be controlled, if not entirely, then at least to some degree.
> at that point of understanding and manipulating human biology
We are very nearly at that point. Within anywhere from 20-30 years, maybe even less. If you think "people" will have moved beyond limiting categories in that time, you're not thinking clearly about the past, but what's worse, you're not thinking clearly about humanity.
If you think we'll end up in a Star Trek-like world before we have the ability to successfully manipulate the genome, you can just look to the fictional history of Trek for that... we were supposed to be in the middle of the Eugenics Wars right about now. Turns out they'll come later, I guess.
Mapping DNA is not understanding genes. We know very, very little about the genome. In fact we're still discovering new mechanisms of encoding and meta-encoding. We can't even say where "genes" begin and where they end.
We identified a few single genes in *association* with certain risks, but literally nothing is understood about complex interactions and how environment and genetics make for the concrete phenotype.
Transsexuality is related to brain anatomy, which... is about the most obvious complex system in the body...
Have you had any introduction to bioinformatics at all? Your enthusiastic futurism is uncalled for.
We also thought we were at the edge of AI in the 50s, thought the mystery of the brain was about to be illuminated with the introduction of fMRI, and weather prediction is only marginally better than what farmers were capable of for thousands of years. Nobody can extensively model even a single cell today. Do you know how prostate cancer cells ultimately always evade androgen deprivation therapy? Me neither. Nobody does. All we are celebrating is "this transporter may play a role" types of "breakthrough", achieved by extensive experimentation.
Humans tend to underestimate chaotic complexity...
And again, what are you even getting at? To understand and manipulate transsexuality completely, you need to understand and manipulate sex expression completely. It's pretty telling that you assume "parents" will choose to eliminate transsexuality exclusively. Why wouldn't you rather have intersex humans? There is no inherent value to sexual dimorphism, and sexual evolution likely isn't related to survival but rather costly (see birds).
Snails do fine.
Clown fish do fine.
Sex is overrated.
You just want to construct a reason to be edgy on transsexuality... don't you?
Haven’t you seen Gattaca? There will be plenty of babies born to parents who can’t afford to genetically modify their children. We can’t even get insulin to the people who need it today.
> There will be plenty of babies born to parents who can’t afford to genetically modify their children.
Yes, and how far behind will they be left?? This is one of my primary concerns.
GATTACA tells the story of a single individual with Herculean willpower. That's simply not most of society. For every one Vincent Freeman, there are 999,999 Carls from The Breakfast Club.
> Look how long its taken for gay people to be accepted by society. Not decades, but centuries. Transgender people don't have that long.
You might wanna read up on actual history before winging it. We've been around since time immemorial. Nothing new here, we've been variously celebrated, tolerated, and eradicated by societies throughout history. The last big purge was during the holocaust. Is that the side of history you wanna be on? Oh right, the eugenics thing. I shouldn't presume.
I've written before that there'll be a second eugenics movement. It'll likely happen when we have more knowledge about the human genome and a very easy way to edit it.
You're also assuming because I think a thing will happen, that I want it to happen. There are lots of things that I don't want to happen, that I'm almost certain will happen.
>I encourage everyone to keep an eye out for these casual little calumnies, presented without evidence.
Yea. I had a Parler account. It's news to me it was used for hate!
I posted to Parler and Twitter at the same time manually. I was DM'ed by a Pakistani man on Parler that I eventually hired for contract work and he's been great, I had no idea he was a hateful person either.
It's impossible to believe that Facebook, Twitter, iPhones, and Google weren't in major ways used by the perpetrators and organizers of the assault on the Capitol to facilitate their criminal conspiracy and actions.
Selective enforcement of rules is being used against political positions and, let's face it, competing platforms.
Site B is just like Site A, but with less reach. However, its TOS allows hate speech as long as it's not tortious.
So on Site A a comment spamming nig**r will get you banned, and on Site B it is a-ok as long as it doesn't mention anyone's name.
-
Site B should be expected to have more hate speech.
Additionally, even if Site A gets hate speech, its providers are more likely to give it leeway since it's not by design that hate speech appeared.
Meanwhile Site B, by the very virtue of its design, becomes a lightning rod for hate speech not allowed on Site A, and it is understandable that providers are not on board with it.
Parler bans "tortious" speech, meaning speech that would open someone up to a civil tort, think defamation.
In fact, Parler later switched to just using the simplified wording of "we won't be used as a tool in a civil tort"
-
My example is not tortious; it is not targeting a person in a direct and personal manner.
There is *a lot* of hate speech that is not tortious and does not fall under "direct and very personal".
Spamming nig**r in a public forum is upsetting to some readers, but not covered under the fighting words doctrine they tried to invoke nor their guidelines.
Saying "I hate these damn <insert race or religion> they should all go back home" is the same.
-
And like I mentioned above, your article is out of date, their latest guidelines before going down no longer even bothered with breaking this out and all of what was quoted no longer existed.
Instead it was all replaced with
> Parler will not knowingly allow itself to be used as a tool for crime, civil torts, or other unlawful acts.
Anytime you look at the trending hashtags on Twitter, they are always divisive and hateful. Even when "unify" was trending, the top tweets and top comments were the ones against unification, like "we don't unify with nazis".
What's considered a Nazi just keeps gradually expanding. Twitter seems at that point where any white male is a Nazi, and any woman that loses weight is enabling the white males so are therefore Nazis as well.
I'm curious to see what happens when Biden refuses to meet with any of the BLM radicals and the more senior Democratic members of Congress rein in the young ones like AOC. These people have outlived their usefulness now that Trump's out, and I don't think they're going to like what comes next.
I think Facebook should be broken up, but I haven't heard any good ideas yet regarding how to somehow regulate social networks.
Beyond hate speech, I don't see any reason to limit what people can say. Somehow limiting how the network connects people, or how content propagates, seems like the key. Only FB has done the social experiments at scale to know how to engineer this, which is faintly ironic.
"Beyond hate speech" lol. That's kinda the problem, isn't it? Who gets to define "hate"? And why ban speech that is actually legal (even if highly offensive) anyway? And if it's worthy of banning on a major platform, why not just make it explicitly illegal? These are serious and important questions (among others) that seem to get conveniently glossed over.
Pick a point and draw a line in the sand. Then enforce it.
So many HN replies amount to "we all agree this is a problem, but we can't fix the entire problem perfectly, and it has some hypothetical drawbacks, so we shouldn't even try."
(Never mind that as a result of inaction in the face of disinformation and hate speech our societies are rotting from the inside, and many, many real-world atrocities are being carried out as a direct result.)
This is, by the way, a fundamentally conservative viewpoint. Cf. gun violence, homelessness, living wage, etc. Just because something is a complex issue with imperfect solutions doesn't mean we have permission to do nothing.
I tend to agree with this, but given we're discussing Myanmar here I think it's worth adding that knowing where to draw the line can get a lot more complex than deciding 'Hang Mike Pence' crosses it.
Myanmar's language and culture are completely alien to the people drafting Facebook policies. Driving forces behind intercommunity violence include things like [likely at least partially true] news reports of other intercommunity violence and official government statements. And then there are nuances like Burmese people seemingly accepting the false claim that the ethnically cleansed Rohingya actually were Bangladeshi regardless of where they stand on other things, and the outpouring of support for Aung San Suu Kyi after Western criticism, which might have been a signal that they believed the conflict was the generals' doing rather than hers, or might have been mass endorsement of the government's violence. I suspect my Myanmar-based Facebook friends' one or two allusions to burning villages and politicians are probably calls for peace and meditation, but honestly, I don't know.
> Burmese people seemingly accepting the false claim the ethnically-cleansed Rohingya actually were Bangladeshi regardless of where they stand on other things
That was largely a result of campaigning against giving rights to the Rohingya.
> the outpouring of support for Aung San Suu Kyi after Western criticism that might have been signals that they believed the conflict was the generals' doing rather than hers or might have been mass endorsement of the government's violence
Yeah, because Aung San Suu Kyi keeps denying, on live TV, that any problem exists other than that the insurrectionists are responsible for everything that's happened thus far. The insurrectionists/terrorists, according to her, are composed of Muslim Rohingya financed by foreign "Muslim" powers.
The fact of the matter is that most power is held by the military, NOT Aung San Suu Kyi. Thus, Aung San Suu Kyi's stance on this issue is probably a result of the military's position. At any moment, the army can choose to remove her from power. Her position is that fragile.
But now you're back to square one: who defines "hate"? That's the line you're talking about. Keep in mind that in many cases, some speech you consider "hate" is totally vague, and opinions will inevitably just fall along convenient ideological lines. So, outside of some really explicit cases, it's really not definable at all.
Ideally, the definition of hate will result from a complex negotiation between stakeholders in society, just as we draw a line on who counts as an "adult", what counts as "self defense", and what counts as "libel".
The definition will be less than ideal, open to abuse, and problematic, but having it is better than not, just as having a definition of "adult", "self defense", and "libel" are better than not having them.
Why do we even need to define this at all???? We're never going to. Again, the US functioned just fine allowing "hate speech" to be legal. Would there be consequences to such speech? Sure, and deservedly. But the government couldn't do anything to you, that's the point. And I would strongly argue that Facebook, Twitter, et al have de facto replaced the government - they are a new governance for society whether we like it or not.
And why would we engage in a "complex negotiation" (that sounds to me like a euphemism for right/left extremists and massive unprecedented violence) to reach a "less than ideal" outcome over an issue that 250 years of history prove is not only unnecessary but in all likelihood extraordinarily dangerous??? Again, why was free speech the first one???? It f-ing works, that's why. It's the foundation for the best governance human beings have ever achieved. To have people now in 2021 just kinda shrug about its importance is mind-blowing to me.
Unnecessary? I think non-whites would take issue with that. Race hatred and violence certainly resulted in "extraordinarily dangerous" outcomes for them over the last 250 years.
In any event, the Internet changed things. Before, it was very difficult to light the entire country on fire.
With the Internet, and social networks in particular, that has changed. The reach is unprecedented; it is a difference of kind, not just magnitude. Anyone can reach very specific groups of people and incite hatred and violence through targeted propaganda campaigns. There's more than one reason rhetoric and mass violence has increased since the late '90s (on a national scale), but I believe this is one of the biggest contributors.
Why does there need to be a single definition? You can have a different definition of what hate is compared to me. For example, you can choose not to associate with someone because you think they're hateful, whereas I find them just fine and we're buddies.
Facebook is influential, yes, but they are still one private organization of many. Why do we need a consistent definition of hate speech between Facebook, Twitter, Reddit, etc.?
The big problem is that if you draw a line, everyone is going to toe it and try to push past it. Trump has shown that he is willing to push the boundaries of what is acceptable his entire presidency.
That’s the “slippery slope” argument. If you define what’s allowed, people will ask for more, and others will push past it saying it’s not much different than previous.
And besides that, the line has been drawn many times by the Supreme Court. Hate speech is allowed by the First Amendment, but inciting violence may not be. There’s “tests” for these sorts of issues that lower courts are supposed to apply.
"In mind" - what does this mean? Who exactly gets to define "in mind"? Like, you're now literally reading people's minds about their intentions? So I don't need to explicitly tell people to go do bad things, I can just say I'm upset about something and that's enough? I can make some bogus claim and that's enough? Because there aren't that many people spewing bullshit all day long on twitter? And let me take a wild guess on what your conclusion will be for people who say vague things on "your" side vs the "other" side...
That’s the difficulty, isn’t it? If I say, let’s go and burn that police station down now! Let’s be there at 9:15. It’s pretty clear. If you say, I wish that police station would get burned down. I don’t know. It’s not a nice thought, but you’re not actively working to have it burned down. On the other hand the wish does not have pure intentions.
This is why I think it’s impossible to “monitor” and purge or ban violent tweets or what have you.
One, what is the intention of the speaker? Two, who is responsible for the audience's reaction? Do you take the most extremely negative interpretation? It cannot work.
And that’s not even taking slang into account where violent words don’t mean violence and normal words can take on other meanings.
I guess this is the problem in identifying “shooters”.
Lots of people kid around about horrible things but don’t mean it. One day you find out that one of the kids was not joking and actually meant it. Even people who are friends of the perp have problems sussing this out and can only do it in retrospect.
What about songs like "Cop Killer" by Ice-T? A direct call to action exists and police have been killed, but he is still free to act in police dramas?
If we took “progressive” politics out of it but retained that we need to remove “violence” from discourse, yes they would have to be banned from Twitter.
Progressive politics will say we have to consider the background of the singer and the audience as mitigating circumstance so they might let it slide.
Objectively it would have to be banned if we use Twitter’s new rubric.
Endlessly peppering people with questions is sometimes called Sealioning. How about engaging with the point(s) made by the person you're replying to, and offering your own suggestions for how you'd go about things. Also, the HN guidelines (link at the bottom of the page) encourage you to respond to the strongest possible interpretation of what was said, rather than the worst.
But that's the ENTIRE point here! This supposed itty-bitty exception - "Hey, just no bad stuff, okay?" - is actually EVERYTHING. You say something like "Just no hate speech" or "Only if they have violence in mind" and those statements inherently violate the very notion of free speech the people are erroneously saying they're in favor of, and those innocent-sounding qualifiers are why things are devolving so rapidly. We've taken a simple concept that worked brilliantly for 250 years (free speech) and in the blink of an eye, now that online life has (for better or worse) usurped the government's role in setting rules for society, we're just rearranging how the game is played. This is the broader issue, online life (for lack of a better phrase, but I hope you get my point) has become so ubiquitous that it's like some sort of alternate society with a new governance. We wouldn't/couldn't insert these qualifiers into the constitution, but here we are giddily doing it for our new alternate world.
> Sealioning (also spelled sea-lioning and sea lioning) is a type of trolling or harassment that consists of pursuing people with persistent requests for evidence or repeated questions, while maintaining a pretense of civility and sincerity. It may take the form of "incessant, bad-faith invitations to engage in debate".
It's not just one post. The same person posed something like 15 different questions in the thread. Soliciting clarity on another's views via questioning is a valid kind of discourse, but Socrates also took time to listen to answers and offer his own views rather than throwing out 5 questions in a row.
Also, if you read that Wikipedia article to the end you'll realize that I pointed you to the original use of the term.
No he didn’t. He just made that one post with the questions and you randomly accused him of sealioning. Just asking questions is not sealioning. I know where the term originated.
That's simply false. Anyone who looks back up the subthread can see for themselves that there are 9 posts from the same person and most of them are rhetorical questions. I don't know who you think you're going to convince by making such counterfactual statements, but I see no point in further discussion. Bye.
Ha! I like Japanese curry too. But I meant Indian curry prepared in Japan. I did not mean Japanese curry.
My point was that people can have discussions about who or what is better, and it can diverge into other areas that could be considered hate by very sensitive people. Japan, for example, notoriously claimed that its snow is unique[1] and thus made it difficult to import European skis some time ago. Today this would be viewed as xenophobic or something, when it was really just protectionism.
[1] Kinda true. It's 'wetter' than snow in some other places, but that doesn't mean Rossignol should not be sold in Japan.
What's your point? I think the line is drawn when people are put at risk of life or liberty or have died/suffered. At that point, warning then excising those users and groups, reporting them to authorities (if crimes have been committed) is what's needed. And it needs to be timely.
As I understand it, Facebook, if it did anything at all, did it months/years after the fact, essentially doing CYA rather than anything meaningful.
I really dislike this argument. A lot of democratic countries have defined hate speech. In the US, individual companies define it and moderate it as they see fit. In other countries, their legislators and courts define it. The US has defined lots of difficult terms to define already.
>And why ban speech that is actually legal (even if highly offensive) anyway?
I mean if we're talking about why companies should ban it, I'm sure they have a variety of reasons that range from bad PR and loss of revenue, to the founders/owners/employees don't want to build a product that's used for violence.
If we're talking about the societal level: because it threatens democracy, peace, and individuals' safety.
>And if it's worthy of banning on a major platform, why not just make it explicitly illegal?
>A lot of democratic countries have defined hate speech.
They have very poor, subjective definitions that boil down to anything any group considers offensive, which is a moving target. They can make things like satire and certain forms of comedy illegal and have a chilling effect on valid criticism. It's also compelled speech, and in some cases leaves violence as the only resort, as opposed to conversation and de-conversion from extremist beliefs. We've seen abuse of it in several cases; if no one is offended, they'll create a group or pay someone to be offended. There's no burden of proof beyond someone's emotional state upon hearing the words. You can say something to two people of a group: one might think it's funny and laugh, another might report you and call the cops.
Also, it turns out you can yell fire in a theater, especially if you at least believe it to be true, and that's something courts can't determine; lots of people say things they believe to be true that turn out to be false. Likewise, if speech is dangerous but true, it still should be protected.
In either case, best this be settled in courts and legislation, not corporate meeting rooms that are echo chambers of opinion.
>If we're talking about at the society level, because it threatens democracy, peace, individuals safety
We already have laws against that; it's why hate speech laws are usually redundant, likely to be abused, and prone to scope creep into silencing valid criticism of an individual or group of individuals, who can then claim offense and have you arrested, at the very least putting you through months or years of legal trouble before you're acquitted, and that's only if you can afford a proper defense.
You people need to look at history. It boggles my mind how uneducated people are today on the context of this issue.
Even the person who did the Parler leak is a Meiklejohnian absolutist.
>In either case, best this be settled in courts and legislation, not corporate meeting rooms that are echo chambers of opinion.
I'm really confused by what your point is. You spend most of your post talking about how laws against hate speech are ineffectual because courts can't determine a person's emotions and beliefs, and then say it's best handled by laws and courts.
You talk a lot about theoretical outcomes, but isn’t it reasonable to look at what different systems result in empirically? Which of the aforementioned countries had a violent assault on their seat of government in recent times? Does that support your argument, or maybe there are virtues to those alternative legal frameworks?
If this speech threatens democracy, peace, and safety, why hasn't it been made illegal by the government, and how has the US managed to do so well for centuries with it being perfectly legal? Odd, the speech you're talking about being legal hasn't seemed to affect things much at all. Actually, we've done nothing but thrive despite hate speech being legal. Why haven't people been clamoring for decades to change the constitution because of all the mayhem caused by hate speech?
>If this speech threatens democracy, peace, and safety, why hasn't it been made illegal by the government
Because of the first amendment, and how the courts have very consistently interpreted it to allow hate speech.
>and how has the US managed to do so well for centuries with it being perfectly legal?
If by "do so well for centuries" you mean the US's economic output and world status over the centuries, I would argue that profiting off of Europe rebuilding itself after two world wars probably outweighed the detrimental effects of hate speech (among probably several dozen other reasons).
If you mean "how has the US done so well handling the negative effects of hate speech for centuries without making it illegal", I would argue that hate speech has contributed to some of the most shameful and barbaric social dynamics over the centuries, and the US is historically well behind other modern countries on this front.
I've noticed a high correlation between people who callously want to ban all sorts of speech and people who just seem completely miserable and think the US is the most awful place on earth.
I would assume this is because you are a WASP with both high income and privilege levels and have never really experienced the fallout of hate speech directed towards you.
You couldn't help illustrate my point any better, thanks. My opinion is invalidated by my racial/economic identity. Actually, it's worse: the opposite of what I say must be true. Brave new world.
>If this speech threatens democracy, peace, and safety, why hasn't it been made illegal by the government, and how has the US managed to do so well for centuries with it being perfectly legal?
Because we've generally dealt with that by deplatforming that sort of speech socially and/or in the private sector.
The history of the civil rights movement is filled with boycotts and other sorts of social pressure campaigns.
How would you define calling for the genocide of the Rohingya people? Would you define that as hate speech? This is an important question for right wingers to answer. Should we be tolerant of your intolerance?
lots of democratic countries have come up with workable definitions. I propose to make hate speech illegal, so banning it won't be in any way contradictory. I am not glossing over anything.
Wow, it's almost like the founding fathers had a good idea when they drafted the bill of rights, and the literal first thing that came to mind was freedom of speech.
I'm sure this experiment of ripping up the constitution in our de facto new online lives that usurp what the government is capable of doing will go just fine...
> I think Facebook should be broken up but I havent heard any good ideas yet regarding how to somehow regulate social networks.
I think a better idea is to force them to provide an API for interoperability, alternative clients and exporting the user data. Together with the federated social networks, it will change everything: https://the-federation.info.
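To make the interoperability idea concrete, here is a minimal sketch of what a mandated data-export endpoint might emit. Everything here is hypothetical and illustrative - the field names, the `Post` shape, and the JSON layout are assumptions, not any real platform's API; the point is only that a portable, machine-readable format is what lets alternative clients and federated networks consume the data.

```python
import json
from dataclasses import dataclass, asdict
from typing import List

# Hypothetical sketch of a portable export format a platform could be
# required to expose. Field names are illustrative, not any real API.

@dataclass
class Post:
    id: str
    author: str
    created_at: str  # ISO 8601 timestamp
    text: str

def export_user_data(username: str, posts: List[Post]) -> str:
    """Serialize a user's posts into a portable JSON document that an
    alternative client or another network could import."""
    return json.dumps(
        {"user": username, "posts": [asdict(p) for p in posts]},
        indent=2,
    )

posts = [Post("1", "alice", "2021-01-10T12:00:00Z", "hello fediverse")]
doc = export_user_data("alice", posts)
print(doc)
```

A real mandate would presumably standardize on something like ActivityPub rather than an ad-hoc format, but even a dump this simple would be enough for exporting and for third-party clients to build on.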
I don't know if this is the "right" thing to do, but I am certainly in favor of hearing more forward-looking solutions versus pre-internet solutions that merely introduce friction to slow growth.
Does it mean we should keep technology backwards because a stupid minority will misuse it? In that case, should we also ban cash machines, since idiots may withdraw all their cash and throw it to the wind?
No, it's not because of an API, it's because of the monopoly of Facebook. Without the monopoly they would value the security of their users and would be afraid to lose them.
The best argument against this is that it means your data will potentially end up splatted across even more different servers controlled by various unaccountable parties.
If I want to chat and share posts with 3 friends who use 3 different federated nodes, my details end up on all 3 of those nodes.
Consider email as an example of a federated network: My name, email address, and text of my emails exist today on hundreds (thousands?) of different servers. If I decided I want to "delete" my name from the global email network, that's close to impossible.
The same would be true of a federated social network, but with a more intrusive and personal set of data. Users will not understand that they cannot delete their own mastodon toots (or whatever it calls them).
> Sure, your private messages may be delivered to your friends' nodes, but is this a big problem?
It kinda depends on the content of those messages, doesn't it? In the context of this OP, if my messages are organising a genocide then yes it might be a pretty fricking big problem.
> Toots are public by design. Everyone understands that and it has nothing to do with privacy.
Says you. Most users will have been accustomed to centralized services where it is possible to delete a tweet and have it gone, unless somebody else has gone to special effort to archive it or screenshot or what-have-you.
Preventing them from promoting content for engagement's sake would be a good first step.
It's one thing to be a neutral platform and allow people to say what they want (even if it sometimes means things that shouldn't be said), but at least the platform here is neutral and doesn't directly profit off the bad content (or at least doesn't profit off it any more than it profits off "good" content).
It's a totally different thing where the platform starts acting as a publisher and starts promoting the bad content (since divisive/outrageous content generates the most "engagement"). Here, the platform directly profits off bad content and should be held liable for it; you shouldn't be able to have your cake and eat it too.
At a certain level of MAU (say 100k?), every single post must be manually checked before becoming available to the public. Posts should be checked for a) child porn, b) violence, c) doxxing.
They would remain free to run algos to auto-reject, sort, whatever order.
This would likely be too burdensome, and the socials would almost certainly have to start charging for access, which in and of itself would probably remove much of the problem.
I recall when I was going through a bout of insomnia in college, being upset by the quality of the ads on late night TV. I had been taught about what was then known as targeted advertising, and I thought that meant that all those inane ads were being specifically targeted at insomniacs. Like they had decided I was some sort of moron who would fall for this stuff. Didn't take me too long to realize the low quality ads were simply targeting the low-quality time slots. Nothing more sinister than cheap people being cheap.
But with real-time auctions, though, there are whole swaths of people whom the quality ads have decided are a waste of time, and now the hyenas can circle in to prey on everything that is left.
There are some ad networks out there that try to avoid the real-time auction and cross-site tracking aspects of conventional ad networks. I don't know how successful they're being, but maybe they're on to something. Something that should be encouraged by public policy.
> * Break them up...
How do you ensure that all companies in a vertical will remain small? What would the legislation look like? The internet certainly enables more 'winner takes all' type markets, so not sure how you consistently keep social networks small when everyone will naturally gravitate to those that have the best UX + network effects (there are surely other variables, just listing 2 as an example)
> * Remove safe harbor provisions...
What does this mean? How are algorithmic content feeds currently treated relative to non-algorithmic content feeds? How would you target the legislation to not have externalities?
> * Pass a transaction tax
Never heard of this, what does this mean? Who is getting taxed? Advertisers? Publishers? Website visitors? How will your proposal change incentives? What's your take on how smaller news sites are dependent on the ad model and that ad models promote websites that are open to more people (as opposed to subscription businesses which incentivize optimizing for your paying customers and giving them exactly what they want)
1. Not really sure I agree about "winner takes all" - Facebook bought IG, Whatsapp and dozens of other companies. That's not winning based off product superiority - it's winning based off access to financing. There are plenty of ways to stop companies from ever reaching that size - Limit M&A, limit the amount of hot money used to finance growth ahead of revenue, and impose stricter regulations on larger companies - I'm talking investigating violations and prosecuting civilly and criminally - no compliance BS.
2. I'm not going to explain the Safe Harbor provision to you. Look it up. We already have externalities; removing algorithmic feeds would take (e.g.) Facebook back to where it was in the late 2000s, when users were in control of the content they saw. The only negative externality is that Facebook and other companies would make less money off targeted ads.
3. You've never heard of a transaction tax? I find that hard to believe. Why don't you think about other times you've paid tax on a transaction, and then maybe you can come up with answers to the rest of your questions.
1. Facebook already had 1 billion MAU in 2012, the year it acquired Instagram. How would you limit M&A? Won't that really hurt the startup ecosystem if you use the state to close off one of the paths to an exit? How do you propose limiting 'hot money' to finance growth ahead of revenue? Are you proposing getting rid of venture capital?
2. I'm familiar with safe harbor provisions, but am asking you for more specifics. Many sites/app personalize content (aka algorithmic content feeds) for the user, based off of what will drive engagement. I understand you dislike Facebook, but if you attack Facebook for personalization, how do you not hurt everybody else that is trying to cater experiences to what their users are likely to want?
3. What's the transaction in your proposal? Who is getting taxed? There are multiple actors in the advertising system, from advertisers to ad networks to publishers to website visitors. Your glib response is a signal to me that perhaps you haven't had much in the weeds exposure to how advertising functions, and may be operating from more of a mainstream mood affiliation perspective. Which is fine, but you answered a question to someone's query for 'good ideas' for regulation.
1. Without serious competition we will never know if their waning mainstream popularity would have diminished their financial success.
Limit M&A by denying acquisitions that would consolidate markets, like we did for a long time in this country. You don’t have to hurt startups if you have a little bit of creativity. Create different tiers of review, etc.
You cut down on hot money by reinstating the firewall between commercial banking/insurance and investment banking. It’s not getting rid of venture capital, but it does restrict the amount of money that can be levered by them.
2. I'm not worried about hurting businesses that focus on algorithmic content. If their algorithms are feeding right-wing propaganda to impressionable people, I want them to go out of business.
3. An ad auction is no different than any other financial contract that gets sold on an exchange. The platform sells a contract for placement, which an advertiser buys in exchange for a promise to pay. You levy a tax on this transaction to make the overall cost of doing it more expensive and to offset the cost of the externalities created. I've thought through it plenty, but since it doesn't feel like you're asking questions in good faith or doing the least bit of research, I'm not going to bother taking you particularly seriously.
2. You dislike algorithms that feed right wing propaganda to people. What about left wing propaganda? What about sites that don't feed right wing propaganda? How would you craft rules that address what you see as the problem? The devil is in the details. I totally get you want to attack Facebook and its ilk, but regulation can entrench the incumbents and reduce competition if not thought through.
3. Thanks for answering the question this time. So advertisers are getting taxed, which will increase the cost of advertising for companies seeking to drive awareness for their products. Some potential effects of this:
* It raises the cost of advertising. This means that you'll need higher margins for the increased cost of advertising to make sense as an advertiser. Given the higher cost/margin requirement, you may see smaller advertisers drop off, with the potential to reduce competition in different markets. This would be a good thing for big companies/incumbents.
* Depending on the size of the tax, it could put upward pressure on prices elsewhere, as businesses attempt to maintain their margins and account for the increased marketing costs.
* Tax collection will add a new operational cost for ad networks. Google and Facebook, given their size and profitability will be able to deal with this cost easier than most.
* If it reduces total ad revenue for publishers, you'll likely see a bunch of smaller companies go out of business that depend on ad revenue. Smaller and local news/content sites will be hurt, leaving fewer sources, and further entrenching the major news companies. Ad revenue models by their nature promote strategies to appeal to the widest audience possible, in order to maximize impressions/engagement. The alternative model in news (just to use as an example) is subscription, which doesn't have that same incentive. The incentive is to maximize paying customers. In some niche areas you may likely see higher quality (which we already see today). But some of the downsides are: less privileged people won't have access to the same information that they may have previously, increased polarization & bubbles as publishers cater to exactly what their paying customers want to hear (and thus continue paying for).
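The margin effect in the first bullet is just arithmetic, and a toy calculation makes it concrete. All numbers below are hypothetical (the revenue, ad cost, and 10% tax rate are invented for illustration); the only claim is the mechanism: a tax on ad spend raises the gross margin an advertiser needs before advertising pays for itself.

```python
# Hypothetical arithmetic: how a flat transaction tax on ad auctions
# raises the breakeven margin for an advertiser. Numbers are illustrative.

def breakeven_margin(ad_cost_per_sale: float, tax_rate: float,
                     revenue_per_sale: float) -> float:
    """Minimum gross margin (fraction of revenue per sale) needed for
    advertising to pay for itself, given a tax on the ad spend."""
    return ad_cost_per_sale * (1 + tax_rate) / revenue_per_sale

revenue_per_sale = 100.0  # revenue from one conversion (assumed)
ad_cost_per_sale = 20.0   # ad spend needed to drive that sale (assumed)

needed_no_tax = breakeven_margin(ad_cost_per_sale, 0.00, revenue_per_sale)
needed_with_tax = breakeven_margin(ad_cost_per_sale, 0.10, revenue_per_sale)

print(f"breakeven margin without tax: {needed_no_tax:.0%}")    # 20%
print(f"breakeven margin with 10% tax: {needed_with_tax:.0%}")  # 22%
```

A business clearing a 21% margin advertises profitably in the untaxed world and not in the taxed one, which is exactly the smaller-advertiser drop-off the bullet describes.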
> I totally get you want to attack Facebook and its ilk, but regulation can entrench the incumbents and reduce competition if not thought through.
They're already entrenched. What exactly are you worried about? You are just playing devil's advocate without actually having a point.
> It raises the cost of advertising.
Huge assumption that nothing better takes the place of algorithmic CPC-based advertising, or that a marketplace based entirely on ad-driven consumption is even desirable. It's not even worth discussing the rest of the speculation, because I reject the premise. The well-being of these ad networks is nowhere near the top of my priorities list.
It is a tough nut to crack. Social media as we know it today brings some benefits. I thank some aspects of "woke" twitter for making me realize the reality of several societal ills, pulling me out of the "theory only" political alignment that is Libertarianism (IMO). Letting everyday people, including the poor, minorities, and targeted groups, have a direct platform to share their story with the world, is a positive thing. In other words, good-faith usage of the platform.
Clearly, there is a flipside. Private Facebook groups to share hate speech or otherwise echo awful thoughts. Legions of bots controlled by political organizations or nation-states trying to divide and conquer a population. It is a losing battle to moderate these platforms.
Facebook has no inherent right to exist as it does, or rather, make a profit as it does - and I wonder if extreme measures, such as a partial repeal of section 230, should be considered. Like all laws, small-p-progressive measures should be taken. Small and medium websites should not necessarily be held liable for content posted by others. But as a network grows to have hundreds of millions of users, perhaps they should?
I am not on Twitter and have deactivated my FB accounts. Not a boycott, rather an attempt at quarantine. I want to remain friends with my friends at both ends of the political spectrum.
I feel that if we collectively understood social graph theory better, we could propose tweaks to make these networks more "fair" - and by fair I don't mean with respect to a point of view, but rather less likely to create emergent, undesirable social phenomena like ISIS, genocide, or white nationalist rebellion.
But I have never seen any papers that explore these topics; then again, my interests have been elsewhere.
Under its provisions, social media services will not be allowed to remove content or block accounts if the content on them does not break Polish law. In the event of removal or blockage, a complaint can be sent to the platform, which will have 24 hours to consider it. Within 48 hours of the decision, the user will be able to file a petition to the court for the return of access. The court will consider complaints within seven days of receipt and the entire process is to be electronic.
I've got a reflexive "why did they implement this?" and look for the downside. The law does appear good, but Poland is on a dark path with its politics and leadership.
It might be a matter of perspective. Poland has seen both Communism and Nazi occupation in recent history and can tell when suppression of free speech starts being harmful, on either side of the spectrum. We here in the West tend to be a bit biased.
Poland also has elected officials instituting 'gay free zones', and arguing that whilst Poland is progressive enough to have legalised homosexuality in 1932, people waving rainbow flags and talking about homosexuality are as dangerous as Nazis and Communists. Poland is not free from bias.
If the goal is to avoid capricious removal of content then this is great. If the goal is to avoid the platform being used for harm then this seems insufficient. The law cannot move fast enough to stay current with social trends.
It's a fantastic download protocol for anything large, has the possibility of being "paused" or resumed later, allows for content integrity checking, etc... such as video game files.
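The "content integrity checking" part works because the .torrent metainfo carries a SHA-1 hash for every fixed-size piece of the payload; a client re-hashes each piece as it arrives and discards any that don't match, which is also what makes pause/resume safe. Here is a small sketch of that mechanism (the piece size and data are made up for illustration; real clients use pieces of e.g. 256 KiB or more):

```python
import hashlib

# Sketch of BitTorrent-style piece verification: the metainfo carries a
# SHA-1 digest per fixed-size piece, and the client checks each downloaded
# piece against it. Piece size and payload here are toy values.

PIECE_SIZE = 4  # bytes; real torrents use much larger pieces

def piece_hashes(data: bytes) -> list:
    """The per-piece digests the publisher would embed in the metainfo."""
    return [hashlib.sha1(data[i:i + PIECE_SIZE]).digest()
            for i in range(0, len(data), PIECE_SIZE)]

def verify_piece(index: int, piece: bytes, expected: list) -> bool:
    """True if a downloaded piece matches the published digest."""
    return hashlib.sha1(piece).digest() == expected[index]

original = b"game-data-blob!!"
expected = piece_hashes(original)

assert verify_piece(0, original[0:4], expected)   # intact piece passes
assert not verify_piece(1, b"XXXX", expected)     # corrupted piece fails
```

Because verification is per piece, a client can resume a partial download by re-hashing what it already has on disk and only fetching the pieces that fail.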
I think these days no games use BitTorrent anymore, since it's easier to distribute your own files now, and users do not like unexpected large amounts of data usage, which can slow down their other applications and use up their data caps.
> Parlor was created in response to hate moderation on other platforms.
Actually most people there seem to perceive it as an alternative to the biased nature of Twitter. They didn't want to get banned on Twitter for something that's being tolerated when done by left-leaning groups.
sure, they might be hate speech, but seems no worse than what i see on daily kos
however, i am talking about an entirely separate group that goes out of its way to not say anything hateful and not condone violence due to tragic occurrences in the past, yet gets banned due to pointing out inconvenient and very uncomfortable truths
if free speech cannot protect uncomfortable yet unhateful speech, it is worthless
in the past abortionists have been shot due to what prolifers have said, so now they go out of their way to avoid anything hateful or violence-inducing, yet they are getting deplatformed en masse in the fallout from 6 Jan, which is entirely unrelated to their messaging
Is there an example of an anti-abortion group being deplatformed that you can point me towards? I saw Reddit banned a number of subreddits on Jan 6, but I don't specifically recall any of them looking like anti-abortion subreddits.
I am wrong about deplatforming in response to 6 Jan. However, prolife groups are regularly deplatformed without presenting misinformation or hate speech.
Yes the wiki page states their Pinterest account was suspended, along with other media getting dropped from other platforms. Most pro life groups experience the same thing.
I would love to see proof of that completely unsupportable statement. Obviously all social media sites are used for hate. Maybe they were used a higher percentage of the time, I don’t know. But your statement does nothing to add to the discourse and does a considerable amount to harm it.
Which is exactly the problem with trying to put a box around these things and classify tools as the problem. The problem is people, not platforms, and trying to sort platforms into "full of hate" and "not full of hate" - especially with some sort of coherent, crossable line that gets you shut down - is never going to work, let alone work equitably.
Even if you wanted to, Facebook can't be de-platformed. They are entrenched in the internet: they own their own data centres, are themselves a domain registrar, and are part of ICANN.
Ultimately if someone with guns and tanks decides to deplatform you, it’s happening unless you can get your own guys with guns and tanks to protect you.
Market entrenchment is an unfortunate side effect. The oligopolistic old guard can weather or ignore such criticism against themselves and lobby for their own Section 230 protections, while using such to eliminate small market entrants.
The distinction is that Facebook is failing in its moderation efforts while Parler prohibits itself from even trying. There's also the fact that Facebook has such an enormous footprint of positive or neutral content. It would be like shutting down air travel because some planes crash.
Is this not an absurd standard considering section 230 exists? Where the govt literally tells platforms to not moderate? From my understanding Parler has been taking down explicit calls for violence but the bar is just high.
> Is this not an absurd standard considering section 230 exists? Where the govt literally tells platforms to not moderate?
Section 230 does not "literally [tell] platforms to not moderate". It removes some degree of liability for content not produced by the host itself.
If I, on Twitter, make a false statement (libel) against someone, Twitter is not liable for it (I am). Now, Twitter could remove that false statement (on its own or directed by a court as part of or after a suit against me by the injured party). Whether they need to remove it in order to avoid liability would depend on various other circumstances. For instance, if my content is actually illegal (by some measure) then Twitter could actually be legally obligated under other rules. But they remain free of liability for the post itself so long as they (eventually) comply with the law and remove it. But if they permit the illegal content to remain, then they could eventually become liable for it (per other statutes).
Moderation is, as a consequence of the laws in place, both permitted and occasionally mandated.
You can't really be using a letter from the company that dropped them as a source. Of course they are going to say all of this. Twitter just recently let "hang Mike Pence" trend for a while until it was taken down. Did they act fast enough? This is all subject to interpretation. We just had a summer of riots that caused quite a bit of destruction; did they act fast enough there? Again, there isn't a science to this; it's highly discretionary.
Should we shut down radio too, since radio is frequently used in guerilla warfare?
The technology isn’t the problem - the people feeling violent is. How very American is it that we want to immediately stamp out dissenting voices rather than actually solving the problem that these people disagree with (whatever it may be).
Some of the "problems" people have aren't solvable. There are people who are just racist or sexist etc. There are certainly a lot of problems these groups have which we can and should solve: poverty, drug addiction, lack of social mobility, education. But sometimes the core issue really is pure unadulterated hate. What then?
That’s a very simplistic view, and false reductionism.
You take no efforts to understand their grievances, hear them out, so their problems never get solved.
By reducing people to simple labels, it might be able to deal with them in your own mind with a reduced cognitive load, but this leads to further alienation and eventually full breakdown of coexistence.
These people are here. They have problems with how things are run. Deny them the soap box and you push them to the ammo box. I hope I never see this in my beloved country.
So what is your alternative? Do we need to bend over backwards to appease the guys in "six million wasn't enough" shirts? How much political legitimacy should we give to people who are all-in on ridiculous conspiracy theories? How much attention do Holocaust deniers deserve? Do flat-earthers deserve equal time in geography class?
So you are lumping several groups together who have nothing to do with one another except the fact that you disagree with them.
So the first step in deescalation would be to understand their grievances.
What are the big issues they care about? Why do they care about them? Is there a common ground?
The more you call regular Americans - whose parents and grandparents fought and died fighting Germany - Nazis, the more you show that you not only don't understand them and their views, but that you have no interest in learning about their views.
Remember what happened last time there was taxation without representation?
The more you sum up large swaths of people with simple labels, the more you show your own bigotry and lack of empathy.
First try to understand, then find common ground. These people are here to stay, we can either cooperate with them or fight them. Is the best armed population group on the planet the one you want to corner into armed conflict?
Shut down by whom? Parler's vendors dropped them. They weren't shut down by the feds. Facebook's vendors are free to take a stand and drop their biggest customer, sure. But Facebook rolls its own infrastructure, so it would take federal overreach to shut them down. This ain't China, however.
There has been a persistent campaign by people to get Facebook to take responsibility for the consequences of allowing this sort of thing - in this instance, they have served the same role that certain radio stations did during the genocide in Rwanda.
None of this is news to folks who have been grappling with the responsibilities of social media companies, the tension between the value of grass-roots organising (often positive, as we've seen in the likes of the Arab Spring, or Hong Kongers agitating for their rights) as it pivots into these kind of actions.
The sudden interest in this mostly ends up looking like concern trolling from people who have been comfortable enjoying the dominance of "their side" in the political process for decades; "their side" in the overwhelming media presence directly controlled by, or underwritten by, the Mercers, Murdochs; the think tanks underwritten by the Kochs, and so on.
I find it hard to grant much credibility when folks who have, for example, seemed comfortable with the idea that bakers should be able to turn away wedding cakes because of the sexual orientation of the purchaser, or that employers ought to be able to refuse contraceptives to employees because of the religious opinions of the owners, are now suddenly affecting concern for a slow-motion genocide in a country they likely couldn't have located on a map yesterday.
The difference is that no single entity can likely "shut down" Facebook, since it runs its own servers and likely has multiple back-ends in case one fails.
Wasn't Parler created explicitly because Facebook and Twitter started fact-checking content on their platforms, and white supremacists felt unjustly targeted simply because their posts were being flagged as nonfactual?
It's not like they were going to start checking their sources or having any introspection. That's for wimps and losers.
There's your difference. Facebook is a network for a couple of billion people to communicate over which incidentally serves to amplify some of their hate. Parler is a niche network to provide a safe space for stuff too extreme for the likes of Facebook.
(and no, Parler shouldn't be forcibly 'shut down' either, but it's unsurprising businesses don't want to transact with it)
I'd love Facebook to do more to prevent misuse of its platform, but I don't see why I should blame Facebook for Myanmar violence.
Facebook at its core is a tool that helps people spread information and opinions - not too different from the telephone or email. The blame for Myanmar violence should be placed squarely on the people that are spreading misinformation that incites violence. Even if Facebook does a better job policing content, don't you think these perpetrators will find an equally effective technology to spread their propaganda? If not now, then in the future surely they will?
Don't get me wrong, I do think Facebook should police clear misinformation that can lead to human suffering, but it seems like the media want to convince me to hate Facebook. I hate the people behind this misinformation. And, I would love Facebook to get involved, but I don't hate them.
Facebook recommends that users join groups and follow pages, and even shows content that is not from people you follow. So it isn't just a telephone company that allows you to make phone calls; it's a service that makes an editorial choice about what it shows you.
If they show you posts that incite violence in Myanmar and show you fake news that make you outraged about a certain ethnic group, it's possible that among a population of many millions, some people will decide to act on it. Remember, those people might never have been exposed to it, it's Facebook that recommended this content based on the engagement the posts were getting!
I'm not sure why anybody falls for the frankly absurd idea that social media sites are platforms and not publishers. They choose what to show you, just like the NYT or any other traditional publisher. The only difference is who does the choosing---algorithms or humans. But that, to me, is just a logistical detail and shouldn't have an effect on the distinction between publisher and platform. To think that Facebook and other companies act as merely connectors between viewpoints and are really just platforms for free speech is ridiculous.
From the article, no, Facebook did not "fail to prevent its platform from being used to foment division and incite offline violence;" they published content which foments division and incites offline violence. The distinction is important.
I agree with this. FB has succeeded in their marketing effort to be perceived, even by their detractors, as somehow separate from "mainstream media" when in fact they are one of the biggest media companies on the planet.
Facebook doesn't create the content that you share. Is Google a publisher because you have a Gmail account? What we are talking about, at its core, is a platform that allows you to talk to people you know or might want to know. Is it objectionable that my text messages are displayed in order of arrival time? What if I want them sorted alphabetically by contact, but the SMS app won't let me? And you can still send all the same information to people you know off of Facebook, or on people's walls, or through Messenger. Facebook doesn't create the content; you do. So how are they publishers?
Also, there is no meaningful difference between "publisher" and "platform" here[0]. So the entire conversation is built on a faulty premise.
What specific regulations does the NYT have for showing you news? What specific regulations does Google have for promoting different sites? The NYT gets to decide what they report on, they have no obligation to show you "the other side". How is that any different than the FB algo?
Now, their lack of moderation is a different discussion worth having. We can argue that it's harder to moderate at scale and be forgiving, or we can say that doesn't matter and that they still have the obligation all the same.
It doesn't matter who makes the content. Take, for example, the New Yorker, which curates short stories and poetry not written by its own authors/editors. If it were to publish something inciting violence or otherwise illegal/immoral, it couldn't just throw its hands in the air and say "it wasn't me!", either legally or socially.
Besides, I'm talking mostly on a social aspect rather than a legal one. I'm not a lawyer. But too often do we give Facebook a pass since it doesn't make the content, it only curates it. The curation is automated at a massive scale by AIs, and is done for each user, every day, but it is still undeniably curation.
An example of a platform is if the user does their own curation. E.g. old Instagram. You got everything posted by everyone you follow, in chronological order. Now Instagram will change the feed order and time-delay/shadowban posts, making it, in my view, a publisher. Your SMS example is a platform since you chose to have your SMS's sorted by time.
Well, in your case with the New Yorker, they actually review and actively publish the information. Where is the equivalent editorial review from Facebook? Where do these statuses imply that Facebook supports and endorses them?
And what is the social difference between a publisher and a platform in terms of expected social responsibility? This distinction only now exists because people attempted to use a legal definition that isn't real. So, again, I think this entire line of thinking falls flat upon inception.
> But too often do we give Facebook a pass since it doesn't make the content, it only curates it.
But that is true, no? Do we honestly believe that, between ML and real people, Facebook is validating and approving every comment on Facebook? I sure don't. Do we believe that Google is a publisher because mailing lists can exist, or because it allows an email to be sent while blocking perceived malicious emails? Is that not some form of validation that we could say makes Google the publisher of every email sent from GSuite?
I just don't agree that an algorithm is undeniably curation in the same sense as a newspaper. The process isn't the same, the intended outcome isn't the same, and the inherent approval of the content isn't the same. So while they may be alike conceptually, they aren't the same functionally, whether you measure it by the spirit or the letter of the systems.
The idea of user moderation vs. service-based moderation is purely a feature set. If we don't like that feature set, or it doesn't meet the needs of the way we use the service, then the service sucks at its job. The correct solution isn't to have all these convoluted philosophical discussions about where to draw the line. The solution is for a service that does it better to win the userbase. Today people like Facebook because it already achieved a wide enough userbase, which led to default integration into other services. It seems we are willing to trade expectations for convenience, which is an issue for the user, not the service.
Now, if we want to say that they practice anti-competitive market strategies and are monopolies, I can entertain that argument. It still does not require this discussion at all.
> they actually review and actively publish the information
They do it to save their own ass. Since they publish it, they are responsible. Facebook should either review all the content it publishes, or stop being a publisher.
> Do we honestly believe that between ML and a real person Facebook is validating and approving every comment on Facebook?
I never said that; here's my point: if, say, the New Yorker replaced their human editors with AI ones, then the outcome is the same and they are still publishers. Alternatively, say they made human-run personal curations---each subscriber gets an individually-tailored set of articles which are suggested---then they are obviously still publishers. So if they replace these human curators with AI, are they still publishers? Because that would be what Facebook is today.
And on your Gmail point, Google is not a publisher; it acts like a phone service. You send your email, and the recipient receives it, unconditionally, without modification. Facebook actively shadow-bans, time-delays, and puts fact-check warnings on posts. Since that shows they disapprove of some posts, it kind of implies that they approve of all the others.
> the New Yorker replaced their human editors with AI ones, then the outcome is the same and they are still publishers.
Okay, but being a publisher doesn't MEAN anything. Not socially, not legally. And MY point was that with the New Yorker, it's IMPLIED by the act of publishing that what they post is approved and endorsed by them.
On Facebook, where is the IMPLIED ownership of your comments? Where is the delay between your status and some form of review to endorse or support the status? That never happens, and I don't think anyone on Facebook believes that personal status updates are reflective of Facebook's ethos.
That's the chasm between a news outlet and a service built around user-generated content.
The New Yorker gets submissions or requests the use of different stories or whatever. They are EXPLICITLY seeking this information out to display on their site. Facebook does not.
> And on your Gmail point, Google is not a publisher, it acts like a phone service. You send your email, the recipient receives it, unconditionally, without modification.
Not always true. Headers can be modified, you might not have the proper records in place for people to get your emails, or Google might not like or trust your attachment. Google creates a barrier that must be overcome and can limit your direct person-to-person communication.
> Facebook actively shadow-bans, time-delays and puts fact-check warnings on posts. Since that show they disapprove of some posts, it kind of implies that they approve of all the others.
Okay, so your point is ANYONE that moderates content is now a publisher? So every BBS or forum is a publisher? Is Weebly a publisher? Is Github a publisher? I mean, I just want to understand where this line is drawn.
Yeah, I don't think Facebook, Google, Twitter, etc. get a free pass on this just because they automated the process and took humans out of the equation. They can't claim to not be responsible for their algorithms.
> So it isn't just a telephone company that allows you to make phone calls, it's a service that makes an editorial choice about what they show you.
Understood. Perhaps there's a more apt analogy. Let's say there exists a gun manufacturer who's developed a new type of gun that's particularly effective. The problem is the guns are in limited supply. The manufacturer ultimately cares about profit, so it decides to sell the guns to the highest bidder.
Most of the guns sold end up providing value to society in the form of protection. But, in a handful of cases, the highest bidder happens to be a violent regime that uses the guns for ethnic cleansing.
Do we blame the gun manufacturer or the violent regime?
From this perspective, I agree that a small part of the blame falls on the gun manufacturer for not vetting buyers.
Let's say the gun manufacturer sees a presentation given by executives at a social network: "we can create conditions that drive demand, because our social scientists have 1/4 of the planet as guinea pigs." The gun manufacturer takes out ads, and also gives money to "affiliates" like Ted Nugent to create viral content.
I am not saying that FB execs made such a presentation, nor that Ted Nugent is directly funded by the NRA, but the relationships are all there: interested parties are aware that Facebook can create results.
>Facebook recommend users to join groups, follow pages, and even shows content that is not from people you follow. So it isn't just a telephone company that allows you to make phone calls, it's a service that makes an editorial choice about what they show you.
But these are all algorithmic, aren't they? It's not like Facebook decided to manually show certain content to some people, unlike the silencing done to Trump.
Facebook employee decides what posts to show you → Facebook Inc is responsible for picking and choosing content (exercising editorial control) → Facebook is a publisher.
vs
Facebook employees write post engagement algorithm → Facebook computer program decides what posts to show you → Facebook Inc is responsible for picking and choosing content (exercising editorial control) → Facebook is a publisher.
In the first case, FB manually decides what constitutes 'good' or 'bad' content.
For example: an FB employee decides Trump's content is 'bad' and thus disallows it.
>Facebook employees write post engagement algorithm
Here, depending on how the algorithm works, it may make no judgment about whether the content is 'good' or 'bad' as long as it drives engagement. So 'good' and 'bad' content can spread equally.
If it's algorithmic, then whether Trump's content shows up doesn't depend on someone deciding it's 'good' or 'bad', but merely on the engagement metric.
I would support the latter, even if the content that finally shows up is 'bad', because there FB merely acts as a tool.
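To make that distinction concrete, here is a minimal, hypothetical sketch of a purely engagement-driven ranker. The field names and weights are invented for illustration (this is not Facebook's actual system); the point is that the ranker scores posts only on engagement signals and never inspects the content, so 'good' and 'bad' posts spread by the same metric:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Invented weights: shares and comments count more than likes.
    # Note: nothing here looks at post.text, i.e. at the content itself.
    return 1.0 * post.likes + 3.0 * post.shares + 2.0 * post.comments

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest engagement first -- no judgment of content quality.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("cat pictures", likes=50, shares=2, comments=5),
    Post("outrage bait", likes=40, shares=30, comments=60),
])
print(feed[0].text)  # the high-engagement outrage post ranks first
```

Under this sketch, an outraged-about post with fewer likes still outranks benign content because shares and comments dominate the score; no employee ever labeled it 'good' or 'bad'.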
The metrics they use for exercising editorial control do not matter for this line of logic. The fact that they are exercising editorial control, promoting some posts, censoring others, rather than neutrally carrying the content people wish to send (like a phone company or an ISP) makes them a publisher.
The alternative is to allow these companies to hide behind "but we didn't do it, the algorithm did". The algorithm is an agent of the company and they are responsible for its behavior like any other employee.
Yes, it seems like the same stochastic behavior that drives marketing conversions: as the numbers increase, the expected conversion is almost surely going to happen. But regulation and complaints about the service always focus on the micro, the anecdote, the qualitative.
Facebook is a broadcast medium, so very different than the telephone. Email is somewhere in between—it works fine 1-to-many, but not so well many-to-many.
> don't you think these perpetrators will find an equally effective technology to spread their propaganda?
Maybe they will, but that's hardly a reason to ignore these atrocities. Maybe they won't. Maybe future technologies will be more careful about encouraging people to join extreme groups because it increases engagement.
> it seems like the media want to convince me to hate Facebook
Nobody's asking you to hate Facebook, but you should be aware of the power they possess, and how irresponsible they are being with that power. You don't need to bring your personal feelings into this at all.
A lot of countries limit Facebook, and it makes sense that the attention on Facebook in the US is related to their US activities. India has responded to Facebook for their activities in India.
WhatsApp is even more dangerous in developing countries, where it is used to spread dangerous rumors at rapid scale. This has resulted in public lynchings based on false rumors.
I am from Bangladesh. I know for a fact that FB has been used (and is still being used) to incite religious extremism and hatred against religious minorities in my country. Social media has been one of the biggest causes behind the rise of religious extremism in South Asia in general. For more reference, please see: https://en.wikipedia.org/wiki/2012_Ramu_violence.
No service that Facebook depends on decided to stop doing business with Facebook the way AWS decided to stop doing business with Parler.
Parler built their infrastructure in one place, with a critical dependency on that provider. So when the provider booted them, they ended up having to shut the service down.
Apple and Google still have the Facebook app in their respective stores. Why not hold everyone to the same standards? Or are there differences I'm not seeing at a high level? Maybe sophistication/existence of content moderation is better?
Another NY Times article from August 2018 also is reluctant to call it a genocide.
>United Nations officials have raised the prospect that the violence could be considered genocide, and officials at the United States State Department have debated using the term, according to American diplomats.
It's common for American media to minimize any genocide caused by the United States or its corporations. The Holocaust gets a pass because it has a convenient scapegoat.
From my understanding, the Myanmar atrocities were mostly planned in private groups and chats by non-notable figures, with private chain messages playing a big part.
It's one thing to ban a politician inciting violence.
Do you all want FB to monitor all your private messages for keywords, then block those messages?
From what I've read, the atrocities were largely carried out by the military, who also had hundreds of people making fake Facebook accounts to spread their propaganda.
Monitoring private messages likely wasn't even possible, as Facebook had just four employees capable of understanding the language.
Mass communication seems to be the culprit. Yellow journalism had a huge impact in the US in the past, Rwanda's problems were inflamed via Radio and now social media is hurting the US and the world.
Social media is a genie that's been let out of its bottle. It's not going to go back in. I suspect the fix is to treat social media the same way radio and TV are managed. They can't and shouldn't be completely stopped, but they can be managed by federal law.
People will complain that it's against freedom of speech, but free speech does not mean reckless speech. Words have consequences, and social media gives everyone a virtual loudspeaker that amplifies ideas said without much thought behind them. You can't even know whether the person is serious or just trying to be funny. It's a problem.
"The network said it had made progress in tackling its problems in Myanmar but that there was "more to do".
Last year, the Myanmar military launched a violent crackdown in Rakhine state after Rohingya militants carried out deadly attacks on police posts.
Thousands of people died and more than 700,000 Rohingya fled to neighbouring Bangladesh. There are also widespread allegations of human rights abuses, including arbitrary killing, rape and burning of land"
I'd like to understand better someone who is commenting on this site, or this thread, and raising the question of "How can we possibly define/moderate/ban what is hate speech and what isn't?"
The implication is because this is impossible, then we shouldn't even bother, because it's a slippery slope, and leads to all sorts of repressive behaviour.
If you believe this position, I have some follow-up questions for my own curiosity:
1) Is it not true that we already have defined lines that we universally agree cannot be breached? Child pornography. Explicitly soliciting accomplices for murder. Explicitly organizing plans for murder.
2) If lines in the sand have already been defined, why not redefine them if there is new information available in our society? Please answer me without a "slippery slope" argument. That is abstract and hypothetical. Let's deal with concrete scenarios.
I believe Facebook has set a precedent it can never hope to keep up with - not so long as it keeps the algorithm on and designed to maximize engagement. It turns out terrible things are often very engaging.
What are the limits of free speech? Are we going to defend it even as the tools of it are used to orchestrate violent uprising based on false pretense and potentially destroy that very constitution that guarantees it? Are we going to abdicate it in the name of stability? There are no easy answers here. Whatever the outcome, social media no longer survives as impartial arbiters of content without moderation in the name of profits.
Voltaire was right when he said "Those who can make you believe absurdities can make you commit atrocities." And social media lets authoritarians push absurdities to millions of people. President Trump using Twitter has spread conspiracy theories more than 1700 times to his 67 million followers.
Freedom of speech is not freedom of reach. Sadly, there will always be racists, misogynists, anti-Semites, and child abusers. We should not be giving bigots and pedophiles a free platform to amplify their views and target their victims.
Zuckerberg says people should decide what's credible, not tech companies. When two-thirds of millennials have not heard of Auschwitz, how are they supposed to know what's true? There is such a thing as objective truth. Facts do exist.
Boring. Facebook has been used for every type of communication, whether it's inciting violence or sharing cat pictures with grandma, by almost every internet-connected human.
Last time this was brought up, HN user @Zackees was debating semantics about whether this was a genocide or not.
I would love to hear his perspective now that FB admits this. I guess it's 'technically' not 'genocide', so it's all a-OK according to @Zackees; it's all the MSM's fault.
I volunteered at the Rohingya camps in Bangladesh while this was happening. Most of them were aware of this and attempted to stay ahead of the raids by monitoring some of the groups.
Their stories are horrific and to this day I have bad dreams about what I saw / heard in those camps.
One of my coworkers escaped through the jungle to Thailand where I now live. He lost his brother and father to a group with machetes.
His story is incredibly depressing and makes me extremely grateful every day for what I have. We have spoken several times about the role of Facebook in what happened, and he has said repeatedly that nothing has changed and that it will happen again.
A key thing to look for is whether or not the charity is focused on relocation. The conditions they were in while I was there were horrendous. The largest camp I visited was over 600k people.
I was working with a few organizations, but from what I saw, Islamic Relief was the most effective at getting people out. I believe most of the refugees relocated through that org wound up in Aceh, Indonesia.
It's a humanitarian disaster that has been largely forgotten, I shudder to think what covid has done to their efforts.
Have you or anyone you know made contact with Facebook about it?
I'm not suggesting you should; I'm just wondering if anyone did, and if there was ever any Facebook response that wasn't pre-canned lawyer crap.
This seems very likely. At this point tech companies hold more power than most governments. It seems only right that such a powerful entity should be managed via the democratic system.
Facebook has the power to sway elections, especially in smaller countries.