
I'm not sure what the answer is, but something should be done about Facebook's part in the problems in Myanmar. (https://www.nytimes.com/2017/10/29/business/facebook-misinfo...)



> A couple of hours outside Yangon, the country’s largest city, U Aye Swe, an administrator for Sin Ma Kaw village, said he was proud to oversee one of Myanmar’s “Muslim-free” villages, which bar Muslims from spending the night, among other restrictions.

> “Kalar are not welcome here because they are violent and they multiply like crazy, with so many wives and children,” he said.

> Mr. Aye Swe admitted he had never met a Muslim before, adding, “I have to thank Facebook because it is giving me the true information in Myanmar.”

https://www.nytimes.com/2017/10/24/world/asia/myanmar-rohing...

I found this chilling when I saw it on Twitter.


This company is complicit in ethnic cleansing and the top comment in this thread is about activating interest groups to ward off regulation. It's outrageous.


AAAND Zuck has been very publicly polishing his presidential running shoes for six months...


In fairness to Zuck, the news about this only broke fairly recently, and they may not have had time to fix things. I hope they do, though.


> This company is complicit in ethnic cleansing

I mean no offense, but this comment coming from a user named "IBM" is a bit ironic, considering that IBM provided infrastructure support to both the US Japanese internment camp program and the Nazi concentration and extermination camp programs.

https://en.wikipedia.org/wiki/IBM_during_World_War_II


Facebook isn't actively seeking to do this; people are just using the platform to spread information. Sure, Facebook should and likely will crack down on it, but I'd rather governments not be involved in deciding what can and cannot be allowed on Facebook or online. People have done this sort of thing for ages; Facebook is just the latest platform.

How are governments supposed to decide what's appropriate when -this very- article states that the fake posts were shared by government accounts?


When governments are using Facebook as a platform for disinformation campaigns, other governments are going to have to get involved to regulate usage. Facebook needs to find its moral compass very quickly.


So which government gets to decide which government is telling the truth? Who is the moral arbiter? China? Iran? Russia? Just Western countries? Good luck. What will actually happen is a limit on individuals' free speech, imposed without qualms, and we all know it.

Facebook is composed of people, all with moral compasses, but when your user base is growing toward the entire population of Earth, it's hard to moderate. People act as if Facebook, Google, etc. could just flip a magic switch and instantly stop disinformation on social media, which is a virtually impossible task.

On top of that, even if it -were- possible, you create the moral hazard of deciding who gets to say what is or isn't real. And if you hand that over to world governments, that's when you get into much deeper shit.


I came to write that facebook is just a carrier, and that people can lie to the masses via a printer, or a radio, or any of a million web outlets or non-FB social networks.

But I was wrong, facebook doesn't just post what you say, they choose what you see. So it's not necessarily that you're seeing stuff that people said of their own volition, it's that perhaps facebook is _choosing_ not to show you contradictory claims because it would make you use facebook less, and cost them money.

So then perhaps yes, some kind of regulation about what kind of choices Facebook is allowed to make when deciding what will be shown to whom, and some oversight to ensure that actually happens.

Still, though, it scares me to try legislating speech, since legislation is easy to pass and hard to repeal, hard to get right, and hardly ever debated long enough by smart enough people.


Wow. That is horrendous.


I was thinking that if they had a "report hate speech / fake news" button near the like button, and simply down-ranked stories where it was clicked, it would probably make a difference.


Unfortunately, like any kind of downvote system, it would get abused as an "i disagree" button instead.


I was thinking of something that didn't notify any users that anyone had pushed it: it would just make the algorithm share the story less, and if a lot of people pushed it, maybe submit it to the moderators.
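A minimal sketch of that idea, assuming a hypothetical scoring function (all names, rates, and thresholds here are made up for illustration, not anything Facebook actually uses):

```python
# Hypothetical sketch of the silent-report idea: report clicks are never
# shown to other users; they only damp a story's ranking score, and past
# a threshold the story is queued for human moderators.

MODERATION_THRESHOLD = 50   # assumed cutoff, purely illustrative
DAMPING_PER_REPORT = 0.05   # each report shaves 5% off the score

def ranked_score(base_score: float, report_count: int) -> float:
    """Down-rank a story in proportion to silent reports, never below zero."""
    damping = max(0.0, 1.0 - DAMPING_PER_REPORT * report_count)
    return base_score * damping

def needs_moderator_review(report_count: int) -> bool:
    """Only a large number of reports escalates to human moderators."""
    return report_count >= MODERATION_THRESHOLD

# A story with 10 silent reports keeps half its reach and is not escalated:
print(ranked_score(100.0, 10))        # 50.0
print(needs_moderator_review(10))     # False
print(needs_moderator_review(60))     # True
```

The point of the design is that reporters stay anonymous and a single angry click can't bury a story; only aggregate volume matters.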


They have a reporting system. It doesn't work (and currently they have no incentive to make it work, because hate speech and fake news are engagement).





