The problem is that what Facebook says is at odds with what the leaked documents purport to show. Trending items and hand-curated items are not the same thing, and apparently Facebook has been passing one off as the other, which is problematic.
It works exactly as FB has publicly stated. A set of topics is algorithmically detected from a variety of sources. Let's say this set has "many, many" things in it, many of which are of low quality. The best comparison I can give you is Twitter trends. Humans approve some of these detected topics to appear in the trending module; let's just say there are now "many" topics. Yes, curators have the ability to insert a topic into the trending list, and yes, they have the ability to make something appear in everyone's trending module, but it's not really used. It's mostly there so that you don't have to wait minutes for trending to detect that aliens have landed on the Washington Mall, or that nuclear war has broken out. In fact, the "everyone must see this now!" capability has never been used. This candidate set of approved topics is then algorithmically ranked for each person, using the criteria they have publicly stated.
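To make the three stages described above concrete, here is a toy sketch of such a pipeline. All function names, thresholds, and data are invented for illustration; this is not Facebook's actual code, just the shape of the process as the comment describes it.

```python
# Toy sketch of the described pipeline, with invented names and thresholds:
# 1. an algorithm detects candidate topics from raw signals,
# 2. human curators approve a subset (and can inject urgent topics),
# 3. the approved set is ranked per user.

def detect_candidates(raw_signals):
    # Stand-in for the detection step: keep any topic mentioned often enough.
    return {topic for topic, count in raw_signals.items() if count >= 100}

def curate(candidates, approved_by_humans):
    # Curators approve a subset of the detected candidates.
    return candidates & approved_by_humans

def rank_for_user(approved, user_affinity):
    # Personalized ranking of the approved set; here a simple affinity sort.
    return sorted(approved, key=lambda t: user_affinity.get(t, 0), reverse=True)

signals = {"aliens landed": 5000, "spam topic": 150, "niche meme": 20}
approved = {"aliens landed"}  # curators rejected the low-quality topic
print(rank_for_user(curate(detect_candidates(signals), approved),
                    {"aliens landed": 0.9}))
```

Note that in this sketch, as in the description above, a topic only reaches users if it both trips the detector and passes human review.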
If you really care, it uses a GBDT (gradient-boosted decision trees) model and the newly released FBLearner Flow.
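For readers unfamiliar with GBDTs, here is a minimal, hypothetical sketch of per-user topic ranking with gradient-boosted trees, using scikit-learn's GradientBoostingRegressor as a stand-in. The feature names, training data, and labels are all invented; the real system's features are not public.

```python
# Hypothetical sketch: ranking approved trending topics for one user with a
# gradient-boosted decision tree (GBDT) regressor. Features and labels are
# invented for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Toy training rows: [engagement_velocity, user_affinity, topic_freshness]
X_train = np.array([
    [0.9, 0.8, 0.7],
    [0.2, 0.1, 0.9],
    [0.5, 0.5, 0.5],
    [0.1, 0.9, 0.2],
])
# Target: observed click-through, a stand-in for whatever label is really used.
y_train = np.array([0.95, 0.15, 0.50, 0.40])

model = GradientBoostingRegressor(n_estimators=50, max_depth=2, random_state=0)
model.fit(X_train, y_train)

# Score this user's human-approved candidate topics and sort by predicted score.
candidates = {
    "aliens_in_central_park": [0.95, 0.7, 0.9],
    "local_bake_sale": [0.10, 0.3, 0.5],
}
scores = {name: model.predict(np.array([feats]))[0]
          for name, feats in candidates.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)
```

The point of the GBDT here is only that the ranking is a learned function of per-user features rather than a fixed global ordering.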
Wow, that is horrible. And equally horrible is the fact that you don't seem to find this problematic at all. Whether the features were/are regularly used is not really relevant: "algorithmically selected" implies a certain non-bias in the process, but if this is not how it really works, then one should be much, much more skeptical about the validity of any "trending" topics.
It's a common misconception that "algorithmically selected" implies a certain non-bias. Perhaps this myth stems from examples used to explain simple algorithms (e.g. drawing straws, or pulling names from a hat).
Complex algorithms certainly do exhibit bias, and not only that: it's intentional. The whole point of a "trending" feature is to discriminate (i.e. inject bias) between trendy topics and less trendy ones.
So you're talking about only adding the bias you want it to add, but I hope you can see that's impossible to get 100% correct. There's no rigorous definition of "trending," nor of the set of topics to be selected from, nor of lots of other matters here. There's no way you're going to do better than some fuzzy algorithms trained on real data.
Critically, that tends to mean real data generated by... biased humans.
I would have thought that the algorithm would be entirely agnostic and based on incoming data only, without humans to distort it. E.g., I share details with a dozen friends about aliens landing in Central Park, and they then share it on and on. Facebook's algorithm would spot this, along with others doing the same, and say, "Hey, looks like there is a trend growing about aliens in Central Park," and from there act on that data alone: e.g., this topic was talked about 5 million times in the last minute, so bump it up the list to its corresponding position relative to other "trending" topics.
Having humans involved will distort that process, thus meaning true trending does not exist. Well, on Facebook at least.
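The purely data-driven detection imagined in the comment above could be sketched as a simple mention-rate spike detector. This is a toy illustration with invented thresholds; real trend-detection systems use far more sophisticated baselines and smoothing.

```python
# Toy sketch of human-free trend detection: count topic mentions in a
# sliding one-minute window and flag topics whose mention rate spikes well
# above their historical baseline. All thresholds are invented.
from collections import Counter

def detect_trends(recent_mentions, baseline_rate, spike_factor=5.0, min_mentions=3):
    """recent_mentions: topic strings seen in the last minute.
    baseline_rate: topic -> typical mentions per minute."""
    counts = Counter(recent_mentions)
    trending = []
    for topic, count in counts.items():
        base = baseline_rate.get(topic, 1.0)
        if count >= min_mentions and count / base >= spike_factor:
            trending.append((topic, count / base))
    # Rank by how far each topic is above its own baseline.
    return [topic for topic, _ in sorted(trending, key=lambda x: x[1], reverse=True)]

mentions = ["aliens in central park"] * 12 + ["weather"] * 4 + ["sports"] * 2
baseline = {"aliens in central park": 0.5, "weather": 4.0, "sports": 2.0}
print(detect_trends(mentions, baseline))
```

Note that "weather" is mentioned more often than usual topics but doesn't trend, because it isn't unusually elevated relative to its own baseline; that baseline is itself derived from past human behavior, which is one place bias creeps back in.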
Right. Like the time it was used when it was decided everyone in India got the post about supporting Facebook's Internet.org to save humanity or whatever.
I have a question, mostly unrelated, so I apologize ahead of time. Do actual humans review pictures that are reported? I've been reporting a picture for a few months and get repeatedly "This picture does not violate our community guidelines". It's a picture of the bloody, naked, mutilated corpse of a murder victim with a crucifix shoved down her throat. I have reported it fifteen or twenty times over the last six months with the same answer, that the picture doesn't violate Facebook's community guidelines. What's that about?
That sounds like the sort of question you should ask Buzzfeed, HuffPo or a similar website. Like Google and other companies, Facebook is normally very quick to respond to bad publicity.
There's nothing wrong with it; the problem is calling it 'trending'. Trending implies that these are the things that users on Facebook are talking about. If the list is, in fact, curated by Facebook employees, then that's fine, but it shouldn't be passed off as trending.
You are correct; I should have prefaced my statement with "I think" or "I like to believe." I happen to believe cynicism is a seductive methodology, even if it is sometimes useful. I don't care for FB's hegemony either, FWIW.
I'm inclined to agree with you and the implicit assumption of good faith. However, FB's trend information is an instance of real power, and it is susceptible to the sort of hiding of capabilities and intentions that such power implies. It's fair for people to ask for verification, though of course no one is required to provide it.
It's pretty easy to justify why you would lie if I think you are an ideologue who truly believes that Facebook's trending topics must be curated with a certain slant for the greater good.
When you believe in a conspiracy theory, hearing "why would I lie?" doesn't change your mind; quite the opposite. It's also not true that there's "nothing that anyone can say that would prove that any system works as described." There's no single statement that will convince everyone, because everyone's doubts are personal.
It's certainly possible to assuage the fears of a conspiracy theorist: take their examples, give a credible, particularized explanation (not a vague "it's algorithmic," but a detailed description of the steps), and show some goodwill toward addressing their underlying fears. What you perhaps wanted to say is that you don't really care about convincing conspiracy theorists, which is fine, but very different.
I believe you, I just think that the thing you made is not really minor.
It has incredible reach and is quite effective in claiming attention. I actually had to use uBlock's element hiding function to keep myself from reflexively looking at it and reading a bunch of news stories. I would be surprised if that wasn't true for many people, as that's what it was designed to do.
I guess asking for proof is unreasonable in this case, as I can't think of what would actually constitute proof. I do think additional questions are reasonable, since editorial control of the ranking is currently a topic of national interest. I'm not really sure what that means exactly in this conversation, but I have a vague feeling that things that are big should be examined more closely. That's not super fair to Facebook compared with a traditional media company, but this thing is new and isn't really like our models of how media companies such as newspapers and magazines work.
It's a little scary because of the power a couple of headlines sprinkled here and there have when they're seen by a lot of people. Something that was nothing becomes a scandal; lies become truths in the public consciousness and can't be fixed (look at the antivax movement). Of course, they also have the power to do great justice as well.
I guess my point, after all this text, is still that what you've built isn't minor; there will be, and probably have been, real-world impacts from it. Those impacts stem from public trust in FB's editorial position. If the public doesn't trust FB to show unbiased stories, they interpret the state of the world vastly differently than if they do. This is why I think it's reasonable to have questions, even if they may be somewhat pointed at times.
That said, I really appreciate the fact that you took the time to respond to this thread. Talking to the people directly involved is a great way to disabuse one of falsehoods. :)
EDIT:
For an example of why people are interested in this capability, imagine a rogue FB employee were to push the following headline to everyone: "US Nuclear Arsenal Almost Fired at Russian City"
The world would see this post made by one person, and the fact that it was pulled nearly immediately would be seen by some as evidence of a cover-up, which could add tension and possibly instigate an international incident. Until this generation died out, you would have people all over the world who believed that the US had nearly nuked a Russian city without provocation.
We can argue about whether there's an ethical problem, but the assertion that there's any sort of legal problem seems completely arbitrary and partisan IMO.
> Editorial control and algorithmic control are two different things.
It's also a meaningless distinction.
The point of contention is the bias, not whether the bias is imposed by an algorithm or by a human. People would still be complaining if Facebook had tweaked its algorithm to achieve identical outputs instead of employing humans to do it manually.
The truth seems to be somewhere in between: the list was algorithmically generated, and then the output was human-modified.
In general, I think the response to this post demonstrates the prevalence of magical thinking about computers, even by people who ought to know better. Bias is bias regardless of whether it's explicitly programmed behavior, emergent behavior from a carefully trained neural net, or enforced post-hoc by human processes. Distinguishing between these on the basis of where the behavior came from, rather than whether or not there's a bias, misses the point.
> I don't see any assertion that there's a legal problem in the parent.
The point of my post wasn't to contradict the parent post. There certainly are people who do claim it's a legal problem.
It isn't meaningless in this case at all. "Trending" suggests that the list reflects, to some degree, popular opinion about what people are discussing. If the items aren't trending but are hand-curated, then you've essentially lied.