News Feed FYI: Click-baiting (fb.com)
130 points by btimil on Aug 25, 2014 | 50 comments



Round and round the circle here we go...

This is what "growth hacking" really looks like today...

1. Manipulate the highest-traffic ranking scheme with an imperfect signal-to-noise ratio (PageRank/EdgeRank)

2. Gain massive popularity

3. Sell your business to a greater fool

4. The ranking scheme changes, rendering your model worthless

Demand Media, Zynga, Socialcam, etc. etc. ....and now BuzzFeed. The list goes on and on. The winners are the investors and the ones creating the ranking themselves, no one else.


I think you'd be right about BuzzFeed if this had happened ~a year ago. Now, BuzzFeed has so much content, and pushes it so hard on visitors, that I would be surprised if they didn't actually score pretty highly on this metric (although I was a bit unclear on whether this was referring to URL-level or domain-level time on site). Also, so much of BuzzFeed's content is long lists with large images or gifs that people stare at that they aren't going to be hit as hard as Upworthy (who pioneered the "You Won't Believe..." headline). Finally, BuzzFeed has actually gotten enough traction to now be able to hire real journalists to create actual content.


Buzzfeed's problem, for me at least, is that they've acquired negative brand equity. I reject any content that carries their domain. It's actually rather disheartening to hear NPR and PRI whoring themselves out to them, including WNYC's On the Media, an otherwise excellent program.


I was shocked when I got a link to BuzzFeed from a friend who basically said, "I know the URL, but trust me, it's good." I did trust them, and surprisingly it was. This wasn't a quiz or some gif set but a well-written article on depression or anxiety, I believe.


"And surprisingly it was" highlights the problem.

Buzzfeed made hay as a bottom-feeder. They're now continuing the bottom feeding, with an occasional nugget thrown in.

I don't care to encourage what they've done, nor do I care to go through the crap to find the gold.

They've got negative brand equity. I reject them with extreme prejudice.


Buzzfeed is having a pretty rough time transitioning to an actual journalistic enterprise, though. They've lately had a few editors fired for outright plagiarizing the majority of their listicles, including ripping directly from Yahoo Answers of all places. To combat this, they've also recently culled a significant number of old listicles from their 'media-lab' era. I forget the exact number, but I want to say it was on the order of 4,000+ articles that were disappeared? I get that they're trying to improve their image, but at the same time, "actual journalists" wouldn't whitewash their past and would explicitly own up to the mistakes they've made.

tl;dr Original content is the new aggregation growth-hack, but it can't make up for a lack of ethics.


My Facebook feed is full of (IMO) low quality posts from them. Specifically, the "drop everything and watch this ice bucket challenge" videos. Seemingly every day, there is a new post declaring the king of the ice bucket challenge, telling people to stop doing it, announcing that x has won, etc.


I suggest unfollowing/unfriending people who post that kind of content. It sends a clear message that you don't want to see it, and you won't waste time ignoring them.


although I was a bit unclear on if this was referring to URL level or domain level time on site

It's simply time away from Facebook, since that's what they can measure.


Unless, while away, you are on any page with a Facebook 'like' button. In which case they can monitor your behavior exactly. Fortunately there are very few of those. 0_o


They have started iframing sites on mobile, so that lets them measure there, and that's easily enough data that they don't need to do it at all on desktop?


"80% of the time people preferred headlines that helped them decide if they wanted to read the full article before they had to click through"

What in the world do the other 20% want?


and you won't believe what the other 20% wanted.


The other 20% don't really care about click-bait, or haven't analysed what's happening, and won't have a strong opinion about it. I'd put money on people in this 20% clicking on these things a lot more than the other 80%.


They simply don't care and probably wouldn't follow the link anyway.


Suspense!


I make a point to never follow a link containing the strings "You will never believe", "You will be amazed", "We didn't expect what happened next" or similar lazy copy text. I'm considering developing an adblock-plus-like plugin to remove them from the pages I visit.
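Something like this minimal content-script sketch is what I have in mind (the phrase list and the hiding logic are just placeholders, not any existing extension's code):

  // Content-script sketch: hide links whose text matches common clickbait phrases.
  const clickbaitPhrases: RegExp[] = [
    /you (will|won't) (never )?believe/i,
    /you will be amazed/i,
    /what happened next/i,
  ];

  function isClickbait(text: string): boolean {
    return clickbaitPhrases.some((re) => re.test(text));
  }

  // Scan every link on the page and hide the offenders.
  for (const a of Array.from(document.querySelectorAll<HTMLAnchorElement>("a"))) {
    if (isClickbait(a.textContent ?? "")) {
      a.style.display = "none";
    }
  }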


There's "downworthy", which doesn't remove them, but turns them into "realistic" versions: http://downworthy.snipe.net/ ("Literally" -> "Figuratively", etc)


I'm considering developing an adblock-plus-like plugin to remove them

If you build that plugin, I'd use it (if it's for Firefox).


[deleted]


Hmm, that's not how I imagined them doing it. It seems like Facebook could log the click on their site, and then use the Page Visibility API or even just scrolling events to detect your return – all without a tracking bug on any other site.
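A rough sketch of that idea, just to make it concrete (the reporting function is a hypothetical stand-in; only the click and visibilitychange listeners are standard browser APIs):

  // Log when the user leaves via an outbound click, then measure how long
  // the tab stays hidden before they come back to the feed.
  let leftAt: number | null = null;

  function reportTimeAway(seconds: number): void {
    console.log(`time away: ${seconds.toFixed(1)}s`); // stand-in for whatever logging they'd use
  }

  document.addEventListener("click", (e) => {
    const link = (e.target as HTMLElement).closest("a");
    if (link && link.hostname !== location.hostname) {
      leftAt = Date.now(); // outbound click
    }
  });

  document.addEventListener("visibilitychange", () => {
    if (document.visibilityState === "visible" && leftAt !== null) {
      reportTimeAway((Date.now() - leftAt) / 1000); // a fast return suggests a bounce
      leftAt = null;
    }
  });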


I'm sure they're firehosing as much data as they can, including your MITM tricks and share button bugging. And probably a lot more.

And that's why I don't use facebook.


The other nearly-identical HN story has a nearly-identical comment. Did you read the article? It says something quite different:

  One way is to look at how long people spend reading an
  article away from Facebook. If people click on an article
  and spend time reading it, it suggests they clicked
  through to something valuable. If they click through to a
  link and then come straight back to Facebook, it suggests
  that they didn’t find something that they wanted. With
  this update we will start taking into account whether
  people tend to spend time away from Facebook after
  clicking a link, or whether they tend to come straight
  back to News Feed when we rank stories with links in them.


It's worth noting that most/much of Facebook's usage is on mobile...all FB has to do is track when you left the app (via click-through) and when you resumed it. I assume this is pretty trivial if the 99%-use-case is the user hitting "back" on the FB app nav.

The same metric could be used for the web app too, in lieu of other sophisticated tracking code. Someone in the other thread mentioned that this would miss users who clicked through to an article (or clicked into a new tab) and read it later, but I'd have to imagine that this is a very, very slim use-case.
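As a toy illustration of how those dwell times might be rolled up per link (the ten-second bounce threshold and all the names here are purely illustrative, not anything Facebook has described):

  // Toy aggregation: per-URL "bounce rate" from click-to-return dwell times.
  interface DwellSample { url: string; secondsAway: number; }

  function bounceRate(samples: DwellSample[], url: string, bounceThresholdSec = 10): number {
    const forUrl = samples.filter((s) => s.url === url);
    if (forUrl.length === 0) return 0;
    const bounces = forUrl.filter((s) => s.secondsAway < bounceThresholdSec).length;
    return bounces / forUrl.length; // high value -> people come straight back -> rank lower
  }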


Search engines do a similar thing to measure how good a search result is. If you come back right away and click on another result then that tells them the site wasn't what you were looking for.


Hmmm, so when I quickly open up 5 results in separate tabs, it's going to assume I didn't think much of the first 4 then?


You are a statistical outlier, below the noise floor.


Chances are good that people who browse that way are looking for niche subjects that are equally outliers, so if there's any value to their traffic at all, search engines would do well to figure out that for X sites and Y visitors, the default browsing behavior assumptions do not apply.


I imagine this could be dealt with by discarding the results when the consecutive clicks occur <5 seconds apart.
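Something like this sketch, say (the 5-second window is just the number above, nothing official):

  // Keep only clicks that are at least 5 seconds away from their neighbours,
  // so open-five-tabs-at-once behaviour doesn't pollute the dwell-time signal.
  interface Click { url: string; timestampMs: number; }

  function usableClicks(clicks: Click[], minGapMs = 5000): Click[] {
    const sorted = [...clicks].sort((a, b) => a.timestampMs - b.timestampMs);
    return sorted.filter((c, i) => {
      const nearPrev = i > 0 && c.timestampMs - sorted[i - 1].timestampMs < minGapMs;
      const nearNext = i < sorted.length - 1 && sorted[i + 1].timestampMs - c.timestampMs < minGapMs;
      return !nearPrev && !nearNext;
    });
  }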


Reminds me of a similar action YouTube took against the ReplyGirl (http://en.wikipedia.org/wiki/Reply_girl), by similarly factoring engagement into their ranking algorithm.


Facebook isn't a company I praise often, but this is both a very good change and one which I desperately hope will carry through both to how other sites (HN, reddit, G+, all of which, unlike FB, I actually do use) treat clickbait, and how publishers optimize their own content.

The race to the bottom among aggregators, which started quite some time back with HuffPo (nearly a decade old now), has become quite maddening. I've long since resorted to flagging such content as spam where possible (curious that comments here suggest FB has an "I don't want to see this" option; G+ most certainly doesn't), and increasingly have resorted to unfollowing or blocking those who post such crud.

Much as xkcd suggested a format for getting bots to contribute usefully to online forums, it would be quite slick if search and social engines would reward genuinely good, quality content.


I had to stop using Google Plus for just this reason. At some point they decided to add a "hot" category of stories to your feed in the mobile app. There was no way to disable this 'feature' or avoid these spammy stories while still using the app. I can't say I've really missed much.


I'm taking a bit of an enforced G+ holiday, and can't say I'm all that upset.

Search comprehensiveness and speed, the ease of interaction with the Notifications pane, and a few interesting people. That's its upside.

Streams, circles, lack of filtering, overall layout, client bloat, privacy invasion, crap and noise, annoyances across other Google properties: the downsides.

Though I'm seriously wondering where the hell the smart people are these days.


Facebook isn't a company I praise often

As I said before, fixing big companies is one of my favorite topics. As another example, I have said here that Vic Gundotra probably needed to be fired a while before they did.


So now you can punish those annoying click-baits by clicking them and then returning to facebook as fast as possible? Nice. Also, this could be used to "punish" legit links someone does not like...


The better way to punish a link is to click the options drop down menu and select "I don't want to see this."


This is generally where big data helps. Your click is one signal among thousands of impressions / clicks. A single user's clicks shouldn't have a large impact in the aggregate.


I think the argument is that bot-nets can appear to be 1000s of legitimate users and you can use this to undermine competitors.


I like this update. The legacy of BuzzFeed and Upworthy can live on with click-bait in headlines and titles, but now, they'll have to be backed by engaging content. The reason clickbait emerged is that those headlines were engaging and interesting to people. It would be fascinating to go back to the most egregious clickbait/low quality content examples and actually create the content the headline teased.


You won't believe in what way this Twitter account is relevant to this thread: https://twitter.com/SavedYouAClick

Ahem. Sorry :)

It "spoils" clickbaity links. I'm not entirely sure whether it's actually useful or time-saving, but I really do like the idea.



Definitely not good news for Chris Dixon and the other bubble inflators at Andreessen Horowitz.

Google Panda: Demand Media

FB Feed Change: Buzzfeed

Of course, my thesis is predicated upon the supposition that Buzzfeed is nothing more than a clickbait farm. There is a significant minority who feel otherwise, but I am not one of them.


Especially in the last year or two, Buzzfeed has been mixing in genuine, quality journalism with their listicles. For example http://www.buzzfeed.com/longform and http://www.buzzfeed.com/politics

I think Upworthy is the site that should be concerned: they have far less original content and AFAIK the entire premise is based on social (facebook) sharing.


That may be true. But can they get as good at genuine, quality journalism as they are at clickbait?

And given the valuations quality journalism companies trade at these days, maybe that's not something they want to devote serious effort to.


I'd love to see some Clickbait filters similar to Bayesian spam filters. My initial guess is that any headline with the word "this," second-person pronouns, and future tense (e.g. "you won't believe this blah blah blah") would rank highly.
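As a crude stand-in for the Bayesian version, just counting those three signals already separates the obvious cases (a real filter would learn its weights from labeled headlines):

  // Score a headline by the three signals above; 0 = clean, 1 = all present.
  const signals: RegExp[] = [
    /\bthis\b/i,                  // vague demonstrative
    /\b(you|your|yours)\b/i,      // second-person pronouns
    /\b(will|won't|gonna)\b/i,    // future tense
  ];

  function clickbaitScore(headline: string): number {
    return signals.filter((re) => re.test(headline)).length / signals.length;
  }

  clickbaitScore("You won't believe this one weird trick"); // 1
  clickbaitScore("Facebook tweaks News Feed ranking");      // 0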


Aww. I guess this means I'll see less Clickhole in my feed.


As I've commented on G+: the parody is too good. I flag that as spam along with the real clickbait.


Does anyone else wonder who these people are that answered their survey? I don't know about you, but Facebook never asked me anything, let alone to fill out a survey. My guess is that these people are Facebook employees. What is wrong with that, you ask? It's simple: Facebook is used by 1B+ people, so the results from a survey answered by a few thousand don't tell you anything about the general consensus. Even worse, you're only seeing what a very specific niche wants: the American, mostly white, tech-minded portion of the userbase. It's good to dogfood your products to root out bugs, but it's downright reckless to use your own people to make assumptions about the needs of the real user base.


I think you're assuming an awful lot based on the fact that you haven't been surveyed by Facebook.

Facebook has 1B+ people. Do you know how trivial it is for them to run surveys? 100K population surveys, if they want? Many hundreds of them, simultaneously?

I have no insight into how Facebook manages its surveys, but I'd be surprised if they didn't have some sort of generalized surveying platform built into Facebook that allows product teams to independently survey more or less anything they want.


"My guess is that these people are Facebook employees"

Why do you guess that? Facebook could get a very accurate assessment of user opinion by surveying 10000 users. If Facebook has over a billion users and surveys 10000 of them, then the probability that I personally get surveyed is tiny, and even the probability that anyone I know gets surveyed is pretty small. So the fact that Facebook didn't survey you or me is not a good reason to think they haven't been surveying their users.
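Back-of-the-envelope, assuming a simple random sample of 10,000 out of a billion users:

  const users = 1_000_000_000;
  const sampleSize = 10_000;

  // Worst-case margin of error at 95% confidence for a surveyed proportion.
  const marginOfError = 1.96 * Math.sqrt(0.25 / sampleSize); // ~0.0098, i.e. about 1%

  // Chance that any one particular user (you, me) gets picked.
  const pSurveyed = sampleSize / users; // 0.00001, i.e. 1 in 100,000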


I answered the survey, and I am not a Facebook employee, nor white.

You have no knowledge of the surveying process; it is presumptuous to assume that they carried it out incorrectly simply because you were not surveyed.



