
There's some interesting analysis in here, but overall I think it focuses on the wrong points. I don't want to ignore the danger posed by propaganda committees like those described in the article, but the author's dismissal of bots/automation in this space as passé seems wrong. She writes: "bots are of increasingly minimal value...". Sure, in 2018 this is true for simple bots, but I doubt bots have said their final word. AI could power bots that are massively more sophisticated than their cousins that spam retweets, not to mention better at slipping through the filters designed to catch them. Such next-gen bots might be able to read an article or view an image, decide what it's about, formulate an "opinion" on it, and then generate some sort of response, maybe a comment that passes as (at least Internet-level) real human discourse.

Imagine such a bot on Reddit, for instance. Imagine it's the stereotypical Russian bot, trolling the internet to sow discontent in the West. It might upvote anything about racial strife or immigration issues. It might show up in the comments to support Euroskeptic candidates in the EU. It would pipe up about the Deep State in the US. Now imagine there are hundreds of thousands of similar bots across a variety of sites. They could control online discourse to a huge extent and dictate the media/opinion diet of millions of people. An army of these bots would be more powerful than any current propaganda arms, even those with state-level support.






