Facebook’s hoax detection system relies on user-submitted notifications that a link is fishy; if users don’t spot a story is a dud, neither does Facebook. Photograph: Jeff Chiu/AP

In firing human editors, Facebook has lost the fight against fake news


It took only two days for an algorithm to highlight a fake story about Fox News anchor Megyn Kelly. Facebook’s influence on news dissemination makes such mistakes arguably irresponsible

Two days after Facebook announced it was replacing the humans who wrote the Trending Topics descriptions with robots, a fake article about Fox News anchor Megyn Kelly appeared in its list of trending stories.

On Friday, Facebook announced that, in a bid to reduce bias, it would make the Trending feature more automated, and it laid off up to 26 contractors hired to write and edit the short descriptions that accompanied each trend. On Sunday, a story headlined “Breaking: Fox News Exposes Traitor Megyn Kelly, Kicks Her Out for Backing Hillary” found its way into the list of trending stories – even though it was false.

Facebook hasn’t completely replaced humans with robots. There are still people involved in the process to “confirm that a topic is tied to a current news event in the real world”, says the social network. As the Megyn Kelly episode shows, there are clearly flaws in that process.

The case illustrates how Facebook has lost its battle with fake news.

In January 2015, the social network updated the news feed to “reduce the distribution of posts that people have reported as hoaxes”. The problem is that people are easily fooled by fake news too, and a plethora of tricky-to-distinguish fake news sites have emerged. Facebook’s hoax detection system relies on user-submitted notifications that a link is fishy; if users don’t spot a story is a dud, neither does Facebook.
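To make that weakness concrete, here is a minimal sketch, in Python, of how a purely report-driven demotion rule of the kind described above might behave. The names, thresholds and demotion factor are illustrative assumptions, not Facebook’s actual implementation; the point is simply that such a rule only acts once enough readers press the report button, so a convincing fake that fools everyone is never touched.

```python
# Minimal sketch of a report-driven hoax demotion rule.
# All names, numbers and the demotion factor are illustrative assumptions,
# not Facebook's actual implementation.

from dataclasses import dataclass

REPORT_THRESHOLD = 0.01   # assumed: fraction of viewers reporting a link as a hoax
DEMOTION_FACTOR = 0.2     # assumed: how much to cut the story's feed ranking score


@dataclass
class Story:
    url: str
    views: int
    hoax_reports: int
    ranking_score: float


def apply_hoax_rule(story: Story) -> float:
    """Reduce distribution only if enough viewers reported the link."""
    if story.views and story.hoax_reports / story.views >= REPORT_THRESHOLD:
        return story.ranking_score * DEMOTION_FACTOR
    return story.ranking_score  # nobody reported it, so nothing changes


# A convincing fake that fools its readers is never demoted:
unreported_fake = Story("http://example.com/fake-story", views=50_000,
                        hoax_reports=12, ranking_score=0.9)
print(apply_hoax_rule(unreported_fake))  # 0.9 -- ranked exactly as before
```

However the real system weights those reports, the basic dependency is the same: the rule measures reader suspicion, not truth.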

This problem becomes more pernicious as it leaks out into the real world. In the past month, there have been two cases of mass panic at airports – at JFK on 14 August and at LAX on 28 August – where false reports of gunmen were whipped up by social media in the absence of official information or instructions.

Compounding the issue is the news that Facebook will soon allow users to trigger Safety Check themselves during emergencies. The feature was launched in October 2014 to let users flag to their loved ones that they were safe during major natural disasters, and it has since expanded to cover terrorist attacks as well.

“The next thing we need to do is make it so that communities can trigger it themselves when there is some disaster,” said Facebook CEO Mark Zuckerberg, speaking at a town hall meeting in Rome on Monday.

Moving from a top-down disaster alert model to a bottom-up one should, in theory, help Facebook counter some of the criticism it received for being biased towards western nations.

When the company activated the Safety Check tool after the terror attacks in Paris in November, critics argued that it should also have been activated in Lebanon, where suicide bombings in Beirut had killed dozens of people the day before.

While it makes sense to try to bring more balance to the Safety Check system, allowing anyone to trigger it themselves could add legitimacy to the kind of chaotic herd behaviour seen at JFK and LAX.

Zuckerberg insists that Facebook is a technology company and not a media company, building tools instead of creating content. However, as Facebook’s algorithm – which, after all, is built and maintained by human beings – decides which content people see in their news feeds, it is arguably irresponsible for the company to allow misinformation to spread unfettered when it is now so influential in the daily distribution of news.

“Machines think in black and white,” said Mandy Jenkins, head of news at Storyful, which specialises in verifying and distributing social news. “I don’t think verification can be automated yet. What it means for something to be real and verified is not black and white.

“A judgement call has to happen. It’s about asking questions and seeing how a story adds up against other facts we know. What is the background of the source or site? Who is the person who wrote this story? Where does it come from? These are too many questions for a robot to answer on its own.”

Trained humans with fact-checking and journalism skills, such as those the company laid off last Friday, aren’t foolproof, but they can intervene to keep algorithmic wildfire at bay. Perhaps it’s time for Facebook to rehire them?

Facebook did not respond to requests for comment.
