Fighting fake news: Facebook will let partners fact-check stories including photographs

Facebook has started letting its partners fact-check photographs and videos in addition to news articles, and has begun proactively reviewing stories, TechCrunch reported.


The social media giant is also preemptively blocking the creation of millions of fake accounts per day. Facebook revealed the news on a conference call with journalists and later in a blog post.

"When you tease apart the overall digital misinformation problem, you find multiple types of bad content and many bad actors with different motivations. It is important to match the right approach to these various challenges. And that requires not just careful analysis of what has happened. We also have to have the most up to date intelligence to understand completely new types of misinformation," Chief Security Officer Alex Stamos said.

According to Stamos, the term "fake news" is used to describe many different types of activity that Facebook would like to prevent. "When we study these issues, we have to first define what is actually 'fake.' The most common issues are fake identities, fake audiences, false facts and false narratives," he added.

Stamos added that there are two main motivations driving the spread of fake news.

The most common motivation for organized, professional groups is money, Stamos said. "The majority of misinformation we have found, by both quantity and reach, has been created by groups who gain financially by driving traffic to sites they own. When we're fighting financially motivated actors, our goal is to increase the cost of their operations while driving down their profitability. This is not wholly unlike how we have countered various types of spammers in the past.

"The second class of organized actors are the ones who are looking to artificially influence public debate. These cover the spectrum from private but ideologically motivated groups to full-time employees of state intelligence services. Their targets might be foreign or domestic, and while much of the public discussion has been about countries trying to influence the debate abroad, we also must be on guard for domestic manipulation using some of the same techniques," he added.

Facebook says that each country it operates in and each election it works to support will be handled differently. "We are looking ahead, by studying each upcoming election and working with external experts to understand the actors involved and the specific risks in each country. We are then using this process to guide how we build and train teams with the appropriate local language and cultural skills," said Samidh Chakrabarti, Product Manager at Facebook.

"Rather than wait for reports from our community, we now proactively look for potentially harmful types of election-related activity, such as Pages of foreign origin that are distributing inauthentic civic content. If we find any, we then send these suspicious accounts to be manually reviewed by our security team to see if they violate our Community Standards or our Terms of Service," Chakrabarti added.
