Nudity, hate speech and spam: Facebook reveals how much content it kills

For years, Facebook has relied on users to report offensive and threatening content. Now it's implementing a new playbook and will release the findings of its internal audits twice a year, CNET's Jason Parker reports. Facebook released its Community Standards Enforcement Preliminary Report on Tuesday, offering a look at how the social network tracks content that violates its standards, how it responds to those violations, and how much content it has recently removed.

The report details Facebook's enforcement efforts from October to March and covers hate speech, fake accounts and spam, terrorist propaganda, graphic violence, adult nudity and sexual activity.

Here are a few key takeaways:

  • Facebook disabled about 583 million fake accounts and took down 837 million "pieces of spam" in the first quarter of 2018 
  • Facebook says its technology "still doesn't work that well" when it comes to hate speech
  • 21 million "pieces of adult nudity and sexual activity" were taken down in Q1 2018
  • In Q1 2018, Facebook removed 3.5 million pieces of violent content, 86 percent of which was identified by the company's technology

In a blog post Tuesday about the newly released report, Facebook's vice president of product management, Guy Rosen, said almost all of the 837 million spam posts Facebook took down in the first quarter of 2018 were found before anyone had reported them. He said removing fake accounts is the key to combating that type of content.

Most of the 583 million fake accounts Facebook disabled in Q1 were disabled "within minutes of registration."

"This is in addition to the millions of fake account attempts we prevent daily from ever registering with Facebook," Rosen said in the post, noting that "most of the action we take to remove bad content is around spam and the fake accounts they use to distribute it." 

The report comes in the face of increasing criticism about how Facebook controls the content it shows to users, though the company was careful to emphasize that its new methods are evolving and aren't set in stone, CNET's Parker reports.

The information from Facebook comes a few weeks after the company unveiled internal guidelines about what is -- and isn't -- allowed on the social network. Last week, Alex Schultz, the company's vice president of growth, and Rosen walked reporters through exactly how the company measures violations and how it intends to deal with them. 

Facebook's response to extreme content is particularly important given that the company has come under intense scrutiny amid reports of governments and private organizations using the platform for disinformation campaigns and propaganda. Most recently, the scandal involving digital consultancy Cambridge Analytica, which allegedly improperly accessed the data of up to 87 million Facebook users, put the company's content moderation into the spotlight.

Violations, by the numbers

To distinguish the many shades of offensive content, Facebook sorts it into categories: graphic violence, adult nudity/sexual activity, terrorist propaganda, hate speech, spam and fake accounts. While the company still asks people to report offensive content, it has increasingly used artificial intelligence technology to weed out offensive posts before anyone sees them.

Facebook said Tuesday it took down 21 million "pieces of adult nudity and sexual activity" in the first quarter of 2018, and that 96 percent of that content was discovered and flagged by the company's technology before it was reported. It estimates that of every 10,000 pieces of content viewed on the platform, between 7 and 9 views were of content that violated its adult nudity and pornography standards, or roughly 0.07 to 0.09 percent of views.

"For graphic violence, we took down or applied warning labels to about 3.5 million pieces of violent content in Q1 2018 - 86% of which was identified by our technology before it was reported to Facebook," it said. "For hate speech, our technology still doesn't work that well and so it needs to be checked by our review teams. We removed 2.5 million pieces of hate speech in Q1 2018 - 38% of which was flagged by our technology."

Facebook says AI has played an increasing role in flagging this content. 

"We use a combination of technology, reviews by our teams and reports from our community to identify content that might violate our standards," the company's report says. "While not always perfect, this combination helps us find and flag potentially violating content at scale before many people see or report it."

A work in progress

The report and the methods it details are Facebook's first step toward sharing how it plans to safeguard the news feed in the future. But, as Schultz made clear, none of this is complete.

"All of this is under development. These are the metrics we use internally and as such we're going to update them every time we can make them better," he said.

Facebook said it released the report to start a dialogue about harmful content on the platform and how it enforces community standards to combat it. To that end, the company is holding summits around the globe to discuss the topic, starting Tuesday in Paris, with others planned for May 16 in Oxford and May 17 in Berlin, and more expected later in the year in India, Singapore and the US.

Meanwhile, Facebook said on Monday it has suspended around 200 apps as part of its investigation into whether companies misused personal user data gathered from the social network. The company has evaluated thousands of apps to see if they had access to large amounts of data, and will now investigate those it has identified as potentially misusing that data, it said in a blog post.

The investigation follows revelations in March about Cambridge Analytica's collection of user data, after which Facebook was forced to admit it had allowed the data of tens of millions of users to be mishandled. CEO Mark Zuckerberg promised the investigation as one of a number of measures put in place to handle the scandal.
