Facebook touts use of artificial intelligence to help detect harmful content and misinformation

Confronted with an onslaught of social media posts filled with misinformation and harmful content, Facebook said Tuesday it has begun to rely on artificial intelligence to boost its efforts to evaluate whether posts violate its policies and should be labeled or removed.
 
Facebook is using AI to rank content involving self-harm, sexual exploitation, and hate speech on its platforms. Content will still be evaluated by human moderators, however, regardless of whether offending posts are reported by users or detected by the company's proactive systems.
 
Guy Rosen, Facebook's vice president of integrity, told reporters on a call Tuesday that the company would be shifting toward "more content being initially actioned by our automated systems," but would continue to rely on humans to review posts and train artificial intelligence.  
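For illustration, here is a minimal Python sketch of what a severity-ranked review pipeline along those lines could look like: near-certain violations are actioned automatically, while everything else is queued for human review in order of predicted severity. The classifier, threshold, and names below are hypothetical assumptions for illustration, not Facebook's actual system.

    # Hypothetical sketch of severity-based review queueing, loosely modeled
    # on the pipeline described above. The classifier and threshold are
    # illustrative assumptions, not Facebook's actual system.
    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class QueuedPost:
        priority: float                      # negated severity, so worst posts pop first
        post_id: str = field(compare=False)

    def enqueue_for_review(posts, classifier, auto_action_threshold=0.98):
        """Auto-action near-certain violations; queue the rest for human
        review ordered by predicted severity."""
        review_queue = []
        auto_actioned = []
        for post_id, text in posts:
            severity = classifier(text)      # 0.0 (benign) .. 1.0 (severe)
            if severity >= auto_action_threshold:
                auto_actioned.append(post_id)          # "initially actioned" by AI
            elif severity > 0.0:
                heapq.heappush(review_queue, QueuedPost(-severity, post_id))
        return auto_actioned, review_queue

    # Usage with a toy classifier standing in for the real model:
    toy = lambda text: 0.99 if "violation" in text else 0.4
    removed, queue = enqueue_for_review(
        [("p1", "ordinary post"), ("p2", "clear violation")], toy)

The point of the ranking, as described on the call, is that humans still review posts; the AI only decides which ones they see first.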
 
Facebook came under fire earlier this summer when it decided to leave up a post from President Trump that appeared to incite violence in the wake of protests following George Floyd's death in Minneapolis.
 
On May 29, Mr. Trump posted on Twitter and Facebook that "when the looting starts, the shooting starts." Twitter hid the tweet and instead displayed a warning that said the message was "glorifying violence." Facebook neither removed nor added context to the post.
 
Although some Facebook employees criticized the decision not to label or remove the post, CEO Mark Zuckerberg defended it.
 
"We looked very closely at the post that discussed the protests in Minnesota to evaluate whether it violated our policies," Zuckerberg wrote in a statement on Facebook. "Our policy around incitement of violence allows for discussion around state use of force, although today's situation raises important questions about what potential limits of that discussion should be," he added.
 
According to its Community Standards Enforcement Report, Facebook took action against 22.2 million posts that included hate speech between April and June, an increase from 9.6 million posts in the previous three months. Ninety-five percent of the more than 22 million posts with hate speech were initially flagged by AI.  
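As a back-of-the-envelope check on those figures, the split between AI-flagged and user-reported posts works out as follows (illustrative arithmetic only, using the rounded numbers above):

    # 95% of the 22.2 million actioned hate-speech posts were AI-flagged.
    total_actioned = 22_200_000
    proactive_rate = 0.95

    ai_flagged = total_actioned * proactive_rate     # ~21.1 million
    user_reported = total_actioned - ai_flagged      # ~1.1 million
    print(f"AI-flagged: {ai_flagged:,.0f}, user-reported: {user_reported:,.0f}")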
 
Rosen attributed the increase to the expanded application of the automated systems to posts in other languages, including Spanish, Arabic, and Indonesian. The number of posts related to terrorism that Facebook took action against increased from 6.3 million in the first three months of the year to 8.7 million in the second quarter.
 
But relying on automated systems poses a big risk, says Emma Llanso, director of the Free Expression Project at the Center for Democracy and Technology. 
 
"Every type of content moderation has a risk of error but when we are talking about automated systems, that risk is multiplied enormously because you are using automated detection tools on every piece of content that's on a service," Llanso said. 
 
There is a danger that such a system may be too broad. Because content moderation led by AI involves "essentially filtering out everyone's content" and exposing "everyone's posts to some kind of judgment," Llanso said the approach "creates a lot of risks for free expression and privacy."
 
It also means if there are errors in the automated tools, "the impact of those can be felt across the entire platform," Llanso added.
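A rough calculation illustrates Llanso's point: an error rate small enough to vanish in aggregate metrics still produces a huge absolute number of mistakes when applied to every post on a large platform. The volume and error rate below are assumed for illustration, not reported figures.

    # Hypothetical scale-of-error illustration; both numbers are assumptions.
    daily_posts = 1_000_000_000       # assumed platform-wide daily volume
    false_positive_rate = 0.001       # assumed 0.1% over-removal rate

    wrongly_flagged_per_day = daily_posts * false_positive_rate
    print(f"{wrongly_flagged_per_day:,.0f} legitimate posts flagged per day")
    # -> 1,000,000: a 0.1% error rate would still wrongly flag a million
    #    posts every day.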
 
AI also has limitations: it is less readily applied to content involving self-harm. The number of actions taken against posts related to suicide and self-injury, as well as child nudity and sexual exploitation, fell from 1.7 million pieces of content in the first quarter to 911,000 in the second, though Facebook says it removed or labeled the most harmful content, such as live videos showing self-harm.
 
This sensitive topic area relies heavily on human reviewers, so when the pandemic forced offices to close, content moderation suffered, resulting in fewer posts involving self-harm and sexual exploitation being removed or labeled. In March, all of Facebook's content moderators began working from home to comply with safety guidelines. But reviewing content on these topics can't be done from home because of its graphic nature, Rosen explained. A small number of content moderators are now returning to the office so they can review the graphic content "in a more controlled environment," he said.
 
Llanso said that "even in a system like Facebook's where there is a lot of automation being used, the human moderators are an absolutely essential component."
 
In the last few months, Facebook has also been combating misinformation related to the virus. From April through June, the company removed over 7 million pieces of COVID-19-related misinformation it deemed harmful on Facebook and Instagram. Rosen described these posts as those pushing "fake preventative measures or exaggerated cures that CDC and other health experts tell us are dangerous."
 
The company has been using independent fact checkers to help it display warning labels on posts containing inaccurate information. In the same time period, Rosen said the platform labeled about 98 million pieces of COVID-19-related misinformation on the site. 
 
Between March and July, Facebook removed over 100,000 posts for spreading false information about the election. Rosen said that for the last four years, Facebook has been building an operation to stop election interference and combat fake news on its platforms. He promised that the company would work with state and local election authorities to respond quickly to and remove false claims about polling conditions in the 72 hours leading up to Election Day.  
 
On Wednesday, technology and social media companies, including Facebook, Twitter, Google, and Microsoft, said in a joint statement that they had met with government partners to update them on countering misinformation on their platforms.
 
"We discussed preparations for the upcoming conventions and scenario planning related to election results," the joint statement said.
