Facebook using artificial intelligence to fight terrorism

Facebook says it's using artificial intelligence to help it combat terrorists' use of its platform.

The company announced that step, along with several others, as it faces growing pressure from government leaders to identify and prevent the spread of content from terrorist groups on its massive social network.

Facebook officials said in a blog post Thursday that the company uses AI to find and remove "terrorist content" immediately, before users see it. This is a departure from Facebook's earlier practice of relying on users to flag suspect content for removal.

"Already, the majority of accounts we remove for terrorism we find ourselves," Facebook said. "Although our use of AI against terrorism is fairly recent, it's already changing the ways we keep potential terrorist propaganda and accounts off Facebook."

It also said that when the company receives reports of potential "terrorism posts," it reviews those reports urgently. In addition, it says that in the rare cases when it uncovers evidence of imminent harm, it promptly informs authorities.


The company shared more specifics on how it's working to thwart terrorist content behind the scenes. One technique involves using image-matching technology to identify and block known terrorist photos and videos from popping up again on other accounts. Another applies machine learning algorithms to look for patterns in the language of terrorist propaganda so it can be identified and removed more quickly.
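Facebook has not published details of its language-pattern models, but the general idea of learning which word patterns distinguish one category of text from another can be illustrated with a minimal Naive Bayes classifier. Everything below (the toy labels, training samples, and class names) is a hypothetical sketch, not Facebook's actual system:

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    """Minimal multinomial Naive Bayes text classifier (illustrative only)."""

    def __init__(self):
        self.word_counts = {}        # label -> Counter of word frequencies
        self.label_counts = Counter()
        self.vocab = set()

    def train(self, samples):
        # samples: iterable of (text, label) pairs
        for text, label in samples:
            self.label_counts[label] += 1
            counts = self.word_counts.setdefault(label, Counter())
            for word in tokenize(text):
                counts[word] += 1
                self.vocab.add(word)

    def predict(self, text):
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # Log prior plus log likelihood, with add-one smoothing so
            # unseen words don't zero out a class.
            score = math.log(self.label_counts[label] / total_docs)
            counts = self.word_counts[label]
            total_words = sum(counts.values())
            for word in tokenize(text):
                score += math.log(
                    (counts[word] + 1) / (total_words + len(self.vocab))
                )
            if score > best_score:
                best_label, best_score = label, score
        return best_label
```

In practice, systems operating at Facebook's scale use far richer models and features, but the core step is the same: learn word statistics per class from labeled examples, then score new posts against those statistics.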

In the blog post, Facebook also pledged that its anti-terror efforts would extend to other Facebook-owned platforms, including WhatsApp, the encrypted messaging system that has stymied investigators.

"Because we don't want terrorists to have a place anywhere in the family of Facebook apps, we have begun work on systems to enable us to take action against terrorist accounts across all our platforms, including WhatsApp and Instagram. Given the limited data some of our apps collect as part of their service, the ability to share data across the whole family is indispensable to our efforts to keep all our platforms safe," wrote Facebook's Monika Bickert, director of global policy management, and Brian Fishman, counterterrorism policy manager.

Last month, Facebook announced it would add 3,000 staff members to monitor huge volumes of live video on the platform to filter for violence.

Facebook has approximately two billion monthly users, and its platform is routinely entangled in deadly and terrorism-related events. For instance, the shooter behind Wednesday's attack on a congressional softball game had previously posted "vitriolic anti-Republican and anti-Trump viewpoints" on Facebook, according to the SITE Intelligence Group, which tracks extremists. However, the posts that have come to light stopped short of threatening specific acts of violence.

For now, Facebook says two major global terror threats — ISIS and Al Qaeda — will be the main focus of "our most cutting edge techniques," but the company plans to expand those efforts to other terrorist groups in the future. 

CNET reports that Facebook is also working with law enforcement agencies and other organizations to try to stop terrorists from exploiting social media. And in December, the social network partnered with other tech companies, including Twitter, Microsoft and Google-owned YouTube, to create an industry database that records the digital fingerprints of terrorist organizations.
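The shared industry database works by exchanging "digital fingerprints" (hashes) of known terrorist media rather than the media itself, so each company can check uploads against material its partners have already flagged. A minimal sketch of the idea, using an exact cryptographic hash for simplicity (real systems such as PhotoDNA use perceptual hashes that survive resizing and re-encoding; the function names here are hypothetical):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint of a media file's raw bytes.

    Note: a cryptographic hash only matches byte-identical copies;
    production systems use perceptual hashing to catch edited variants.
    """
    return hashlib.sha256(data).hexdigest()

# A shared set of fingerprints contributed by participating companies.
shared_database: set[str] = set()

def report(data: bytes) -> None:
    """Add a flagged file's fingerprint to the shared database."""
    shared_database.add(fingerprint(data))

def is_known(data: bytes) -> bool:
    """Check an upload against fingerprints partners have flagged."""
    return fingerprint(data) in shared_database
```

Sharing only hashes means companies never have to redistribute the offending content itself, and a lookup is a constant-time set-membership check at upload time.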

