Facebook bans "deepfake" videos, with exceptions

Facebook announced late Monday that it is banning certain types of so-called "deepfakes" — videos altered to appear as though people are doing or saying something they didn't actually do or say.

The policy marks the social media giant's first foray into regulating deepfakes, but it comes with caveats. Facebook Vice President Monika Bickert wrote in a blog post Tuesday that the ban won't apply to "parody or satire, or video that has been edited solely to omit or change the order of words."

Facebook's moderators will remove videos edited through artificial intelligence or machine learning "in ways that aren't apparent to an average person and would likely mislead" viewers. Bickert did not describe how the company would assess an average person's understanding of altered videos.

The new policy swiftly drew criticism that it would not have barred perhaps the best-known recent example of an altered video to go viral. In May, a video of House Speaker Nancy Pelosi, edited to make it seem as though she were slurring her words while speaking at a public event, was viewed millions of times.

But even if that video had ultimately been barred, the speed at which it spread means the company will struggle to regulate such videos in the future, said Jason Kint, the CEO of Digital Content Next, a trade group that represents digital publishers.

"We're entirely at risk here in letting Facebook turn this into a censorship debate while ignoring the fact these videos only receive their velocity and reach due to Facebook's algorithm and recommendations not because they're simply allowed to exist," Kint said. "There is a lot of garbage on the Internet. The Pelosi video would be a tree in an empty forest if not for Facebook's algorithms which are also the source of Facebook's immense profits."

Kint also criticized the timing of Facebook's announcement, noting that Bickert is scheduled to appear before a House Energy and Commerce Subcommittee on Wednesday.

"I wish Facebook would proactively push out new policies in order to improve their product for the public, not just 48 hours before hearings or (when) press reports on them," Kint said.

Kint's concern about the speed with which viral videos move was echoed by Saniat Sohrawardi, a Rochester Institute of Technology researcher who studies deepfakes and methods used to identify them. Sohrawardi said detection is a process that can't yet be effectively automated, meaning it's up to humans to identify suspicious videos from among the millions posted to Facebook.

"There aren't any very reliable ways to detect the fakes yet, at least none published that would be good for open-world detection," Sohrawardi said. 

And when that technology is created, the fight against deepfakes will have only just begun, he said.

"If we create a good enough detector and it gets popular enough that everybody knows about it, including the attackers, they will theoretically be able to create better fakes using that machine learning," said Sohrawardi.
