New software designed to help media detect deepfakes – but it's just a "drop in the bucket"

Tech experts are gathering in Southern California to fight the growing threat of so-called deepfakes – videos that are manipulated to change their message, or even who appears in them. While some of these videos are designed to be entertaining, others are raising serious concerns. 

In one deepfake, Facebook founder Mark Zuckerberg appears to be speaking about global domination on CBS News' digital platform CBSN.

"Imagine this for a second. One man with total control of billions of people's stolen data,"  Zuckerberg appears to say. 

CBS asked Facebook to take down the video, citing trademark infringement, but the company has refused, pointing to First Amendment concerns.
 
"So 20 years ago, we worried about fake content in a court of law. But we didn't worry about it going live on YouTube, Twitter and Facebook," explained Hany Farid, a professor of computer science at the University of California Berkeley.
 
Farid's new software program compares real videos to supposedly altered ones. He wants to offer it to mainstream media outlets to help detect deepfakes ahead of the upcoming elections. 

"We take the video, we track facial expressions and head movements," Farid said. "We do 190 of those measurements and then we ask, 'How distinct are they, and can they distinguish the real from the fake?'"

Farid explained that the new tool will let media companies upload videos to be run through an analysis; his team will then assess whether each video is real. But there are concerns that putting this kind of technology in the hands of the media doesn't address the rapid spread of these videos on social media.
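The broad idea behind the approach can be sketched in code. This is only an illustration, not Farid's actual system: it assumes each video has already been reduced to a vector of 190 measurements of facial expression and head movement, and it flags a clip whose measurements drift too far from a reference profile built from authentic footage of the same person. The measurement names, threshold, and distance metric here are all hypothetical.

```python
import math

NUM_MEASUREMENTS = 190  # hypothetical: expression/head-movement features per video


def profile_distance(reference, candidate):
    """Euclidean distance between two measurement vectors."""
    return math.sqrt(sum((r - c) ** 2 for r, c in zip(reference, candidate)))


def looks_fake(reference, candidate, threshold=1.0):
    """Flag a clip whose measurements drift past the threshold.

    The threshold is illustrative; a real system would calibrate it
    against known authentic and manipulated footage.
    """
    return profile_distance(reference, candidate) > threshold


# Toy data: an authentic reference profile, a clip that matches it,
# and a clip whose measurements have been distorted.
real_profile = [0.5] * NUM_MEASUREMENTS
authentic_clip = [0.5] * NUM_MEASUREMENTS
altered_clip = [0.9] * NUM_MEASUREMENTS

print(looks_fake(real_profile, authentic_clip))  # False
print(looks_fake(real_profile, altered_clip))    # True
```

The key design point, as Farid describes it, is that the measurements are distinctive to a person, so a manipulated video tends to stand out against genuine footage of the same speaker.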

"Of course it's a drop in the bucket," Farid said. "The solution is not just technology. The solution is technology, good reporting, better digital citizens, better companies, better policies."
 
An altered video of House Speaker Nancy Pelosi that falsely showed her slurring her words has been viewed more than 3 million times on social media with no clear way to stop it.
 
"The Speaker Pelosi video was not funny. It was meant to impugn her and create chaos within the party … and it worked," Farid said.
 
In a statement to CBS News, Facebook said, "Leading up to 2020 we know that combating misinformation is one of the most important things we can do. We continue to look at how we can improve our approach and the systems we've built."
 
At a House Intelligence Committee meeting last week, federal officials expressed concern that there's no easy fix and worry that the older generation is most at risk.

"The audience I'm most worried about is actually not young people on social media. It's the older generation who's come to this technology late," said Clint Watts, A senior fellow at the Center for Cyber and Homeland Security at George Washington University and a Foreign Policy Research Institute fellow.  

"Imagine the implications for fraud," Farid said. "Imagine how this can disrupt our Democratic elections. Imagine how it can incite violence."
