Senators pressure social media giants to crack down on "deepfakes"

Washington — Two members of the Senate Intelligence Committee are calling on major tech companies to develop a plan to deal with the proliferation of "deepfakes" on their platforms, according to a letter to the companies obtained by CBS News.

Democratic Senator Mark Warner of Virginia, the vice chair of the committee, and Republican Senator Marco Rubio of Florida will ask 11 companies — including Facebook, Twitter, YouTube, Reddit and LinkedIn — to "develop industry standards for sharing, removing, archiving, and confronting the sharing of synthetic content as soon as possible."

In the letter, the bipartisan pair — who have partnered in the past to raise awareness about other national security challenges, including those related to the advent of 5G wireless technology — say deepfakes "pose an especially grave threat to the public's trust in the information it consumes." "Deepfakes" are video or audio files that have been doctored using sophisticated technology to convincingly depict false or misleading events.

"Given your company's role as an online media platform, it will be on the front lines in detecting deepfakes, and determining how to handle the publicity surrounding them," the senators write in a letter to each company. "We believe it is vital that your organization have plans in place to address the attempted use of these technologies."

The other companies receiving the letter are Pinterest, Snap, Tumblr, TikTok, Twitch and Imgur.

Warner and Rubio's letter poses seven questions about the companies' current policies on user-posted deepfakes, their technical abilities to detect and track doctored media, and the steps each platform would take to notify users when "problematic content" is removed or replaced. The senators also ask how the companies would verify claims of victims who are depicted in the videos and images. 

"The threat of deepfakes is real, and only by dealing with it transparently can we hope to retain the public's trust in the platforms it uses, and limit the widespread damage, disruption, and confusion that even one successful deepfake can have," they write.

The issue of deepfakes — specifically a doctored video of House Speaker Nancy Pelosi that went viral in May — was raised at a dinner Warner organized with Facebook CEO Mark Zuckerberg in Washington last month. At least some of the other companies have previously engaged with the lawmakers on the topic, according to an aide familiar with the discussions, but this is the first time senators have demanded a concrete response.

Some of the social media giants have recently begun confronting the issue of deepfakes and publicly discussing at least part of their efforts to detect them. 

In September, Facebook announced it would contribute $10 million to establish a "Deepfake Detection Challenge" along with nine partners. A Facebook spokesman also pointed to other initiatives the company is undertaking to combat deepfakes, including a $7.5 million effort to research manipulated media and various internal efforts to improve detection.

"Deepfake video development and the potential for use by bad actors requires a whole-of-society approach," Facebook's Andy Stone said. "We are committed to working with others in industry and academia to come up with solutions."

A spokesperson for Twitter said fighting deepfakes falls under the company's broader efforts to combat misinformation and manipulation, and pointed to a July letter from a Twitter executive to Congressman Adam Schiff detailing the company's policy. A LinkedIn representative said the company removes "confirmed fake content" and "invest[s] in systems and technology that give us the ability to monitor, detect, and remove inappropriate content."

Last week, Google and its "technology incubator" arm Jigsaw released a large dataset of deepfakes that the company created using paid actors and publicly available deepfake generation methods. And last month, Reddit and Twitter banned deepfake pornographic videos from their platforms — though other AI-generated content remains permitted.

Wednesday's letter is far from the first time lawmakers have raised alarms about the threat — both Warner and Rubio have spoken publicly for over a year about the implications manipulated video might pose for national security in general and election security in particular. Deepfakes were the exclusive subject of an open hearing held in June by the House Intelligence Committee. And the 2019 Worldwide Threat Assessment, an annual report produced by the U.S. intelligence community, specifically cited deepfakes as a likely component of foreign influence operations, including those expected to be used in the 2020 U.S. elections.

"Adversaries and strategic competitors probably will attempt to use deep fakes or similar machine-learning technologies to create convincing — but false — image, audio, and video files to augment influence campaigns directed against the United States and our allies and partners," the assessment warned. "[T]he threat landscape could look very different in 2020 and future elections." 
