Update: After the publication of this story, Venmo and LinkedIn sent cease and desist letters to Clearview AI and Facebook provided more information about its communication with the company. This story has been updated to reflect those changes.
Google, YouTube, Venmo and LinkedIn have sent cease-and-desist letters to Clearview AI, a facial recognition app that scrapes images from websites and social media platforms, CBS News has learned. The tech companies join Twitter, which sent a similar letter in January, in trying to block the app from taking pictures from their platforms.
Clearview AI can identify a person by comparing their picture to its database of three billion images scraped from the internet, with results that are 99.6% accurate, CEO Hoan Ton-That told CBS News correspondent Errol Barnett. The app is available only to law enforcement, for use in identifying criminals, Ton-That said.
"You have to remember that this is only used for investigations after the fact. This is not a 24/7 surveillance system," he said.
But YouTube, which is owned by Google, as well as Venmo, LinkedIn and Twitter, say the company is violating their policies.
"YouTube's Terms of Service explicitly forbid collecting data that can be used to identify a person. Clearview has publicly admitted to doing exactly that, and in response we sent them a cease and desist letter," YouTube Spokesperson Alex Joseph said in a statement to CBS News.
Venmo and LinkedIn provided CBS News with similar statements. "Scraping Venmo is a violation of our terms of service and we actively work to limit and block activity that violates these policies," Venmo said. "We are sending a cease & desist letter to Clearview AI," LinkedIn said. "The scraping of member information is not allowed under our terms of service and we take action to protect our members."
In addition to demanding that Clearview AI stop scraping its content, Twitter demanded that the app delete all data already collected from the platform, according to an excerpt of the cease-and-desist letter given to CBS News.
"Defending and respecting the voices of the people who use our service is one of our core values at Twitter, and we remain committed to protecting their privacy," a Twitter spokesperson said in a statement to CBS News.
CBS News has also learned Facebook sent Clearview multiple letters clarifying Facebook's policies, requesting detailed information about Clearview's practices, and demanding that the company stop using data from Facebook's products. Although Facebook continues to evaluate its options, it has not sent a formal cease-and-desist letter.
Ton-That argued that Clearview AI has a First Amendment right to access public data. "The way we have built our system is to only take publicly available information and index it that way," he said.
The company's legal counsel was already dealing with the letter from Twitter when the additional letters arrived, he said.
More than 600 law enforcement agencies in the U.S. use the software, according to Clearview. The company would not say how many are free trial subscriptions.
The Chicago Police Department, one of the largest in the U.S., pays roughly $50,000 for a two-year contract with the company. The department said only 30 members have exclusive access to the technology and it does not use the facial matching technology to conduct live surveillance.
"The CPD uses a facial matching tool to sort through its mugshot database and public source information in the course of an investigation triggered by an incident or crime," it said in a statement to CBS News.
New Jersey Attorney General Gurbir Grewal recently ordered state law enforcement agencies to temporarily stop using the technology until they learn more.
Asked what his concerns about the technology were, Grewal said, "I'm not categorically opposed to facial recognition technology. I think used properly, it can help us solve criminal cases more quickly. It can help us apprehend child abusers, domestic terrorists."
But he added, "What I am opposed to is the wide-scale collection of biometric information and the use of it without proper safeguards by law enforcement."
Ton-That also argued that Clearview AI is essentially a search engine for faces. "Google can pull in information from all different websites," he said. "So if it's public and it's out there and could be inside Google search engine, it can be inside ours as well."
But Joseph called that comparison "inaccurate."
"Most websites want to be included in Google Search, and we give webmasters control over what information from their site is included in our search results, including the option to opt out entirely. Clearview secretly collected image data of individuals without their consent, and in violation of rules explicitly forbidding them from doing so," he said in the statement to CBS News.
The technology used by the app won't be made available to the general public as long as he's running Clearview, Ton-That said, but Wired Editor-in-Chief Nick Thompson said that might not be so simple.
"Clearview says you may be worried about our technology, but it's just used by police departments to catch terrorists and keep you safe," Thompson said. "But if we know anything from the history of technology and the history of Silicon Valley, it's that the initial intended use is not the only use."
Clearview AI keeps all images even if they are taken down from the websites they came from, but The New York Times reports that the company is working on a tool that would let people request to have those images removed.
One of our producers took part in the demo, and no matches came up in Clearview's database; the producer's social media accounts are set to private. "There's no false positives here. It didn't bring up someone who looks like you," Ton-That said. A CBS News crew member also tested the app, and a search result came up from his personal website, even though Ton-That did not have his name ahead of time.
Additional reporting by Gisela Perez and Hilary Cook.