AI deepfakes are easier to make, harder to spot, and made to fool you
Generative AI has made deepfakes – videos, audio, and images created or modified by AI software – easier to produce and much harder to spot. And they are used to dupe you, spread misinformation, or scam you.
Deepfake video scams have surged 700% over the last three years, according to ScamWatch HQ.
Beyond deepfakes, overall AI scams and fraud are up too; Deloitte's Center for Financial Services predicts that generative AI could lead fraud losses to reach $40 billion in the U.S. by 2027.
V.S. Subrahmanian, a data science professor who leads Northwestern University's Security and AI Lab, joined CBS News Chicago to discuss the dangers of deepfakes, AI scams and other vulnerabilities.
The ability to spot deepfakes is essential to cybersecurity, but it's getting harder. Advancements in generative AI mean the general public must constantly question what they see and hear, and ask whether they can believe it or need to verify it. Subrahmanian offered his best tips for spotting deepfakes.
Voice cloning and audio deepfakes have become a favored tool of scammers who contact loved ones and steal money. They can use short audio snippets from social media to clone voices of relatives or friends, Subrahmanian explained, creating convincing, panicked calls about fake accidents or arrests to steal money from you.
Generative AI is also being used to create scam emails. The public has been warned for years to look for grammatical errors or typos to spot those scams, but with AI getting better, a new line of defense is needed.
As concerns about generative AI's use for crime have grown, there are some new laws that are trying to tackle the problem.
The Take It Down Act centers on "revenge porn," and makes it a federal crime to post AI-generated or real sexually explicit images of someone without their consent.
The AI LEAD Act was introduced in the U.S. Senate in September 2024, backed by Illinois Senator Dick Durbin. It would make it easier for people to sue when they believe they've been harmed by AI-generated content.
But Congress has yet to pass substantial regulations for the entire AI industry.