AI videos of child sexual abuse surged to record highs in 2025, new report finds
Artificial intelligence tools are fueling the creation of online child sexual abuse material, according to a new study documenting a surge in photo-realistic AI imagery depicting the content known as CSAM.
Analysts from the U.K.-based Internet Watch Foundation (IWF) detected a record 3,440 AI videos of child sexual abuse last year, up from just 13 videos the year prior, a 26,362% increase. Of the AI videos they tracked, over half met the description of what the IWF refers to as "category A," a classification that covers the most graphic imagery, including torture.
The IWF warns that AI technology can harm children, whose likenesses can be exploited by bad actors. The rapidly developing tools can also enable people with minimal technical knowledge to make harmful videos at scale, the internet watchdog group said.
"Analysts believe offenders are using the technology in greater numbers as the sophistication of AI video tools improves," the report says.
The AI videos are part of a larger pool of child sexual abuse material that IWF identified and removed last year. The organization said it responded to over 300,000 reports in 2025 that included CSAM.
U.S. federal law bars the production and distribution of CSAM, which the Justice Department has described as a more accurate term for what statutes call child pornography.
The report comes amid a backlash against Grok, an AI chatbot developed by Elon Musk's company xAI, after it allowed users to generate sexually explicit images of women and minors. In a December analysis, Copyleaks, a plagiarism and AI content-detection tool, estimated the chatbot was creating "roughly one nonconsensual sexualized image per minute."
The chatbot's output prompted responses from multiple stakeholders, including the European Union, which said it is monitoring the steps X is taking to prevent Grok from creating inappropriate image content. On Wednesday, California Attorney General Rob Bonta announced that he was opening an investigation into xAI and Grok.
Following the criticism, xAI said in a safety update posted Thursday on X that it was enacting measures to prevent users from using Grok to create images of people in minimal clothing.

