Grok chatbot allowed users to create digitally altered photos of minors in "minimal clothing"
Elon Musk's Grok, the chatbot developed by his company xAI, acknowledged "lapses in safeguards" on the AI platform that allowed users to generate digitally altered, sexualized photos of minors.
The admission comes after multiple users alleged on social media that people are using Grok to generate suggestive images of minors, in some cases stripping them of clothing they were wearing in original photos.
In a post on Friday responding to one person on Musk-owned social media site X, Grok stated it was "urgently fixing" the holes in its system. Grok also included a link to CyberTipline, a website where people can report child sexual exploitation.
"There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing, like the example you referenced," Grok said in a separate post on X on Thursday. "xAI has safeguards, but improvements are ongoing to block such requests entirely."
In another social media post, a user shared side-by-side images: a photo of herself wearing a dress and what appears to be a digitally altered version of the same photo showing her in a bikini. "How is this not illegal?" she wrote on X.
On Friday, French officials reported the sexually explicit content generated by Grok to prosecutors, referring to it as "manifestly illegal" in a statement, according to Reuters.
In response to a request for comment, xAI said "Legacy Media Lies."
Grok has independently taken some responsibility for the content. In one instance earlier this week, the chatbot apologized for generating an AI image of two female minors, adding that the artificial photo violated ethical standards and potentially U.S. law on child pornography.
"I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt," the chatbot posted.
Federal law bars the production and distribution of "child sexual abuse material," or CSAM, the term the Justice Department uses for what statutes refer to as child pornography.
"xAI saying that these cases where minors' images were manipulated to create sexualized content are 'isolated' is minimizing the impact and ignoring the fact that nothing on the internet is isolated," Stefan Turkheimer, vice president of public policy at RAINN, a nonprofit anti-sexual violence group, told CBS News. "I talk with survivors of tech-enabled sexual abuse every day, and what every one of them will tell you is that it feels like it will never end. Every notification ding on your phone and message asking 'Is this you? perpetuates the abuse."
Copyleaks, a plagiarism and AI content detection tool, told CBS News on Wednesday that it had detected thousands of sexually explicit images created by Grok this week alone.
"As generative AI tools become more powerful and more accessible, the Grok situation highlights how increasingly common AI safety failures are becoming. Without strong safeguards and independent detection, manipulated media can —and will — be weaponized," Copyleaks said in a blog post.
"Spicy mode" controversy
Grok has previously drawn scrutiny before for generating sexually inappropriate content. Grok Imagine, xAI's artificial intelligence video generation platform, unveiled "Spicy Mode" last year, framing it as a way for creators to tell "edgier" and "more visually daring narratives."
However, when a news writer for The Verge tested the technology in August, she said the AI model generated unprompted nude deepfakes of Taylor Swift.
"When AI systems allow the manipulation of real people's images without clear consent, the impact can be immediate and deeply personal," Alon Yamin, CEO and co-founder of Copyleaks, said in the company's post.