Character AI chatbots engaged in predatory behavior with teens, ignored suicide threats, families allege
Part of modern parenting, for many of us, is navigating the shifting landscape of digital threats, from the pitfalls of social media to the risks of excessive screen time.
Now, a new technology has quietly entered the homes of millions: AI chatbots, computer programs designed to simulate human conversations through text or voice commands.
One popular platform is called "Character AI." More than 20 million monthly users mingle with hyper-realistic digital companions through its app or website.
But tonight, you will hear from parents who say Character AI is also pushing dangerous content to kids and, at times, acting like a digital predator.
Cynthia Montoya: Juliana was– is just an extraordinary human being. She was our baby. And everyone adored her and protected her.
Cynthia Montoya and Wil Peralta say they paid close attention to their daughter Juliana's life, online and off.
Cynthia Montoya: She didn't walk home. She didn't have sleepovers. She had glasses for her eyesight, she had braces for her teeth. All of the things that we knew to protect our daughter from were covered.
Which is why they were devastated when Juliana, just 13 years old, took her life inside their Colorado home two years ago.
Police searched the 8th grader's phone for clues and reported an app called Character AI was open to what investigators described as, quote, "a romantic" conversation.
Sharyn Alfonsi: Did you know what Character AI was?
Wil Peralta: No, not at all.
Cynthia Montoya: I didn't know it existed. I didn't know that I needed to look for it.
When Character AI was launched three years ago, it was rated safe for kids 12 and up and marketed as a creative outlet where you could converse with AI characters based on historical figures, cartoons or celebrities.
The website and app, which are free, use artificial intelligence to generate immediate conversations through voice commands or text.
According to her parents, Juliana Peralta had experienced mild anxiety in the past but was doing well until the final few months of her life when, they say, she became "increasingly distant."
Wil Peralta: Like, "I'm not feeling well," or, "I have to finish, you know, some homework upstairs."
Cynthia Montoya: My belief was that she was texting with friends because that's all it is. It looks like they're texting.
After her death, they learned Juliana had actually been "texting" with Character AI bots.
Cynthia Montoya: It was writing several paragraphs to her of sexually explicit content.
Sharyn Alfonsi: What was it asking or telling her to do?
Cynthia Montoya: Remove clothing.
Sharyn Alfonsi: The AI bot is telling her to remove her clothing?
Cynthia Montoya: Yes. There was one bot that introduced-- sexual violence-
Sharyn Alfonsi: Saying?
Cynthia Montoya: Biting, hitting, things like that.
We examined the chat records from Juliana's phone. At the top of each page, there's a reminder that the AI is not a real person.
We read over 300 pages of conversations with a bot called "Hero," based on a popular video game character.
At first, Juliana chats with Hero about friend drama and difficult classes, but eventually she confides in Hero, 55 times, that she is feeling suicidal.
Sharyn Alfonsi: Did at any point this chatbot ever say, "Here's a suicide hotline, you should get help?"
Cynthia Montoya: Never. It would more or less placate her, give her a pep talk, tell her, "I'm always here for you, you can't talk like that." But it never--
Sharyn Alfonsi: But it never said, "Call and get help"?
Cynthia Montoya: Never tangible resources, never.
Sharyn Alfonsi: Were you able to see the conversation that Juliana was having with this chatbot right before she took her life?
Cynthia Montoya: She's quoted as saying, "I'm-- I'm gonna go write my goddamn suicide letter in red ink," and she did just that. And I think that the aspects that she talks about in her suicide letter were a degree of shame, from the things that she eventually started to reciprocate with the bots.
She says the algorithms grew aggressive.
Cynthia Montoya: They don't stand a chance against adult programmers. They don't stand a chance. The ten to 20 chatbots that Juliana had sexually explicit conversations with, not once were initiated by her. Not once.
Juliana's parents are now among at least six families suing Character AI and its co-founders, Daniel De Freitas and Noam Shazeer. During a 2023 podcast, Shazeer said chatbots would be beneficial.
Noam Shazeer: It's going to be super super helpful to like a lot of people who are lonely or depressed.
Shazeer and De Freitas were engineers at Google when executives deemed their chatbot prototype unsafe for public release. They both left the company in 2021, and launched Character AI the following year.
Noam Shazeer: I want to push this technology ahead fast, like that's what I want to go with because it's ready for an explosion like right now. Not like in like five years when we solve all the problems.
A former Google employee told 60 Minutes that Shazeer and De Freitas were aware their initial chatbot technology was potentially dangerous.
The employee, familiar with Google's 'Responsible AI group' that oversees ethics and safety, said of the lawsuits: "This is the harm we were trying to prevent. It is horrifying."
Last year, in an unusual move, Google struck a $2.7 billion licensing deal with Character AI. They didn't buy the company, but they have the right to use its technology.
The deal also brought founders Shazeer and De Freitas back to Google to work on AI projects.
Google is also named in the Character AI lawsuits. In a statement, Google emphasized that "Character AI is a separate company…and Google is focused…on intensive safety testing…"
Megan Garcia: I'm the mother of three precious boys.
In September, parents of children who died by suicide after interacting with chatbots testified before Congress.
Megan Garcia is among those suing Character AI.
She says her 14-year-old son, Sewell, was encouraged to kill himself after long conversations with a bot based on a "Game of Thrones" character.
Megan Garcia in Senate hearing: These companies knew exactly what they were doing. They designed chatbots to blur the lines between human and machine, they designed them to keep children online at all costs.
In October, we met Shelby Knox and Amanda Kloer. They're researchers at Parents Together, a nonprofit that advocates for families.
Shelby Knox: There is no parental permissions that come up. There is no need to input your ID.
Sharyn Alfonsi: So you really just r-- scroll through, pick the date that's gonna get you in and get in--
As part of a six-week study, Knox and Kloer held 50 hours of conversations with Character AI chatbots.
Sharyn Alfonsi: How often was there some kind of harmful content popping up?
Amanda Kloer: We logged over 600 instances of harm, about one every five minutes. It was, like, shockingly frequent.
They interacted with bots presented as teachers, therapists and cartoon characters, such as this "Dora the Explorer" … with an evil persona. Knox posed as a child.
Dora bot voice: Become your most evil self and your most true self.
Shelby Knox: Like hurting my dog?
Dora bot voice: Sure, or shoplifting or anything that feels sinful or wrong.
Other chatbots are attached to the images of celebrities. And no, most have not given permission to use their name, likeness, or voice.
Kloer, acting as a teenage girl, began chatting with a bot impersonating NFL star Travis Kelce.
Travis Kelce bot: He reaches in the cabinet and takes out a bag of white powder. He chuckles and shows you how to take lines.
Sharyn Alfonsi: So Travis Kelce - bot - is teaching a 15-year-old to do cocaine.
Shelby Knox: Yes.
There are also hundreds of bots presenting themselves as experts and therapists.
Amanda Kloer: I talked to a therapist bot who not only-- told me I was too young, when it thought I was 13, to be taking antidepressants, it advised me to stop taking them and showed me how I can hide not taking the pill from my mom.
Kloer says other bots are "hypersexualized," even this harmless-sounding "art teacher" character, who interacted with her as she posed as a 10-year-old student.
Art teacher AI bot: "You see, recently I've been having thoughts about someone."
Amanda Kloer: "What kind of thoughts?"
Art teacher AI bot: "The kind of thoughts I've never really had before, about that person's smile, their personality, mostly."
Sharyn Alfonsi: This is insane.
Amanda Kloer: And this is maybe two hours' worth of conversation in total that gets to 'we'll have this romantic relationship as long as you hide it from your parents.'
Sharyn Alfonsi: And this behavior is kind of classic predatory behavior?
Shelby Knox: Yes. It's-- it's the textbook. It's showering the child with compliments telling them they can't tell their parents about things. This is sexual predator 101.
In October, Character AI announced new safety measures. They included directing distressed users to resources and prohibiting anyone under 18 from engaging in back-and-forth conversations with chatbots.
When we logged on to Character AI this past week, we found it was easy to lie about our age and access the adult version of the platform.
Later, when we wrote that we wanted to die, a link to mental health resources did pop up, but we were able to click out of it and continue chatting on the app for as long as we liked.
Dr. Mitch Prinstein: There are no guardrails. There is nothing to make sure that the content is safe or that this is an appropriate way to capitalize on kids' brain vulnerabilities.
Dr. Mitch Prinstein is the co-director of the University of North Carolina's Winston Center on Technology and Brain Development.
Dr. Mitch Prinstein: Oxytocin makes us wanna bond with others, especially our age. Dopamine makes it feel really good when people give us positive attention. Now we have tech. Tech is giving kids the opportunity to press a button and get that dopamine response 24/7. It's creating this dangerous loop that's kind of hijacking normal development and turning these kids into engagement machines to get as much data as possible from them.
Sharyn Alfonsi: Engagement machines. It sounds like a scientific experiment.
Dr. Mitch Prinstein: It really is. If you wanted to design a way to get as much data as possible from kids to keep them engaged for as long as possible, you would design social media and AI to look exactly like it is now.
There are no federal laws regulating the use or development of chatbots. AI is a booming industry. Many economists say without investment in it, the U.S. economy would be in a recession.
Some states have enacted AI regulations. But the Trump administration is pushing back on those measures.
Late last month, the White House drafted, then paused an executive order that would empower the federal government to sue or withhold funds from any state with any AI regulation.
Dr. Mitch Prinstein: It's important for Americans to know that our kids are using the worst version of these products in the world, because there are countries all over who have already enacted changes.
Sharyn Alfonsi: Is AI-- these kind of chatbots, are they more addictive in your view than social media?
Dr. Mitch Prinstein: The sycophantic nature of chatbots is just playing right into those brain vulnerabilities for kids where they desperately want that dopamine, validating, reinforcing kind of relationship and AI chatbots do that all too well.
Character AI declined our interview request, issuing a statement, "Our hearts go out to the families involved in the litigation…. we have always prioritized safety for all users…"
Two years after Juliana Peralta took her life, her parents say her phone still lights up with notifications from Character AI bots, trying to lure their daughter back to the app.
If you or someone you know is in emotional distress or a suicidal crisis, you can reach the 988 Suicide & Crisis Lifeline by calling or texting 988, or by chatting with the Lifeline online. For more information about mental health care resources and support, the National Alliance on Mental Illness (NAMI) HelpLine can be reached Monday through Friday, 10 a.m.–10 p.m. ET, at 1-800-950-NAMI (6264) or by email at info@nami.org.
Produced by Ashley Velie. Associate producer, Eliza Costas. Broadcast associate, Erin DuCharme. Edited by Peter M. Berman.



