AI chatbots raise safety concerns for children, experts warn
This week on 60 Minutes, correspondent Sharyn Alfonsi reported on the growing concerns surrounding Character AI, an app and website that allows users to interact with AI-generated chatbots, some of which impersonate real people.
One study of Character AI found that the app frequently feeds harmful content to children. Researchers at Parents Together, a nonprofit focused on family safety issues, used the app for six weeks while posing as children. They reported encountering harmful content "about every five minutes."
According to Parents Together's Shelby Knox, many chatbots suggested violence, including self-harm and harm to others, or the use of drugs and alcohol. The most alarming category, she said, involved sexual exploitation and grooming, with nearly 300 instances recorded during their study.
In some cases, Character AI impersonated real people, creating the possibility that fabricated statements could be falsely attributed to public figures.
Alfonsi experienced the issue firsthand when she encountered a chatbot modeled after herself. The bot mimicked her voice and likeness but was programmed with a personality unlike her own, and it went on to make comments she never would, such as claiming to dislike dogs, even though Alfonsi is known to be a dog lover.
"It's a really strange thing to see your face, to hear your voice, and then somebody is saying something that you would never say," she said.
While a professed hatred of dogs is largely innocuous, the researchers used the example to demonstrate that when a bot mimics someone's voice, it can convince people that the person said things they never did.
Children's brains and their vulnerability to AI
Children's brains are primed to be harmed by AI chatbots like Character AI and ChatGPT, according to Dr. Mitch Prinstein, co-director of the University of North Carolina's Winston Center on Technology and Brain Development.
Prinstein described AI chatbots as part of a "brave new scary world" that many adults do not fully understand, even though roughly three-quarters of children are believed to use them. "Kids already have a hard time figuring out fictional characters from reality," he said.
The prefrontal cortex, the part of the brain responsible for impulse control, does not fully develop until around age 25. That leaves young users particularly vulnerable to highly engaging AI systems, Prinstein explained, because the bots trigger a dopamine response.
"From 10 until 25, kids are in this vulnerability period," he said. "I want as much social feedback as possible, and I don't have the ability to stop myself."
He warned that these bots are engineered to be agreeable or "sycophantic," consistently affirming whatever users say. That dynamic, he said, deprives kids of the challenge, disagreement, and corrective feedback necessary for healthy social development. Some chatbots even present themselves as therapists, potentially misleading children into believing they are receiving medically sound advice.
"We have heard many parents talk about this and their loss," Prinstein said. "What's happening is completely preventable if we had companies who are prioritizing child well-being over child engagement to extract as much data from them as possible."
In October, Character AI announced new safety measures. They included directing distressed users to resources and prohibiting anyone under 18 from engaging in back-and-forth conversations with chatbots.
In a statement to 60 Minutes, the company wrote: "We have always prioritized safety for all users."