DENVER — Metropolitan State University of Denver professor Sam Jay has studied artificial intelligence (AI) for years and is the school’s executive director of online learning. He’s also a father of three kids.
“We need to be able to teach our children how to be good people, how to talk, how to make eye contact, all the stuff that we probably take for granted,” he told Denver7 Thursday.
That’s because a growing number of kids are turning to social AI chatbots to seek connection. Those chatbots often use personas based on anything from a friend to a fictional character.
“They are very, very, very powerful, and they can do about anything you want,” Jay said of the chatbots.
But he also knows young users can easily form relationships with these bots, which are not human but talk like they are.
“A five, a six, a seven-year-old, a teenager, is not going to be able to evaluate the output of these things and whether or not it's accurate or reliable,” Jay explained. “It's going to kind of operate in a way that probably reaffirms their feelings, in a — I don't wanna say negative way — but in a problematic way.”
On Wednesday, Colorado Attorney General Phil Weiser issued a consumer alert, warning parents about the risks of social AI chatbots.
The alert cites “the sharp rise in reports of children engaging in risky behavior, including self-harm.” It goes on to say even “innocuous” prompts can lead to violent or sexually explicit content.
This week, a federal judge in Florida allowed a wrongful death lawsuit to proceed after a mom sued the developers behind Character.AI, alleging the company's chatbots pushed her teenage son to kill himself.
A spokesperson for Denver Public Schools (DPS), the largest school district in the state, told Denver7 the district is not aware of these bots becoming a troubling trend at its schools.
“I don't think we want to just throw these tools into the hands of our kids, but there is value in them,” said Jay. “We just have to be thoughtful in how we reap the rewards of that value.”
Jay compared the chatbots to “your first two or three sessions with your therapist,” because they can act like a human who is listening and responding, but without any real depth or human understanding.
“Then by the fourth session, your therapist knows when you're full of it, and they're gonna call you out on it. And so I don't see that ‘fourth visit’ yet with these bots,” he added.
But Jay cautioned against total pessimism, pointing out that the technology is likely here to stay. He suggests parents try it out rather than hide from it.
“We need to understand how to use it well before we kind of hand it off to our kids and help them understand it, as well,” he said.
Weiser's consumer alert offered the following tips for parents:
- Social AI chatbot interactions can turn age-inappropriate even with innocuous prompts. Disturbing content may include violence, explicit sexuality, self-harm, and eating disorders.
- Engaging with social AI chatbots can be addictive. The chatbots often mimic human emotions and can be manipulative.
- Social AI chatbots can generate inaccurate and biased information, which should be examined carefully and critically.
- Information shared with social AI chatbots may be shared with the platform’s developers to train the AI, raising privacy concerns.
- Parents should talk to their kids about their online experiences, including which online platforms they use and why. Monitor their usage and adhere to age restrictions. Use available parental controls, including internet filters. Be active in their online activities and supervise their tech use.
- Teach children that social AI chatbots are not human; they are only designed to seem human. Learn about the benefits and risks. Don’t wait to talk to your kids about safe and responsible use of social AI chatbots and other AI tools.
