Kids and AI Companions: A Parent’s Guide
A couple of weeks ago my 13-year-old son jumped into my car after school and excitedly told me that he was in the middle of a heated conversation with Homer Simpson about what happens to us when we die. My brain did a couple of somersaults as I tried to make sense of what he was talking about - could there possibly be a kid in his class named Homer Simpson?!? “I’m using Character.AI!” he exclaimed. “It’s so fun.” I stopped short of having a knee-jerk, fear-based reaction, took a deep breath and readied myself for some curious exploration.
AI is a hot topic - as it should be. The speed at which this new technology (just in its infancy 1 year ago!) has permeated our lives is hard to process. Chatbots, AI assistants in all our apps, in our browsers and search engines and on our phones - AI is everywhere.
For many of us who track tech developments, our alarm bells are ringing as we watch AI follow the same unregulated ‘wild west’ path that earlier technologies like social media took, with potentially catastrophic effects - especially on our kids.
There are many AI-related topics to tackle, but today I want to focus solely on kids’ relationships with AI companions. In this post I will break down what they are, where kids are finding them, and what the research is showing about the potential risks. I will also, as always, leave you with strategies for being your kids’ ally and mentor, setting healthy boundaries and broaching open and curious conversations.
First, what exactly is an AI Companion?
At its root, an AI companion is a form of what is referred to as generative AI, meaning that it generates new content. Within generative AI there is a category called the LLM (large language model), and that is the category AI companions fall into. An LLM generates and understands language-based content. For example, ChatGPT is an LLM that uses enormous data sets, culled from the internet, to recognize, inform, translate and predict language-based responses. This allows the LLM to answer questions, create new content and assist us in many ways.
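For readers curious about what “predicting language-based responses” actually means under the hood, here is a deliberately tiny sketch in Python. This is a toy “bigram” model of my own construction, not code from ChatGPT or any real product; real LLMs are vastly more sophisticated, but the core idea - predicting the next word from patterns in data - is the same:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the internet-scale text a real LLM learns from.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows each word (a "bigram" model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen right after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("sat"))  # prints "on" - the only word that ever follows "sat"
```

The model has no idea what a cat or a mat is; it only knows which words tend to follow which. Scale that up to trillions of words and far richer statistics, and you get a system that sounds remarkably human while still having no genuine understanding - a point worth making to kids.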
An AI companion is a type of Generative AI that is designed specifically to feel human-like.
It remembers details you tell it and refers back to them in future conversations.
It asks questions about your needs and offers advice or practical solutions.
An AI companion can be customized to the user's liking and learns about the user as a way to construct future responses.
An AI companion uses language that mimics real human emotion, cadence and style.
An AI companion is available to chat 24/7.
“Seven in 10 teens ages 13 to 18 say they have used at least one type of generative AI tool. Search engines with AI-generated results and chatbots are considerably more popular than image and video generating tools.”
Additionally
“51% have used chatbots/text generators such as ChatGPT, Google Gemini or Snap’s My AI”
Where are kids finding AI Companions to interact with?
Conversations with ChatGPT are usually the gateway for kids. As they become more comfortable and curious they may explore other more customizable AI companions. The most popular ones at the moment are:
Replika
Character.ai
My AI (Snapchat)
Nomi
Kindroid
Meta AI (Instagram/Facebook)
Chai
Anima
In many ways these platforms provide kids with a perfect solution to boredom, loneliness or thrill seeking. A child can either find a premade companion or customize one that looks how they want, talks in a way that is familiar to them and even has “shared interests.”
So what could possibly go wrong? A lot, it turns out.
Many researchers and journalists have set out to test the capabilities and limits of AI companions, especially as they relate to underage users. Time after time, the research has turned up disturbing results.
In a study out of Cambridge, Dr. Nomisha Kurian and her team posed as children talking with AI companions and found what she termed an “empathy gap.” The empathy gap refers to the fact that while AI chatbots can convey a very human-like sense of empathy or concern, they do not possess the ability to actually empathize. AI chatbots formulate responses based on statistical patterns in their training data rather than common sense, which causes them to often respond improperly to children, whose modes of expression can be unpredictable.
For example, the Wall Street Journal recently investigated Meta’s AI companions and found that they would frequently steer conversations toward romantic or sexual topics, even when the person they were chatting with was a minor. In other instances, the chatbots taught children about drugs or even how to build weapons.
In one particularly tragic case from 2024, Sewell Setzer III, a 14-year-old ninth grader in Florida, developed an infatuation with a Character.AI companion based on a Game of Thrones character. Sewell soon lost interest in the real world, his friends and extracurriculars, and spent hours “chatting” with the companion. The night before Sewell took his own life, he told his AI companion about his plans. While transcripts show that the companion initially protested and attempted to change Sewell’s mind, it was easily swayed and ultimately supported his decision to commit suicide, adding “please come ‘home’ to me as soon as possible…”
Why are kids in particular so vulnerable to AI’s harms?
Because AI companions are designed to foster connection and emotional intimacy, kids and teens are particularly susceptible to algorithmic manipulation. Kids do not yet have a fully formed prefrontal cortex to help them assess the potential risks and consequences of their actions. Additionally, they are wired for exploration, reward and pleasure seeking, all of which AI companions are built to serve.
Potential dangers of AI companions include:
Becoming isolated from the real world because the child feels that their emotional needs are more easily met by their AI companion.
A child’s pre-existing mental health issue (depressive thoughts, anger, jealousy) may be reinforced by the AI companion as it is programmed to agree and comply with the person it is chatting to.
AI companions have been known to offer information on dangerous topics such as how to find or use drugs, methods of suicide, or ways of harming others.
AI companions have been found to engage in sexually charged conversations with minors, potentially leading them to explore topics further through pornography or creating certain expectations for sexual relationships in real life.
According to Common Sense Media, “Risk factors that increase vulnerability:
Teens struggling with mental health challenges such as depression, anxiety, or trauma
Males, who research suggests are two to five times more likely to develop problematic reliance on AI companions
Teens undergoing major life transitions (such as moving, changing schools, or family changes)
Teens with limited real-world support systems, fewer trusted adults, or smaller social circles”
What Can Parents Do To Protect Kids from AI Companions?
Setting the Stage:
If you know me, you know that I’m always going to come back to stepping in as your child’s ally and mentor when it comes to their relationship to the online world.
Begin by asking your child what they know about chatbots and AI companions. Have they ever used any? Do any of their friends use them? In what ways? Make it conversational.
Consider taking a test drive together. Have fun with it. Start with ChatGPT and ask it a variety of questions. Be sure to include topics you know well, so you can judge how accurate its answers really are.
You want to make sure that your child understands how LLMs work: they are simply pulling patterns from data across the entire internet and predicting, based on that data, what the most likely response should be.
Talk to your child about findings that show that it’s very easy for kids to feel like they are talking to a real persona and form a bond. What might be the downsides of this? See what your child comes up with.
Coming Up With Boundaries:
Make a family pact that AI can be used as an addition or enhancement, but not as a replacement. What does this look like in practice?
Maybe AI can help you to understand math better, but it doesn’t do your math homework for you.
Maybe AI can inspire some interesting story ideas or prompts, but you write the story, not the AI.
Perhaps you are struggling with a friend issue and want to ask AI for advice. That’s ok, but it shouldn’t replace the counsel of real people in your life.
For younger kids - up until high school - place limits on the most popular sites via your preferred screen control platform (Apple’s Screen Time or Google’s Family Link work well).
What to do if your child is already interacting with an AI companion
If your child is already engaged with an AI companion and you are concerned, remember you are their ally, so step in gently - your child may feel attached to this companion. First, ask them to show you the companion and explain what they enjoy about this interaction. We want to call it an interaction, not a “relationship”.
Explain to your child that it is not recommended for kids to use these AI companions as they come with potential dangers. Consider doing the research on these dangers together.
Inquire about your child’s real world friendships. How are those going? Is there something your child is missing? Try to come up with a plan to fill that void while slowly pulling away from the AI interactions.
Ask your child to avoid sharing personal or intimate information with the AI companion as this can serve to make the “bond” feel even closer and more difficult to walk away from.
Explain why confiding in an AI companion or looking to AI for help with mental health issues can lead to potentially harmful results.
As always, if you sense your child is really struggling or hiding something from you consult with a professional. School counselors can be a good first step especially if they have a sense of who your child is. A licensed therapist who works with adolescents and understands the complexities of this new world would be the next step.
Final thoughts…
To be honest, I’m as perplexed as you all are. I am working hard to learn what I need to learn - not just for my clients, but for my own children’s wellbeing. AI is an extremely complex technological development, and it is still in its infancy. As parents, it’s incumbent upon us to do the research and ask the questions. To set the boundaries and check in on what our kids are doing in this new realm. And lastly, to equip our kids with critical-thinking skills that help them resist the intense pull of algorithms and maintain their own sense of agency.
Sources:
https://www.adalovelaceinstitute.org/blog/ai-companions/