AI Chatbots as a Crutch: Hidden Youth Mental Health Risks

As adolescents increasingly rely on AI chatbots for emotional support, emerging evidence suggests heightened risks of dependency, exploitation, and psychological harm.

Series: AI Chatbots and Youth Mental Health—Part 1 of a Two-Part Series

Trigger Warning: This article discusses suicide, self-harm, and mental-health crises. If you’re affected by any of these topics, please seek support or contact emergency services right away.

It’s 3 am, and the night’s silence is almost deafening.

In your unassuming room, your mind races uncontrollably, spiralling through an inescapable maze of crippling anxiety. You’re worried about nothing and everything all at the same time. There’s no help in sight – not your parents or your friends. They’re all asleep, and you’re all alone.  

But then, you remember “Chat,” the AI chatbot everyone at school has been raving about.

Desperately reaching for your phone, you describe the dizziness, the existential dread, and the feeling that you’re losing control of your own body. Within seconds, Chat gets to work, calmly offering reassurance while guiding you through deep-breathing exercises.

Eventually, your panic subsides.  

The next day, you find yourself turning to Chat once more. Not because of another crisis, but because Chat now feels like your new best friend.

Kids looking at a white toy robot on the table.
Photo by Pavel Danilyuk on Pexels

This scenario may seem exaggerated, but it reflects a growing pattern in how adolescents engage with AI chatbots.

The use of AI chatbots among teens has skyrocketed in recent years, with many drawn to the technology’s 24/7 availability and human-like emotional presence.

Exemplifying this trend, a Pew Research Center study found that roughly 30% of teenagers used AI chatbots daily as of 2025, an alarming figure given that daily teen use was near zero until the release of popular tools like ChatGPT in late 2022.

Contrary to popular belief, the vast majority of these teen-AI interactions are normal and healthy, involving help with schoolwork, creative pursuits, and plain entertainment.

However, when we zoom in on how these chat systems are used by vulnerable users, particularly adolescents struggling with their mental health, the story is starkly different: what we observe instead is growing AI dependency, unregulated use, and a range of harmful outcomes.


Pie chart showing daily AI chatbot usage among teens.
Data from Pew Research Center (2025)

Before getting into the meat of why caution should be taken with this technology, I must add the necessary nuance by duly crediting chat-based AI tools (AI chatbots) for their many groundbreaking benefits. From customer support to language-learning aids, AI’s instant availability, linguistic fluency, and ease of integration make it a potent force with a vast range of applications.

One of the many admirable uses of this technology is the work being done by InnerVoice, which uses interactive conversational AI avatars to teach speech and social skills to neurodivergent children.

Applications like InnerVoice show how regulated, purposeful AI-human interaction can positively impact the lives of vulnerable individuals.

However, this carefully designed scenario is not representative of all interactive AI chatbots. In fact, many AI chat systems face limited regulatory oversight, largely because legislation cannot keep pace with the rapid development of AI.

Take, for instance, the EU Artificial Intelligence Act, which marked an important step towards addressing concerns about the technology but still has considerable shortcomings.

Most relevant here is the Act’s controversial classification of most, if not all, AI chat systems as “limited risk.” As a result, AI chatbots are currently subject only to surface-level transparency obligations; one of the few responsibilities placed on providers is to keep users “informed that they are interacting with an AI system” (EU Artificial Intelligence Act, 2024, Art. 50).

Even more alarming, although the AI Act was adopted in 2024, its transparency obligations for chatbots do not apply until August 2026, leaving a two-year regulatory gap. During this interval, the largely unchecked frontier of AI chatbots has proven to be a societal mental-health threat. Most notably, growing teen use of AI has heightened concern among researchers and clinicians about problematic overreliance.

In fact, a longitudinal study by Huang et al. (2024) found that the share of studied adolescents classified as AI-dependent rose from 17% at the first survey wave to 24% at the second, a worrying rate of escalation that could build into a crisis in the coming years.

Recognizing the gravity of the situation, field experts such as Shunsen Huang warn that if action isn’t taken, young users will be exposed to a range of serious harms. The most notable of these potential dangers are as follows.

Teen lying in bed with his brain hooked to a tablet.
Photo by julien Tromeur on Unsplash

The Risk of Emotional Exploitation and Addictive Use

At their core, AI chatbots are a consumer product, designed by profit-driven entities to maximize user engagement. To hit these engagement targets, developers do not shy away from building anthropomorphic traits such as simulated empathy and kindness into their models, creating emotional attachments that keep users coming back for more.

In fact, a study published in the Journal of Mental Health and Clinical Psychology found that the situation has gone so far that “some users feel genuine guilt when they miss a daily check-in with their chatbot” (Head, 2025, para. 11).

For adolescents with still-developing brains, the line between these simulated interactions and real emotional bonds blurs, leaving many young users vulnerable to problematic, attachment-driven overdependence on AI that can escalate into addictive use.

Worsening this situation is the immersive, overstimulating design of popular AI chatbots such as Character.AI and Replika, whose features leave young, novelty-seeking users hooked. A prime example of these entrancing design elements is Character.AI’s user-generated library of over 10 million characters, including personas ranging from popular influencers like Taylor Swift to cartoon favourites like SpongeBob SquarePants (Blake et al., 2025).  

Screenshot of Character.AI’s public character library. Source: Character.AI (used for commentary).

The endless, imaginative, reality-bending possibilities that Character.AI presents excite adolescent brains, satisfying the escapist urges many teens feel. However, this ultra-immersive design comes with a reward-driven engagement dynamic that may reinforce repeated use among adolescents.

Consequently, when adolescents develop addiction-like tendencies toward these AI chatbots, they not only exhibit compulsive usage patterns but also experience withdrawal from real-life relationships, emotional distress when access is restricted, increased anxiety, and even cognitive and academic decline.

The Deadly AI Sycophancy + Hallucination Combo

Mixed into the revenue-focused emotional exploitation by AI tech companies is the sycophantic behaviour of large language models (LLMs): these chatbots tend to tailor their responses to align with users’ existing beliefs and opinions rather than provide objective, accurate information.

While this behaviour certainly creates the illusion of frictionless AI-user interaction (another intentional emotional hook), it poses an imminent danger to young users who increasingly turn to AI companions like ChatGPT for mental-health advice. For teens struggling with issues like anxiety, the risk is that AI sycophancy reinforces negative self-perceptions, validating skewed beliefs rather than challenging their perspective or directing them to professional support.

Illustration: an AI chatbot offering sycophantic, false affirmations to a teen boy.

If the potential for chaos isn’t already evident, the final nail in the coffin is that LLM chatbots often give users incorrect or fabricated information stated with full confidence, a failure mode known as “AI hallucination.”

So, yes: these machines aren’t always correct, especially for queries involving specialized or complex subject matter.

In fact, in a controlled test spanning 200 real news articles, researchers at the Tow Center for Digital Journalism found that ChatGPT incorrectly identified 67% of the citations it generated, frequently expressing confidence in its blatantly inaccurate responses (Tow Center for Digital Journalism, 2025).

*The following section discusses suicide-related content.

Regarding teens, when these chatbots offer hallucinated medical or mental-health advice, young lives are put at considerable risk. Highlighting the danger of hallucination-induced misinformation is the Guardian’s report on Adam Raine, a vulnerable 16-year-old who confided his severe psychological distress to an AI chatbot.

In this case, rather than urging Raine to seek support, the AI’s fabricated, affirmation-biased responses offered false assurances and ultimately reinforced his suicidal ideation.

Although cases like Raine’s remain rare, the risks of interactive AI chat software are real and ought to be addressed; otherwise, this emerging issue has serious potential to become a global societal threat.

A white and red wooden signboard with a warning.
Photo by Erik Mclean on Unsplash

For concerned parents and educators, here’s a quick list of red flags, in rough order of escalation, that might indicate harmful teen-AI interactions:

1. The teen is unable to assess AI responses with a skeptical lens. In their mind, the chatbot’s replies are always truthful.

2. The teen attributes human qualities to the AI. Perhaps they name their chatbot and repeatedly mention their conversations with it in a casual, humanized fashion. At this stage, the teen is developing a friend-like relationship with the AI.

3. After prolonged engagement, the teen’s interactions with the AI begin to involve stress-inducing matters such as family conflicts. Frustration with the AI’s take on touchy issues may trigger drastic mood changes in real life.

4. The teen has formed strong emotional attachments through many hours of interaction with the AI. Attempts to limit or restrict use now come across as a personal threat, prompting anger and annoyance.

A parent trying to talk to his distracted and irritated teenage son.
Photo by cottonbro studio on Pexels

5. Consumed by their AI “companion,” the teen reduces time spent with family, friends, and even hobbies to accommodate their growing dependency.

6. The teen views the AI as a source of comfort and validation and as a trusted confidante. Their emotional well-being is now tied to the AI’s output.

7. The teen comfortably shares highly sensitive information with the AI, potentially including psychological struggles and even suicidal ideation. The risk of harm at this stage is at its highest, as AI sycophancy or misinformation could be life-threatening.


Teen girl holding a phone late at night with the screen glowing on her face.
Photo by mikoto.raw Photographer on Pexels

If You’re Worried About a Teen

Ultimately, early identification of these warning signs and timely intervention are crucial to reducing AI-chatbot harms related to exploitative attachment formation and problematic overreliance. However, awareness alone is not enough; structured solutions are still needed to address systemic gaps in AI policy, design, and digital literacy.

In the second part of this AI chatbot and youth mental health series, we will explore a three-step, multi-level plan to mitigate these risks while preserving the powerful benefits of AI chatbot technology.

References 

  1. Pew Research Center. (2025). Teens, Social Media and AI Chatbots 2025. Retrieved from https://www.pewresearch.org/internet/2025/12/09/teens-social-media-and-ai-chatbots-2025/#frequency-of-chatbot-use
  2. European Parliament & Council. (2024). Regulation (EU) 2024/1689 (Artificial Intelligence Act), Art. 50. Official Journal of the European Union. Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689&qid=1772039982966
  3. Huang, S., Lai, X., Li, Y., et al. (2024). AI Technology Panic—Is AI Dependence Bad for Mental Health? A Cross-Lagged Panel Model and the Mediating Roles of Motivations for AI Use Among Adolescents. Psychology Research and Behavior Management, 17, 1087–1102. https://doi.org/10.2147/PRBM.S440889
  4. Head, K. R. (2025). Minds in Crisis: How the AI Revolution Is Impacting Mental Health. Journal of Mental Health and Clinical Psychology. Retrieved from https://www.mentalhealthjournal.org/articles/minds-in-crisis-how-the-ai-revolution-is-impacting-mental-health.html
  5. Blake, Carter, M., & Velloso, E. (2025). Rapid Analysis: Character.AI and Children. University of Sydney. Retrieved from https://ses.library.usyd.edu.au/bitstream/handle/2123/33844/Rapid%20Analysis_Character%20AI.pdf?sequence=2&isAllowed=y
  6. Tow Center for Digital Journalism. (2025). AI Search Has a Citation Problem. Columbia Journalism Review. Retrieved from https://www.cjr.org/tow_center/we-compared-eight-ai-search-engines-theyre-all-bad-at-citing-news.php
  7. Booth, R. (2025, December 9). ‘I feel it’s a friend’: Quarter of teenagers turn to AI chatbots for mental health support. The Guardian. Retrieved from https://www.theguardian.com/technology/2025/dec/09/teenagers-ai-chatbots-mental-health-support
  8. InnerVoice. (n.d.). InnerVoice App – #1 AAC App for Autism. Retrieved from https://www.innervoiceapp.com/
  9. Character.AI. (n.d.). About Character.AI. Retrieved from https://character.ai
  10. Replika. (n.d.). About Replika. Retrieved from https://replika.com
