AI Chatbots as a Crutch: Practical Solutions to Protect Teens

Protecting teens from harmful AI chatbots requires more than just awareness. This article outlines a three-step, multi-level plan to reform AI policy, design, and literacy systems.

Two teenage girls smiling while using a smartphone together.

Series: AI Chatbots and Youth Mental Health—Part 2 of a Two-Part Series 

By Tishe Daini 

Part 1 of this AI and Youth Mental Health Series challenged society’s rose-tinted view of AI chatbots and their role in the lives of young people. 

Through statistical evidence, disturbing anecdotal accounts, and real-world case studies, the piece highlighted critical shortcomings of these AI conversationalists in terms of the safety and well-being of vulnerable teen users. 

Among the most pressing of these dilemmas are the systematic exploitation of users by commercial AI platforms, unhealthy teen-AI attachment formation, and harmful AI sycophancy, alongside several other concerns.

In response, this article proposes a three-step, multi-level plan to mitigate those risks while preserving the genuine benefits of AI chatbot technology:

Infographic listing three steps to protect teens from harmful AI chatbots: a youth-centred framework, standardized interface changes, and national AI literacy programs.

Step 1: Youth-centred AI Fiduciary Framework

To address regulatory gaps in current legislation, such as the EU Artificial Intelligence Act, this proposal introduces a legal mandate that AI platforms and developers owe a fiduciary duty to their young users. Such a mandate would compel stakeholders in the AI industry to prioritize user well-being over profit-driven practices that expose vulnerable users to psychological harm, such as the compulsive use patterns fostered by engagement-maximizing design choices.

In practice, this would require companies to meet a set of concrete, enforceable obligations.

In line with this framework, the risk classification of AI chat systems must be updated to reflect their current potential for harm. And while implementing the mandate will require significant international effort, enforceable fines, transparency obligations, and third-party screenings would together create a strong accountability structure.

If properly implemented, the framework’s success would be measured by reduced rates of AI overdependence among teens and demonstrated compliance among major AI providers.

Screenshot of a Merriam-Webster dictionary entry for “fiduciary,” showing it as an adjective meaning relating to trust or confidence.
Definition of ‘fiduciary.’ Source: Merriam-Webster.

Step 2: Standardized AI Interface Overhaul and Remediation Initiative

Building on the youth-first AI fiduciary mandate above, an industry-wide revamp of AI system interfaces should follow. Under this revamp, non-essential, gamified, or overstimulating AI design elements would be assessed for harm risks by a certified, research-backed, third-party authority (see the University of Sydney’s Rapid Analysis of Character.AI; Blake et al., 2025).

These screenings would then trigger regulatory actions such as escalating fines for non-compliant AI platforms, temporary suspensions for severe breaches, and even operating-license revocations for repeated, unrectified breaches.
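As a rough illustration, the escalation ladder described above can be sketched in code. Everything here is hypothetical: the severity labels, thresholds, and action names are assumptions for illustration, not terms drawn from any actual regulation.

```python
from dataclasses import dataclass, field


@dataclass
class PlatformRecord:
    """Tracks screening outcomes for one AI platform (hypothetical model)."""
    name: str
    breaches: list[str] = field(default_factory=list)


def enforcement_action(record: PlatformRecord, severity: str) -> str:
    """Map a new third-party screening result onto the escalation ladder:
    escalating fines -> temporary suspension -> license revocation."""
    record.breaches.append(severity)
    severe_count = record.breaches.count("severe")
    if severity == "severe" and severe_count >= 3:
        return "revoke operating license"   # repeated, unrectified breaches
    if severity == "severe":
        return "temporary suspension"
    # Fines escalate with every recorded non-compliant screening.
    return f"fine (level {len(record.breaches)})"
```

Under these assumptions, a platform’s first minor breach would draw a level-1 fine, a severe breach a temporary suspension, and a third severe breach revocation.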

Implementation of this step involves several coordinated measures.

While this initiative will certainly require significant funding and collective effort across the AI industry, the potential reduction of current burdens on strained youth mental health services and support systems justifies the scale of the undertaking.

Notably, success would be marked by declining rates of flagged, interface-related harms as major platforms establish independent internal compliance-verification and remediation infrastructure.

Illustration of a compliance reviewer and a teen using a laptop, with shields, warning icons, checklists, and a gavel symbolizing online safety, moderation, and regulation.

Step 3: Nationally Integrated AI Literacy and Risk Awareness Programs

To reduce harmful teen-AI interactions, nations should launch AI literacy and risk-awareness programs, formally integrated at multiple levels of society (community organizations, health care centres, libraries, schools, etc.) and backed by ministries of education and public safety or equivalent local bodies.

Primarily, these programs would provide AI literacy and risk-identification workshops in schools at all levels, along with mandated courses for professionals across diverse fields (educators, healthcare workers, business managers, etc.). Trained volunteers would sustain the initiative, with a paid board of directors overseeing program operations.

Participating nations would then follow a phased implementation plan.

While this project requires extensive volunteer participation and coordination, volunteers can be incentivized through appreciation initiatives, certified training credentials with cross-disciplinary value, and the sense of purpose that comes from communal impact.

With this plan, key success indicators include a rising annual percentage of working professionals and school students certified as “AI Literate” through periodic, publicly available literacy assessments.

Additionally, success will depend on the measured sustainability (i.e., rising volunteer retention and maintained funding) of start-up chapters in high-risk regions.
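The success indicators named above lend themselves to straightforward measurement. The sketch below shows one way a chapter might compute them; the function names and figures are illustrative assumptions, not real program data.

```python
def certification_rate(certified: int, assessed: int) -> float:
    """Share of assessed students/professionals certified as 'AI Literate'."""
    return certified / assessed if assessed else 0.0


def retention_rate(still_active: int, previous_total: int) -> float:
    """Fraction of last year's volunteers who remain active this year."""
    return still_active / previous_total if previous_total else 0.0


# Hypothetical annual figures for one start-up chapter:
print(certification_rate(120, 400))  # 0.3
print(retention_rate(41, 50))        # 0.82
```

A chapter would count as sustainable, on this sketch, when both numbers rise year over year alongside maintained funding.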

Students in a classroom using desktop computers during a computer lab session, with two girls in the foreground.
Photo by Thành Đỗ on Pexels

Together, these three structured interventions (a legally binding fiduciary mandate, industry-wide design reforms, and long-term societal literacy programs) offer practical strategies to tackle youth harm while preserving AI’s benefits.
Ultimately, these proposed solutions remind us that although AI chatbots currently pose real risks to youth populations, successful remedies are at hand. With consistent effort, cooperation, and accountability, we can implement measurable, long-lasting improvements to our coexistence with artificial intelligence. 

If you found our perspective valuable, consider sharing it with your friends, educators, and even policymakers. Additionally, feel free to leave a comment down below if you have any burning questions or opinions regarding this topic. Continued dialogue is the critical first step to creating a safer digital environment for our youth.

Read Part 1: Hidden Youth Mental Health Risks

References and Further Readings

  1. European Parliament & Council. (2024). Regulation (EU) 2024/1689 (Artificial Intelligence Act, Art. 50). Official Journal of the European Union. Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689&qid=1772039982966
  2. Blake, Carter, M., & Velloso, E. (2025). Rapid Analysis: Character.AI and Children (Rapid Analysis). University of Sydney. Retrieved from https://ses.library.usyd.edu.au/bitstream/handle/2123/33844/Rapid%20Analysis_Character%20AI.pdf?sequence=2&isAllowed=y
  3. Pew Research Center. (2025). Teens, Social Media and AI Chatbots 2025. Retrieved from https://www.pewresearch.org/internet/2025/12/09/teens-social-media-and-ai-chatbots-2025/#frequency-of-chatbot-use
  4. Huang, Lai, X., & Li, Y. (2024). AI Technology Panic—Is AI Dependence Bad for Mental Health? A Cross-Lagged Panel Model and the Mediating Roles of Motivations for AI Use Among Adolescents. Psychology Research and Behavior Management, 17, 1087–1102. https://doi.org/10.2147/PRBM.S440889#d1e249

Acknowledgements

Thank you to Yide-Abasi Essien for reviewing the draft and offering helpful edit suggestions.
