Series: AI Chatbots and Youth Mental Health—Part 2 of a Two-Part Series
By Tishe Daini
The first installment of this series challenged society’s rose-tinted view of AI chatbots and their role in the lives of young people.
Through statistical evidence, disturbing anecdotal accounts, and real-world case studies, the piece highlighted critical shortcomings of these AI conversationalists in terms of the safety and well-being of vulnerable teen users.
Among the most pressing of these concerns are the systematic exploitation of users by commercial AI platforms, the formation of unhealthy teen-AI attachments, and harmful AI sycophancy.
In turn, we propose a three-step, multi-level plan to mitigate these risks while realizing the full benefits of AI chatbot technology:

Step 1: Youth-centred AI Fiduciary Framework
To close regulatory gaps in current legislation, such as the EU Artificial Intelligence Act, this proposal introduces a legal mandate that platforms and developers owe a fiduciary duty to their young users. Such a mandate would force AI industry stakeholders to prioritize user well-being over profit-driven practices that expose vulnerable users to psychological harm — consider, for instance, the compulsive use patterns that engagement-driven design choices can produce.
In practice, this would require companies to:
- Perform independent youth risk analyses of their AI systems before releasing their products.
- Institute automatic intervention and escalation procedures for high-risk user queries.
- Submit to periodic audits by an independent AI safety oversight authority.
In line with this framework, the risk classification of AI chat systems must also be updated to reflect their current potential for harm. And while implementing the mandate will require significant international effort, enforceable fines, key transparency obligations, and third-party screenings would create a strong accountability structure.
Success would be measured by reduced rates of AI overdependency and observed compliance among major AI providers.

Step 2: Standardized AI Interface Overhaul and Remediation Initiative
Building on the youth-first fiduciary mandate above, an industry-wide revamp of AI system interfaces must be enacted. This revamp would require non-essential, gamified, or overstimulating AI design elements to be assessed for harm risks by a certified, research-backed, third-party authority (see the University of Sydney’s rapid analysis of Character.AI; Blake et al., 2025).
These screenings would then trigger regulatory action: escalating fines for non-compliant AI platforms, temporary suspensions for severe breaches, and even operating license revocations for repeated, unrectified breaches.
Implementation of this step involves:
- The formation of AI user protection bodies (funded by all commercial AI providers, governments, and relevant sponsors), guided by the fiduciary mandate, spread across participating countries with active support from local law enforcement authorities.
- Periodic, third-party AI interface/design screenings and strict remediation requirements for AI platforms that are flagged by AI user protection entities.
- Continued adolescent-focused research on digital dependency patterns, parasocial attachment formation, and vulnerability to persuasive AI interface setups.
While this initiative certainly requires significant funding and collective effort from the AI industry, the potential to ease the burden on strained youth mental health services and support systems justifies such a large-scale undertaking.
Notably, success would be achieved when rates of flagged interface-related harms decline as major platforms establish independent, internal user-interface compliance verification and remediation infrastructures.

Step 3: Nationally Integrated AI Literacy and Risk Awareness Programs
To reduce harmful teen-AI interactions, we propose launching national AI literacy and risk awareness programs, formally integrated at various levels of society (community organizations, health care centres, libraries, schools, etc.) and backed by ministries of education and public safety, or equivalent local bodies.
Primarily, these programs would provide AI literacy and risk-identification workshops in schools at all levels, as well as similar mandated courses for professionals across diverse fields (educators, healthcare workers, business managers, etc.). Trained volunteers would sustain the initiative, with a paid board of directors overseeing program operations.
Implementation steps for participating nations:
- Cooperation between federal and provincial/state governments to establish start-up chapters of these programs in multiple high-risk locations.
- Government departments responsible for labour markets and social programs must build a sufficiently trained volunteer workforce with a recognized certification pipeline to aid the establishment of regional program chapters.
- The development of structured program funding supported by public grants, private sponsors, and partner organizations, which would be incentivized by the chance to align their brands with shared communal values of youth safety and well-being.
- Collaboration between the previously referenced AI user protection bodies and program stakeholders to hire independent AI risk specialists and academic evaluators who can track the programs’ success metrics.
While this project requires extensive volunteer participation and coordination, people will be incentivized to get involved through volunteer appreciation initiatives, certified training credentials with cross-disciplinary applicability, and opportunities to find a sense of purpose through communal impact.
With this plan, key success indicators include a rising annual percentage of working professionals and school students certified as “AI Literate” through periodic, publicly available literacy assessments.
Success will also depend on the measured sustainability (i.e., increasing volunteer retention and maintained funding) of start-up chapters in high-risk regions.

Coordinated, these three structured interventions (a legally binding fiduciary mandate, industry-wide design reforms, and long-term societal literacy programs) offer practical strategies to tackle youth harm while preserving AI’s benefits.
Ultimately, these proposed solutions remind us that although AI chatbots currently pose real risks to youth populations, successful remedies are at hand. With consistent effort, cooperation, and accountability, we can implement measurable, long-lasting improvements to our coexistence with artificial intelligence.
If you found our perspective valuable, consider sharing it with your friends, educators, and even policymakers. Additionally, feel free to leave a comment down below if you have any burning questions or opinions regarding this topic. Continued dialogue is the critical first step to creating a safer digital environment for our youth.
References and Further Reading
- European Parliament & Council. (2024). Regulation (EU) 2024/1689 (Artificial Intelligence Act), Art. 50. Official Journal of the European Union. Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689
- Blake, Carter, M., & Velloso, E. (2025). Rapid analysis: Character.AI and children. University of Sydney. Retrieved from https://ses.library.usyd.edu.au/bitstream/handle/2123/33844/Rapid%20Analysis_Character%20AI.pdf?sequence=2&isAllowed=y
- Pew Research Center. (2025). Teens, Social Media and AI Chatbots 2025. Retrieved from https://www.pewresearch.org/internet/2025/12/09/teens-social-media-and-ai-chatbots-2025/#frequency-of-chatbot-use
- Huang, Lai, X., & Li, Y. (2024). AI technology panic: Is AI dependence bad for mental health? A cross-lagged panel model and the mediating roles of motivations for AI use among adolescents. Psychology Research and Behavior Management, 17, 1087–1102. https://doi.org/10.2147/PRBM.S440889
Acknowledgements
Thank you to Yide-Abasi Essien for reviewing the draft and offering helpful edit suggestions.