AI is now part of adolescent life. One in eight teens already uses it for mental health advice. No one designed these tools for that use case. Here is what the data shows, and what should change.
Every number on this site comes from a primary source. No claims are made without citation.
Teens with four or more hours of daily screen time were significantly more likely to report anxiety symptoms (27.1%) and depression symptoms (25.9%) in the preceding two weeks than those with less daily exposure. This data comes from the CDC's National Health Interview Survey, covering July 2021 through December 2023 — over 100,000 teen observations.
AI applications used by minors should surface daily usage time in a clear, non-punitive way. Visibility is the first step to choice. Users who can see their pattern are better equipped to change it — particularly when they are still developing the capacity for self-regulation.
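For teams building these products, here is a minimal sketch of what surfacing usage time could look like. The types and wording are illustrative assumptions, not any vendor's API; the point is that the daily total is computed and shown in neutral language rather than hidden or moralized.

```typescript
// Illustrative sketch only: surface today's usage time, phrased neutrally.
interface Session {
  startedAt: Date;
  endedAt: Date;
}

// Sum time spent today, in minutes, counting only the portion of each session
// that falls after midnight local time.
function minutesUsedToday(sessions: Session[], now: Date = new Date()): number {
  const startOfDay = new Date(now.getFullYear(), now.getMonth(), now.getDate()).getTime();
  return sessions
    .filter((s) => s.endedAt.getTime() >= startOfDay)
    .reduce((total, s) => {
      const start = Math.max(s.startedAt.getTime(), startOfDay);
      return total + (s.endedAt.getTime() - start) / 60_000;
    }, 0);
}

// Phrase the total as visibility, not judgment.
function formatDailyUsage(minutes: number): string {
  const hours = Math.floor(minutes / 60);
  const mins = Math.round(minutes % 60);
  const time = hours > 0 ? `${hours} h ${mins} min` : `${mins} min`;
  return `You've spent ${time} here today.`;
}
```

The design choice that matters is the default: the number is always visible, and the copy never shames.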
A RAND Corporation study published in JAMA Network Open (November 2025) surveyed more than 1,000 young people aged 12 to 21. Of those, 13.1% reported using AI tools for emotional or psychological support; among 18-to-21-year-olds, the figure rose to 22%. Two-thirds engaged monthly or more often. Over 90% found the advice helpful, even though no AI tool has been clinically validated for this use.
AI products should make a consistent, clear distinction between supportive conversation and clinical care. This is not a disclaimer buried in terms of service. It should appear contextually, in plain language, whenever emotional topics arise — and should always include a pathway to professional help.
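Below is one possible shape for that contextual note, sketched with an assumed topic classifier and the US 988 Suicide & Crisis Lifeline as the example pathway to help. It is illustrative only, not any product's actual implementation.

```typescript
// Illustrative sketch only: attach a plain-language support note whenever an
// emotional topic comes up. The topic labels and classifier are assumptions.
type Topic = "emotional_support" | "crisis" | "general";

interface Reply {
  text: string;
  supportNote?: string; // shown alongside the reply, not buried in terms of service
}

const HELP_PATHWAY =
  "I'm not a therapist or a substitute for clinical care. " +
  "If you want to talk to a professional, you can call or text 988 (US) any time.";

function withSupportContext(replyText: string, topic: Topic): Reply {
  // Supportive conversation is fine; presenting it as clinical care is not.
  if (topic === "emotional_support" || topic === "crisis") {
    return { text: replyText, supportNote: HELP_PATHWAY };
  }
  return { text: replyText };
}
```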
The CDC's YRBSS Data Trends Report covering 2013–2023 documents a consistent worsening of adolescent mental health across every major indicator. Persistent sadness: up 10 percentage points over ten years. Suicide consideration: up. Suicide attempts: up. The sharpest deterioration came between 2019 and 2021, precisely when digital and AI-adjacent tool adoption accelerated globally.
Consumer AI products designed for or likely to be used by minors should be required to conduct and publish longitudinal safety research up front, not after the fact. The teen mental health crisis predates AI, but the pace of AI deployment into adolescent life demands that we not repeat the mistakes made with social media. Research first. Scale second.
The CDC's October 2024 analysis of 2023 YRBSS data found that 77% of high school students used social media frequently. Frequent use was associated with higher rates of bullying victimization, persistent sadness, and suicidal ideation — across all demographic groups. The association held even after controlling for sex and sexual identity. Social media is not AI — but it is the clearest prior case study we have.
Social media scaled for a decade before its effects on youth were taken seriously. The data has been available since at least 2017. AI companies have that data now, in advance. Applying age-appropriate design, usage limits, and emotional pattern monitoring proactively — not reactively — is the difference between responsibility and regret.
In November 2025, OpenAI released its Teen Safety Blueprint — a framework for age-appropriate AI design including default U18 safety policies, parental controls, quiet hours, and age estimation tools. The WHO issued three formal recommendations for responsible AI use in mental health in March 2026. These are meaningful first steps. They are not yet industry standards.
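For builders, here is a rough sketch of what "safe by default for under 18" could mean in practice. The field names and values are hypothetical and only loosely echo the blueprint's themes; they are not OpenAI's actual settings.

```typescript
// Hypothetical sketch of under-18 defaults: stricter policy, parental controls,
// quiet hours, and visible usage time applied unless age is confidently adult.
interface TeenSafetyDefaults {
  contentPolicy: "under18";                    // stricter policy applied by default
  parentalControlsAvailable: boolean;
  quietHours: { start: string; end: string };  // local time, e.g. overnight
  dailyUsageVisible: boolean;                  // ties back to the usage-time recommendation above
}

const DEFAULTS_UNDER_18: TeenSafetyDefaults = {
  contentPolicy: "under18",
  parentalControlsAvailable: true,
  quietHours: { start: "22:00", end: "07:00" },
  dailyUsageVisible: true,
};

// Apply teen defaults whenever age is unknown or estimated below 18.
function defaultsFor(estimatedAge: number | null): TeenSafetyDefaults | null {
  if (estimatedAge === null || estimatedAge < 18) return DEFAULTS_UNDER_18;
  return null; // adult accounts keep standard settings
}
```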
A blueprint published by one company is a signal, not a standard. The principles in OpenAI's Teen Safety Blueprint and the WHO's March 2026 recommendations should become the minimum baseline for any AI product used by or likely to reach minors — enforced not only through policy, but through co-design with the communities most affected.
Every number on this page was available to every AI company building products used by young people. The question is not whether we know. The question is whether knowing is enough to act — or whether, like social media before it, the industry will wait for the damage to be undeniable before it changes course.