Five Signals — Five Responses

What The
Data Says
About Kids
& AI.

AI is now part of adolescent life. 1 in 8 teens already uses it for mental health advice. No one designed these tools for that use case. Here is what the data shows — and what should change.

Every number on this site comes from a primary source. No claims are made without citation.

CDC · RAND · JAMA · OpenAI · WHO · 2023–2026
40%
of U.S. high school students reported persistent sadness or hopelessness in 2023 — up from 30% in 2013.
CDC YRBSS 2023
1 in 8
U.S. adolescents and young adults now use AI chatbots for emotional or psychological support.
RAND / JAMA, Nov 2025
20%
of U.S. high school students seriously considered attempting suicide in 2023. Nearly 1 in 10 attempted.
CDC YRBSS 2023
30%
higher odds of moderate depression among adults who use AI chatbots daily, per a 20,000-person U.S. study.
Harvard Kennedy School, Jan 2026
Signals — Data & Response
01
Screen Time

Screen Time & Symptoms

Teens with four or more hours of daily screen time were significantly more likely to report anxiety symptoms (27.1%) and depression symptoms (25.9%) in the preceding two weeks than those with less daily exposure. This data comes from the CDC's National Health Interview Survey, covering July 2021 through December 2023 — over 100,000 teen observations.

CDC National Health Interview Survey — Teen, Oct 2024
Response

Build Screen-Time Awareness Into Consumer AI Products

AI applications used by minors should surface daily usage time in a clear, non-punitive way. Visibility is the first step to choice. Users who can see their pattern are better equipped to change it — particularly when they are still developing the capacity for self-regulation.
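One way the principle above could surface in product code, as a minimal sketch. Every name and field here is illustrative, not any vendor's API; the point is that the copy reports usage as neutral information rather than a judgment.

```python
from dataclasses import dataclass

@dataclass
class UsageSummary:
    """Hypothetical daily-usage record; field names are illustrative."""
    minutes_today: int
    seven_day_avg: int

def usage_message(summary: UsageSummary) -> str:
    """Render usage time as neutral information, not a verdict."""
    hours = summary.minutes_today / 60
    trend = "above" if summary.minutes_today > summary.seven_day_avg else "at or below"
    return (f"You've spent {hours:.1f} hours here today, "
            f"{trend} your 7-day average.")
```

A non-punitive framing like this states the pattern and stops, leaving the choice to the user.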

02
AI as Therapist

Teens Are Using AI as a Therapist

A RAND Corporation study published in JAMA Network Open (November 2025) surveyed more than 1,000 young people aged 12–21. Of those, 13.1% reported using AI tools for emotional or psychological support; among 18–21-year-olds, the figure rose to 22%. Two-thirds engaged monthly or more, and over 90% found the advice helpful, despite no AI tool being clinically validated for this use.

RAND Corporation / JAMA Network Open, Nov 2025
Response

Distinguish Support from Care — Every Time

AI products should make a consistent, clear distinction between supportive conversation and clinical care. This is not a disclaimer buried in terms of service. It should appear contextually, in plain language, whenever emotional topics arise — and should always include a pathway to professional help.
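As a sketch of how that contextual distinction might be wired in. A simple keyword trigger stands in here for whatever real classifier a product would use, and the cue list and helper names are assumptions for illustration; the 988 Lifeline is a real U.S. resource.

```python
# Illustrative cue list; a production system would use a trained classifier,
# not keyword matching.
EMOTIONAL_CUES = {"hopeless", "anxious", "depressed", "self-harm", "suicide"}

SUPPORT_NOTICE = (
    "I can listen and offer support, but I'm not a clinician and this "
    "isn't therapy. If you'd like professional help, in the U.S. you can "
    "reach the 988 Suicide & Crisis Lifeline by calling or texting 988."
)

def maybe_add_notice(user_message: str, reply: str) -> str:
    """Append the support-vs-care distinction whenever emotional topics arise."""
    lowered = user_message.lower()
    if any(cue in lowered for cue in EMOTIONAL_CUES):
        return f"{reply}\n\n{SUPPORT_NOTICE}"
    return reply
```

The notice rides along with the reply at the moment the topic arises, rather than living in terms of service no one reads.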

03
Sadness Trend

A Decade-Long Decline

The CDC's YRBSS Data Trends Report covering 2013–2023 documents a consistent worsening of adolescent mental health across every major indicator. Persistent sadness: +10 percentage points over ten years. Suicide consideration: up. Suicide attempts: up. The steepest deterioration occurred between 2019 and 2021, a period that spanned both the COVID-19 pandemic and a global acceleration in digital and AI-adjacent tool adoption.

CDC YRBSS Trends Report 2013–2023, Aug 2024
Response

Mandate Longitudinal Safety Research Before Youth Deployment

Consumer AI products designed for, or likely to be used by, minors should be required to conduct and publish longitudinal safety research before deployment, not after. The teen mental health crisis predates AI, but the pace at which AI is entering adolescent life demands that we not repeat the mistakes made with social media. Research first. Scale second.

04
Social Media

Frequent Social Media Use & Risk

The CDC's October 2024 analysis of 2023 YRBSS data found that 77% of high school students used social media frequently. Frequent use was associated with higher rates of bullying victimization, persistent sadness, and suicidal ideation — across all demographic groups. The association held even after controlling for sex and sexual identity. Social media is not AI — but it is the clearest prior case study we have.

CDC MMWR / YRBSS Analysis, Oct 2024
Response

Apply the Social Media Lesson — Before It's Too Late

Social media scaled for a decade before its effects on youth were taken seriously. The data has been available since at least 2017. AI companies have that data now, in advance. Applying age-appropriate design, usage limits, and emotional pattern monitoring proactively — not reactively — is the difference between responsibility and regret.

05
Industry Response

The Industry Is Starting to Move

In November 2025, OpenAI released its Teen Safety Blueprint — a framework for age-appropriate AI design including default U18 safety policies, parental controls, quiet hours, and age estimation tools. The WHO issued three formal recommendations for responsible AI use in mental health in March 2026. These are meaningful first steps. They are not yet industry standards.

OpenAI Teen Safety Blueprint, Nov 2025 · WHO, Mar 2026
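Some of the Blueprint's mechanisms are simple enough to gate in a few lines. This sketch shows an overnight quiet-hours check; the default window is an assumption for illustration, not OpenAI's published setting.

```python
from datetime import time

def in_quiet_hours(now: time,
                   start: time = time(22, 0),   # assumed window, not a published default
                   end: time = time(7, 0)) -> bool:
    """True if `now` falls inside a quiet-hours window, including one
    that wraps past midnight (e.g. 22:00 to 07:00)."""
    if start <= end:
        return start <= now < end
    # Overnight window: in quiet hours if after start OR before end.
    return now >= start or now < end
```

The hard part is not the clock math but the defaults: a teen account should ship with the window on, not buried behind opt-in settings.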
Response

Voluntary Frameworks Must Become Baseline Expectations

A blueprint published by one company is a signal, not a standard. The principles in OpenAI's Teen Safety Blueprint and the WHO's March 2026 recommendations should become the minimum baseline for any AI product used by or likely to reach minors — enforced not only through policy, but through co-design with the communities most affected.

The Data
Exists.
The Design
Doesn't.

Every number on this page was available to every AI company building products used by young people. The question is not whether we know. The question is whether knowing is enough to act — or whether, like social media before it, the industry will wait for the damage to be undeniable before it changes course.