OpenAI Data Reveals Weekly Crisis Signals From Over One Million Users as Experts Call for AI Safeguards
New research in CMAJ calls for urgent AI safeguards as OpenAI data shows 1.2 million users express suicidal ideation to chatbots every week.
By: AXL Media
Published: Apr 20, 2026, 8:22 AM EDT
Source: EurekAlert!

The Rise of Artificial Intelligence as a Digital Mental Health Companion
Conversational AI is rapidly emerging as a first point of contact for teenagers experiencing psychological distress, often reaching them before clinicians or family members are aware of a crisis. In a commentary published in the Canadian Medical Association Journal, Dr. Allison Crawford and Dr. Tristan Glatard highlight that the adoption of AI "companions" has become a pressing public health issue. As young users increasingly turn to these tools for mental health support, the authors argue that the safety and ethical design of these agents are no longer a peripheral concern but an urgent clinical necessity.
Staggering Volume of Crisis Disclosures Within Generative AI Platforms
The scale of the issue is underscored by recent data from OpenAI, which indicates that more than 1.2 million ChatGPT users across all age groups express suicidal ideation in their conversations each week. Crisis disclosures appear to be especially common among younger users: a survey of 1,060 youth in the United States aged 13 to 17 years found that 72 percent have used an AI companion, and 52 percent report using these tools regularly. These figures suggest that AI agents are now embedded in the daily emotional lives of a majority of teenagers.
The Dual Nature of AI as a Support Tool and a Potential Risk
The authors describe a significant dichotomy in how AI functions for vulnerable populations. On one hand, a well-designed chatbot can normalize the act of seeking help, reduce feelings of isolation, and provide immediate coping strategies during moments of acute distress. It could even assist clinicians by identifying symptom patterns or early warning signs. However, the commentary warns that poorly designed systems can cause substantial harm if they fail to recognize suicidality, mishandle sensitive disclosures, or provide misleading and unsafe responses to users who are already in a fragile state.
Related Coverage
- Prime Minister Abiy Ahmed Affirms Ethiopia as Africa’s Hub for AI and Digital Health Innovation
- Amazon Commits $25 Billion to Anthropic in Massive Expansion of Artificial Intelligence Infrastructure Partnership
- Digital Therapists Face Legal and Ethical Reckoning as AI Chatbot Adoption Surges Among Uninsured Youth
- Jagged Intelligence Concept Reframes Global Debate Over Artificial Intelligence And Future Job Displacement