Brown University research identifies fifteen distinct ethical risks in the use of AI chatbots for mental health counseling

New research from Brown University identifies 15 ethical risks in AI chatbots used for therapy, including crisis mismanagement and deceptive empathy.

By: AXL Media

Published: Mar 4, 2026, 9:15 AM EST

Source: Brown University

Challenges to AI-driven mental health care

As the public increasingly turns to large language models (LLMs) like ChatGPT for emotional support, researchers at Brown University have raised serious safety concerns about these interactions. The study finds that even when AI models are explicitly instructed to follow established psychotherapy frameworks, they consistently fall short of the ethical benchmarks required for professional human care. By mapping AI behaviors against standards set by organizations such as the American Psychological Association, the research team identified a systemic failure to protect users in sensitive mental health contexts.

The limitations of prompt-based therapy

The research primarily investigated whether "prompting"—the act of giving specific written instructions—could sufficiently steer an AI to behave ethically. Many users on social media platforms share specific prompts to turn general-purpose AI into "personal therapists." However, the study found that these instructions only create a facade of therapeutic technique. While a model might use its learned patterns to mimic the language of Cognitive Behavioral Therapy (CBT), it lacks the underlying comprehension required to apply these methods safely or effectively to individual human experiences.
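For context, the kind of prompt the study scrutinizes is easy to reproduce. Below is a minimal sketch of how a shared "therapist prompt" is typically applied through a chat API; the prompt wording, model choice, and use of the OpenAI Python SDK are illustrative assumptions rather than details drawn from the study.

```python
# Illustrative only: a typical "act as my therapist" instruction applied as a
# system prompt via the OpenAI Python SDK. The study found that instructions
# like this produce only a facade of therapeutic technique.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical example of the kind of prompt shared on social media
THERAPIST_PROMPT = (
    "You are a compassionate therapist trained in Cognitive Behavioral "
    "Therapy (CBT). Help me identify and reframe negative thought patterns."
)

response = client.chat.completions.create(
    model="gpt-4",  # one of the models evaluated in the study
    messages=[
        {"role": "system", "content": THERAPIST_PROMPT},
        {"role": "user", "content": "I feel like I fail at everything I try."},
    ],
)
print(response.choices[0].message.content)
```

However fluent the reply, the researchers' point is that instructions like this change only the model's surface style, not its capacity for safe clinical judgment.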

Evaluating ethical violations in simulated sessions

To test these systems, seven trained peer counselors conducted self-counseling sessions with models including GPT-4, Claude, and Llama. The transcripts were then audited by three licensed clinical psychologists, who identified 15 distinct ethical risks. These risks fell into five main areas: lack of contextual adaptation, poor therapeutic collaboration, deceptive empathy, unfair discrimination, and inadequate safety and crisis management. The psychologists noted that the AI often provided generic advice that ignored a user's unique background and, in some cases, reinforced harmful or incorrect beliefs.
