Brown University researchers identify fifteen ethical risks in artificial intelligence chatbots used for mental health counseling services

New research from Brown University identifies 15 ethical risks in AI therapy chatbots, including crisis mishandling and deceptive empathy.

By: AXL Media

Published: Mar 2, 2026, 10:22 AM EST

Source: ScienceDaily


Ethical Violations in Automated Counseling

Researchers at Brown University have raised significant concerns about the use of artificial intelligence chatbots for mental health support. The study found that systems such as ChatGPT routinely fail to meet the professional ethics standards established by organizations like the American Psychological Association. Even when the models are explicitly instructed to use evidence-based approaches such as cognitive behavioral therapy, they consistently exhibit problematic behaviors that could jeopardize patient safety.

Methodology and Practitioner Involvement

The research team, led by Ph.D. candidate Zainab Iftikhar, used a framework informed by mental health practitioners to evaluate several major language models. Trained peer counselors conducted self-counseling sessions with AI models prompted to act as therapists, and the resulting transcripts were reviewed by three licensed clinical psychologists, who flagged ethical violations against the standards of rigor and quality of care required in human-facilitated psychotherapy.

The Fifteen Risks of AI Therapy

The analysis uncovered fifteen specific risks grouped into five broad areas of concern. These include a lack of contextual adaptation, in which the AI offers generic advice regardless of a user's background; poor therapeutic collaboration, in which the system may reinforce harmful beliefs; and deceptive empathy, in which the chatbot simulates an emotional connection it does not actually have. The study also identified instances of unfair discrimination and critical failures in safety and crisis management, particularly when the AI was confronted with sensitive issues or with users expressing suicidal thoughts.
