PALO ALTO – A recent Stanford University study has raised significant concerns about the use of AI-powered therapy chatbots in mental health care. The research indicates that these chatbots may be less effective than human therapists and could also reinforce harmful stigma and respond inappropriately to users in distress.
The study evaluated five widely used therapy chatbots powered by large language models (LLMs), assessing how well their responses aligned with professional mental health standards. The findings revealed that these systems often exhibited bias, showing greater stigma toward conditions such as schizophrenia and alcohol dependence than toward others, such as depression. The chatbots also responded inadequately to users expressing suicidal thoughts or delusional beliefs, at times reinforcing harmful ideas rather than offering appropriate support.
Researchers emphasized that while LLMs have the potential to assist in mental health care, their current design and application may pose risks. They highlighted the chatbots' lack of empathy and understanding, and their inability to establish a therapeutic alliance, all critical components of effective therapy, as significant limitations.
The study calls for stricter oversight and ethical design in the development of AI tools for mental health care, cautioning against deploying them as replacements for human therapists. Experts suggest that AI can play a supportive role but should not supplant the nuanced, compassionate care that trained mental health professionals provide.
This story has been reported by PakTribune. All rights reserved.