AI matches human evaluators in mental health text screening

Researchers at UW Medicine have found that algorithms are as effective as trained human evaluators at identifying red-flag language in text messages sent by individuals with serious mental illness. The finding opens a promising line of research that could help address shortages in psychiatric training and care.
The findings were published in the journal Psychiatric Services at the end of September.

Remote psychiatric interactions such as text messaging may lack the emotional reference points that therapists use to navigate in-person conversations with patients.

The research team from the Department of Psychiatry and Behavioral Sciences used natural language processing for the first time to help detect text messages containing “cognitive distortions” that might be missed by an untrained or overburdened clinician. Ultimately, the research could also help more patients find care.

Justin Tauscher, the paper’s lead author and an acting assistant professor at the University of Washington School of Medicine, explained that in-person interactions carry context that text messages do not: visual signals and audio cues that clinicians are trained to rely on. The hope is that technology can offer clinicians a supplement to the information they need to make clinical decisions.

The study analysed tens of thousands of spontaneous text messages exchanged between 39 individuals with serious mental illness and a history of hospitalisation and their mental health practitioners. Human assessors rated the texts for a variety of cognitive distortions, as they would in a typical patient care setting. The evaluators searched for oblique or explicit language showing that the patient was overgeneralizing, catastrophizing, or jumping to conclusions, all of which can be indicators of a problem.

The researchers also trained computers to perform the same rating task, and found that humans and AI performed comparably in most of the areas examined.
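The paper does not spell out its modelling details in this article, but the core idea — training a model on human-labelled messages so it can flag distortion-like language in new ones — can be illustrated with a minimal sketch. The toy Naive Bayes text classifier below is purely hypothetical: the example messages, labels, and method are assumptions for illustration, not the study’s actual system.

```python
# Illustrative sketch only: a toy bag-of-words Naive Bayes classifier that
# flags messages resembling "catastrophizing" language. The training
# examples and labels below are invented for demonstration.
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs.
    Returns per-label word counts and label frequencies."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the label with the highest log Naive Bayes score,
    using add-one smoothing over the shared vocabulary."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)  # prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in tokenize(text):
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical labelled messages, standing in for human-rated texts.
examples = [
    ("everything is ruined nothing will ever work out", "catastrophizing"),
    ("this always happens to me it is hopeless", "catastrophizing"),
    ("i had a good session today", "neutral"),
    ("see you at the appointment tomorrow", "neutral"),
]
wc, lc = train(examples)
print(classify("it is hopeless nothing will work out", wc, lc))
# → catastrophizing
```

A production system would of course use far richer features and models, and would be validated against clinician ratings as the study describes; the sketch only shows the supervised-labelling workflow in miniature.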

Tauscher, who came to research after a decade in clinical practice, believes that systems supporting clinical decision-making could be highly relevant and impactful for clinicians who lack access to training or supervision, or who are tired, overworked, and burned out and have trouble staying present in all their interactions.

Assisting clinicians would be an obvious advantage, but the researchers also envision future uses that work in tandem with a wearable fitness band or a mobile phone monitoring system. Dror Ben-Zeev, head of the UW Behavioral Research in Technology and Engineering (BRiTE) Center and co-author of the paper, said the technology could one day provide real-time feedback alerting a therapist to impending difficulties.

In the same way that you receive a blood-oxygen level, a heart rate, and other inputs, said Ben-Zeev, we may receive a message indicating that the patient is overreacting and catastrophizing. We foresee a future in which we can simply call attention to a thought pattern, and people will have feedback loops with their technologies that give them self-awareness.


Journal Reference

Justin S. Tauscher et al., “Automated Detection of Cognitive Distortions in Text Exchanges Between Clinicians and People With Serious Mental Illness,” Psychiatric Services (2022). DOI: 10.1176/
