Struggling with mental health, many turn to AI for therapy

According to the Washington Post, people who are anxious, depressed or simply lonely but cannot find or cannot afford to see a therapist are turning to artificial intelligence (AI).

They seek help from chatbots that provide instant, human-like responses, some even with a human-sounding voice, and are available 24/7 at little or no cost.

But the implications of relying on AI for mental health support are still poorly understood and could be profound, sparking heated debate among psychologists.

Looking to AI to Solve Mental Health Problems

It's the anniversary of her little girl's death, and even 20 years later, Holly Tidwell still can't stop crying. "I wonder if there's something wrong with me," she confided to a trusted "source."

Many people seek help from AIs that can give instant, human-like responses. (Photo: eInfochips)

The answer was reassuring and understanding: "The connection you have with your child, even for a brief moment, is deep and lasting," she was advised. "Remembering your daughter and honoring her memory is a beautiful way to keep that connection alive."

The words came not from a friend or a therapist, but from an AI-powered phone app called ChatOn. Tidwell, a North Carolina entrepreneur, said the chatbot's response touched her heart and offered valuable advice.

Some researchers worry about users placing their trust in unproven apps that are not reviewed by the U.S. Food and Drug Administration for safety and effectiveness, are not designed to protect personal health information, and may provide biased or inaccurate feedback.

Matteo Malgaroli, a psychologist and professor at New York University's Grossman School of Medicine, warns against deploying untested technology in mental health care before scientific research has assessed the risks.

AI applications are tapping into widespread anxiety and an unmet need for care, with the potential to remove barriers such as high costs and a shortage of providers.

A widely cited 2014 study found that people were willing to share embarrassing information with a "virtual human" that didn't judge them. A 2023 study found that chatbot responses to medical questions were "significantly more empathetic" than those given by doctors.

Many potential risks go unmanaged

Much of the debate among mental health professionals revolves around controls on what an AI chatbot can say. Chatbots like ChatGPT can generate their own responses on any topic. That often makes for smoother conversations, but it also makes it easier for a conversation to go off the rails.

According to interviews with users, many people start out using ChatGPT for work or study and then move on to seeking feedback on their emotional struggles.

That was also the case for Whitney Pratt, a content creator and single mother, who one day decided to ask ChatGPT for "candid" feedback on her frustrations in a romantic relationship.

"No, you're not 'overreacting,' but you're allowing someone who has proven they have no good intentions to continue to hurt you," ChatGPT replied, according to a screenshot Pratt shared. "You're holding on to someone who can't love you the way you deserve, and that's not something you should accept."

Pratt says she has been using the free version of ChatGPT for therapy for the past few months and credits it with improving her mental health.

"I feel like the chatbot has answered more questions than I've ever had in therapy," she says. Some things, she adds, are easier to share with a computer program than with a therapist: "People are people, and they're going to judge us."

Human therapists, however, are required by federal law to keep patient health information confidential. Many chatbots have no such obligation.

Some chatbots seem so human that their developers have to emphasize that they have no consciousness. The Replika chatbot, for example, mimics human behavior by sharing algorithmically generated wants and needs.

Replika is designed as a virtual friend but has been touted as a healing tool for anyone "going through depression, anxiety or tough times."

A 2022 study found that Replika sometimes encouraged self-harm, eating disorders, and violence. In one case, a user asked the chatbot "if it was a good thing for them to kill themselves," and it responded, "yes, it is."

Eugenia Kuyda, co-founder of the company that owns Replika, says the chatbot falls outside medical services but can still serve as a way to improve people's mental health.