AI mental health apps produce help, but can also give bad advice

There’s a growing list of mental health support apps that use AI to counsel people, and the field can be hard to navigate. State officials say these apps have a lot of potential to help many Utahns, but officials also have to protect people from getting bad advice from chatbots.

The state has entered into an agreement with a Utah-based company that aims to give support to school kids. What does this company have to do to make sure it doesn’t send kids down the wrong path?

When the state enters into an agreement with any company, officials try to ensure the company is following the industry’s best practices. However, as they’ve moved forward with the mental health support company ElizaChat, state officials admit they’re still trying to figure out what those best practices are.

Woebot. Talkspace. Happify. These are just a few of a growing number of apps designed to help people maintain better mental health.

Utah Department of Commerce Director Margaret Woolley-Busse says, “There are lots and lots of different apps that are emerging, some which actually, explicitly, get into the scope of what normally only a licensed mental health therapist could do.”

How can the state possibly regulate all of them? Woolley-Busse says they’re trying to develop some sort of framework to determine when an AI chatbot can give advice and when a case should be referred to a licensed professional.

She asks, “Could the generative AI go off the rails and start making things up, or actually recommend something harmful to an individual using the app?”

There have been cases where AI has given bad advice. For instance, last year, Psychiatrist.com reported the National Eating Disorders Association suspended its chatbot after one person reported that the AI told her she could recover from her eating disorder while losing weight.

That woman said the kind of advice the chatbot suggested led to her eating disorder, saying, “If I had accessed this chatbot when I was in the throes of my eating disorder, I would NOT have gotten help for my ED.”

Family psychiatrist Douglas Goldsmith says he understands why someone would turn to AI for mental health support.

“We can have brief conversations that can be very warm. Already, research has come out that says people like talking to the chatbot better than their physician,” he says.

However, Goldsmith also says it’s important for teens and parents to know the limitations of a chatbot.

He says, “Saying ‘I’m just going to get therapy from the chatbot’ is going to be problematic.”

Per the agreement between ElizaChat and the state, the company has to develop a plan to tell a trusted adult if a student has a severe case, which would include thoughts of self-harm or abuse. Plus, if there’s a problem, ElizaChat has 30 days to fix it.

We emailed ElizaChat CEO Dave Barney the following questions.

  • How can ElizaChat ensure that it will always forward cases to licensed professionals when needed?
  • How can the company ensure the advice the AI gives won’t be harmful?
  • If the AI gives bad advice, how will the company correct the matter?

We did not get a response from the company.

Woolley-Busse says, “We’ve had some really detailed conversations about that… about what exactly needs to be elevated to a licensed practitioner.”

In the meantime, Woolley-Busse says they’re testing other AI chatbots to see whether they behave the way they’re supposed to.

According to her, “We’ve done our own internal tests with the more generic ChatGPTs to see if we could push it to start to make recommendations that a therapist only should be doing. They usually have pretty good guardrails, but not always.”

Also, as part of the agreement, ElizaChat has to be able to spot problematic chats, and it has to start off with intensive human oversight, which could be reduced over time. State officials say the whole point of an agreement like this is to spot potential problem areas with AI while letting tech companies figure out how to innovate new products.

If someone believes they’ve been harmed by an AI chatbot, they’re encouraged to report it to the Division of Professional Licensing.
