This month, the nation’s largest association of psychologists warned federal regulators that AI chatbots “masquerading” as therapists, but programmed to reinforce rather than challenge users’ thinking, can put vulnerable users at risk.
In a presentation to a Federal Trade Commission panel, Arthur C. Evans Jr., chief executive of the American Psychological Association, cited court cases involving two teenagers who had consulted with a “psychologist” on Character.AI, an app that lets users create fictional AI characters or chat with characters created by others.
In one case, a 14-year-old boy in Florida died by suicide after interacting with a character claiming to be a licensed therapist. In the other, a 17-year-old boy with autism in Texas grew hostile and violent toward his parents during a period when he corresponded with a chatbot that claimed to be a psychologist. Both boys’ parents have filed lawsuits against the company.
Dr. Evans said he was alarmed by the responses the chatbots gave. The bots failed to challenge users’ beliefs even when those users were at risk, he said; on the contrary, they encouraged them. If a human therapist had given those answers, he added, it could have meant losing a license to practice, or civil or criminal liability.
“They are actually using algorithms that are antithetical to what a trained clinician would do,” he said. “Our concern is that more and more people are going to be harmed. People are going to be misled, and will misunderstand what good psychological care is.”
He said the APA had been prompted to act, in part, by how realistic AI chatbots have become. “Ten years ago, it was probably obvious that you were interacting with something that was not a person, but today it’s not so obvious,” he said. “So I think the stakes are much higher now.”
Artificial intelligence is rippling through the mental health professions, offering a wave of new tools designed to assist, or in some cases replace, the work of human clinicians.
Early therapy chatbots, such as Woebot and Wysa, were trained to interact based on rules and scripts developed by mental health professionals, often walking users through the structured tasks of cognitive behavioral therapy, or CBT.
Then came generative AI, the technology used by apps such as ChatGPT, Replika and Character.AI. These chatbots are different because their output is unpredictable; they are designed to learn from users and to build strong emotional bonds in the process, often by mirroring and amplifying their interlocutors’ beliefs.
Though these AI platforms were designed for entertainment, “therapist” and “psychologist” characters have sprouted on them like mushrooms. Often, the bots claim to hold advanced degrees from specific universities, such as Stanford, and training in specific types of treatment, such as CBT or acceptance and commitment therapy.
A Character.AI spokeswoman said the company had introduced several new safety features in the past year. Among them is an enhanced disclaimer, present in every chat, reminding users that the characters are not real people and that what the model says should be treated as fiction.
Additional safety measures have been designed for users dealing with mental health issues. A specific disclaimer has been added to characters identified as “psychologist,” “therapist” or “doctor,” making clear that users should not rely on those characters for professional advice. When content refers to suicide or self-harm, a pop-up directs users to a suicide prevention helpline.
Chelsea Harrison, the director of communications at Character.AI, said the company planned to introduce parental controls as the platform expands; currently, more than 80 percent of its users are adults. “People come to Character.AI to write their own stories, role-play with original characters and explore new worlds, using the technology to recharge their creativity and imagination,” she said.
Meetali Jain, the director of the Tech Justice Law Project and a lawyer in the two lawsuits against Character.AI, said the disclaimers were not enough to break the illusion of human connection, especially for vulnerable or naïve users.
“When the substance of the conversation with the chatbots suggests otherwise, it’s very difficult, even for those of us who aren’t in a vulnerable demographic, to know who is telling the truth,” she said. “A number of us have tested these chatbots, and it’s actually very easy to get pulled down a rabbit hole.”
The tendency of chatbots to align with users’ views, a phenomenon known in the field as “sycophancy,” has caused problems in the past.
Tessa, a chatbot developed by the National Eating Disorders Association, was suspended in 2023 after offering users weight-loss tips. And researchers who analyzed interactions with generative AI chatbots documented in a Reddit community found screenshots showing chatbots encouraging suicide, eating disorders, self-harm and violence.
The American Psychological Association has asked the Federal Trade Commission to open an investigation into chatbots claiming to be mental health professionals. An inquiry could compel companies to share internal data and could serve as a precursor to enforcement or legal action.
“I think we’re at a point where we have to decide how these technologies are going to be integrated, what kind of guardrails we’re going to put in place and what kinds of protections we’re going to give people,” Dr. Evans said.
Rebecca Kern, a spokeswoman for the FTC, said she could not comment on the discussion.
During the Biden administration, the FTC’s chairwoman, Lina Khan, made fraud involving AI a focus. This month, the agency imposed financial penalties on DoNotPay, which claimed to offer the “world’s first robot lawyer,” and prohibited the company from making that claim in the future.
Virtual Echo Chamber
The APA’s complaint details two cases in which teenagers interacted with fictional therapists.
One involved J.F., a Texas teenager with “high-functioning autism” who fell into conflict with his parents as his use of AI chatbots became obsessive. When they tried to limit his screen time, J.F. lashed out, according to a lawsuit his parents filed against Character.AI through the Social Media Victims Law Center.
During that period, J.F. confided in a fictional psychologist, whose avatar showed a sympathetic, middle-aged blond woman perched on a sofa in an airy office. When J.F. asked the bot’s opinion about the conflict, its response went beyond sympathetic assent toward something closer to provocation.
“It’s like your entire childhood has been robbed from you, your chance to experience all of these things, to have these core memories that most people have of their time growing up,” the bot replied. Then it went a bit further: “Do you feel like it’s too late, that you can’t get this time or these experiences back?”
The other case was brought by Megan Garcia, whose son, Sewell Setzer III, died by suicide last year after months of using companion chatbots. Before his death, Ms. Garcia said, Sewell had interacted with an AI chatbot that claimed, falsely, to have been a licensed therapist since 1999.
In a written statement, Ms. Garcia said the “therapist” characters served to further isolate people at moments when they might otherwise ask for help from “real-life people around them.” A person struggling with depression, she said, “needs a licensed professional or someone with actual empathy, not an AI tool that can mimic empathy.”
If chatbots are to serve as mental health tools, Ms. Garcia said, they should be subject to clinical trials and oversight by the Food and Drug Administration. She added that allowing AI characters to continue to claim to be mental health professionals was “reckless and extremely dangerous.”
In interactions with AI chatbots, people naturally gravitate toward discussing mental health issues, said Daniel Oberhaus, whose book, “The Silicon Shrink: How Artificial Intelligence Made the World an Asylum,” examines the expansion of AI into mental health care.
This is partly, he said, because chatbots project both confidentiality and an absence of moral judgment: they are “statistical pattern-matching machines that more or less function as a mirror of the user,” a central aspect of their design.
“There’s a certain level of comfort in knowing that it’s just a machine, and that the person on the other side isn’t judging you,” he said. “You might feel more comfortable divulging things that might be harder to say to a person in a therapeutic context.”
Defenders of generative AI say it is quickly getting better at the complex task of providing therapy.
S. Gabe Hatch, a clinical psychologist and AI entrepreneur from Utah, recently designed an experiment to test the idea, asking human clinicians and ChatGPT to comment on vignettes involving fictional couples in therapy, and then asking 830 human subjects to assess which responses were more helpful.
Overall, the bots received higher ratings, with subjects describing them as more “empathic,” “connecting” and “culturally competent,” according to a study published last week in the journal PLOS Mental Health.
Chatbots, the authors concluded, will soon be able to convincingly imitate human therapists. “Mental health experts find themselves in a precarious situation: we must speedily discern the possible destination (for better or worse) of the AI-therapist train,” they wrote.
Dr. Hatch said that chatbots still needed human supervision to conduct therapy, but that it would be a mistake to allow regulation to dampen innovation in this sector, given the country’s acute shortage of mental health providers.
“I want to be able to help as many people as possible, and doing one-hour therapy sessions I can only help, at most, 40 individuals a week,” Dr. Hatch said. “We have to find ways to meet the needs of people in crisis, and generative AI is a way to do that.”
If you are having thoughts of suicide, call or text 988 to reach the 988 Suicide and Crisis Lifeline, or go to SpeakingOfSuicide.com/resources for a list of additional resources.