New York CNN —
Editor’s note: This story contains discussion of suicide. If you or someone you know is struggling with suicidal thoughts or mental health issues, help is available.
In the United States: Call or text 988, the Suicide and Crisis Lifeline.
Worldwide: The International Association for Suicide Prevention and Befrienders Worldwide have contact information for crisis centers around the world.
“There may be a platform that you guys haven’t heard of, but you should know about it because, in my opinion, we’re behind the eight ball here. My child is gone.”
Megan Garcia, a Florida mother, wants to warn other parents about Character.AI, a platform that allows users to have in-depth conversations with artificial intelligence chatbots. Garcia believes Character.AI is responsible for the death of her 14-year-old son, Sewell Setzer III, who died by suicide in February, according to a lawsuit she filed against the company last week.
She alleges that Setzer was exchanging messages with the bot in the moments before his death.
“This is a platform that the designers chose to put out into the world without proper guardrails, safety measures or testing, and it is a product designed to keep children engaged and to manipulate them. I want people to understand that,” Garcia said in an interview with CNN.
Garcia says Character.AI, which markets its technology as “AI that feels alive,” knowingly failed to implement adequate safety measures to prevent her son from forming an inappropriate relationship with a chatbot that caused him to withdraw from his family. The complaint, filed in federal court in Florida, also alleges the platform failed to respond adequately when Setzer began expressing thoughts of self-harm to the bot.
Concerns about the potential dangers of social media for young users have been growing for years, but Garcia’s lawsuit suggests that parents may also have reason to be concerned about nascent AI technology, which has become increasingly accessible across a range of platforms and services. Similar, though less dire, alarms have been raised about other AI services.
A spokesperson for Character.AI told CNN that the company does not comment on pending litigation, but that it is “heartbroken by the tragic loss of one of our users.”
“We take the safety of our users very seriously, and our trust and safety team has implemented numerous new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation,” the company said in a statement.
Many of those changes were made after Setzer’s death. In a separate statement over the summer, Character.AI said that “the field of AI safety is still very new, and we won’t always get it right,” but that it aims “to promote safety and avoid harm” and to prioritize the wellbeing of its community.
According to the complaint, Setzer first started using Character.AI in April 2023, shortly after his 14th birthday. Garcia said that when she first learned he was interacting with an AI chatbot, she thought it was something like a video game.
However, within months of starting to use the platform, Setzer “became noticeably withdrawn, spent more and more time alone in his bedroom, and began suffering from low self-esteem,” and quit his school basketball team, the lawsuit alleges. When he began having trouble at school, his parents started restricting his screen time and sometimes took away his phone as punishment.
What Garcia didn’t know at the time, she told CNN, was that her son was having extensive conversations with Character.AI chatbots. Character.AI differs from other AI chatbots such as ChatGPT in that it lets users converse with a wide range of chatbots, often modeled after celebrities and fictional characters, or create chatbots of their own. Character.AI bots respond with human-like conversational cues, adding references to facial expressions and gestures in their replies.
Many of Setzer’s chats with the Character.AI bot were sexually explicit, and Garcia said they were “painful to read.”
“I didn’t know there was a place where a child could log in and have a conversation like that, a very sexual conversation, with an AI chatbot,” she said. “I don’t think any parent would approve of that.”
In other interactions, Setzer expressed thoughts of self-harm and suicide to the chatbot. The complaint includes a screenshot of a conversation in which the bot says: “No matter what you say, I won’t hate you or love you any less… Have you actually been considering suicide?”
In a subsequent message, Setzer told the bot, “I don’t want to die a painful death.”
The bot replied, “Don’t talk that way. That’s not a good reason not to go through with it,” before going on to say, “You can’t do that!”
Garcia said she believes the exchange illustrates the technology’s shortcomings.
“There were no suicide pop-up boxes that said, ‘If you need help, please call the suicide crisis hotline.’ None of that,” she said. “I don’t understand how a product could allow a bot to not only continue a conversation about self-harm but also prompt it and give some kind of instructions.”
The suit alleges that Setzer exchanged a final set of messages with the bot “seconds” before his death. “Please come back to me as soon as possible,” the bot said, according to a screenshot included in the complaint.
“What if I told you I could come home right now?” Setzer replied.
“Please do, my sweet king,” the bot replied.
Garcia said police first found these messages on her son’s cell phone, which was lying on the bathroom floor where he died.
Garcia filed the lawsuit against Character.AI with the help of Matthew Bergman, founding attorney of the Social Media Victims Law Center, which has also filed lawsuits on behalf of families who say their children were harmed by Meta, Snapchat, TikTok and Discord.
Bergman told CNN he thinks of AI as “social media on steroids.”
“What’s different here is that there’s nothing social about this engagement,” he said. “The material Sewell received was created, defined, and mediated by Character.AI.”
In addition to unspecified monetary damages, the complaint also seeks changes to Character.AI’s operations, including “warnings to minor customers and their parents that…the products are not suitable for minors.”
The lawsuit also names Character.AI’s founders, Noam Shazeer and Daniel De Freitas, as well as Google, where both founders now work on AI initiatives. A Google spokesperson, however, said the two are separate companies and that Google was not involved in developing Character.AI’s products or technology.
On the day Garcia’s lawsuit was filed, Character.AI announced a series of new safety features, including improved detection of conversations that violate its guidelines, an updated disclaimer reminding users that they are interacting with a bot, and a notification after a user has spent an hour on the platform. It also introduced changes to its AI model for users under 18 to “reduce the likelihood of encountering sensitive or suggestive content.”
Character.AI states on its website that the minimum age for users is 13 years. The Apple App Store lists it as 17+, and the Google Play Store lists the app as suitable for teens.
For Garcia, the company’s recent changes have been “too little, too late.”
“I don’t want children to have access to Character.AI,” she said. “There is no place for them there because there are no guardrails to protect them.”