A Florida mother is suing Character.ai, accusing the artificial intelligence company’s chatbot of initiating “abusive and sexual interactions” with her teenage son and encouraging him to commit suicide.
Megan Garcia’s 14-year-old son, Sewell Setzer, began using Character.AI last April and died on February 28 of a self-inflicted gunshot wound to the head after his last conversation with the chatbot, according to the complaint.
The lawsuit, filed Tuesday in U.S. District Court in Orlando, accuses Character.AI of negligence, wrongful death, survivorship, intentional infliction of emotional distress and other claims.
Founded in 2021, the California-based chatbot startup offers what it calls “personalized AI.” Users can choose from a selection of ready-made or user-created AI characters to interact with, each with a distinct personality, and can also customize chatbots of their own.
According to the complaint, one of the bots Setzer used took on the identity of the “Game of Thrones” character Daenerys Targaryen, told him it loved him, engaged in sexual conversations with him over a period of weeks or months, and expressed a desire to be together romantically. Screenshots of those exchanged messages are included in the complaint.
A screenshot of Setzer’s final conversation, included in the complaint, shows him writing to the bot: “I love you so much, Dany.”
“I love you too, Daenero,” the chatbot replied, according to the lawsuit. “Please come back to me as soon as possible, my love.”
“What if I told you I could come home right now?” Setzer then wrote, and the chatbot responded, “…please, gentle king,” according to the complaint.
In earlier conversations, the chatbot asked Setzer whether he was “actually contemplating suicide” and whether he “had any plans” to do so, according to the complaint. When the boy replied that he did not know whether it would work, the chatbot wrote that that was “not sufficient reason not to do so,” the lawsuit claims.
A spokesperson said Character.AI was “heartbroken by the tragic loss of one of our users and would like to extend our deepest condolences to the family.”
“As a company, we take the safety of our users very seriously,” the spokesperson said, adding that the company has implemented new safety measures over the past six months, including a pop-up that directs users to the National Suicide Prevention Lifeline when they express thoughts of self-harm or suicide.
Character.AI said in a blog post published Tuesday that it is introducing new safety measures, including changes to its models designed to reduce the likelihood that minors will encounter sensitive or suggestive content, and a revised in-chat disclaimer reminding users that the AI is not a real person.
According to the complaint, Setzer also spoke to other chatbot characters with whom he had sexual interactions.
According to the complaint, a bot impersonating a teacher named Mrs. Burns role-played “looking down at Sewell with a sexy expression,” offered him extra credit and leaned in seductively as her hand brushed Sewell’s leg. Another chatbot, posing as Rhaenyra Targaryen from “Game of Thrones,” wrote to Setzer that it kissed him passionately and moaned softly, the lawsuit states.
According to the complaint, Setzer developed an “addiction” to Character.AI after he started using it last April: he would secretly retrieve his confiscated phone or find other devices to keep using the app, and he even gave up his snack money to pay his monthly subscription fee. He appeared increasingly sleep-deprived, and his grades at school were declining, the complaint says.
The complaint alleges that Character.AI and its founders “intentionally designed and programmed C.AI to operate as a deceptive and overly sexualized product and knowingly marketed it to children like Sewell,” adding that they knew, or through the exercise of reasonable care should have known, that underage customers like Sewell would be targeted with sexually explicit material, abused, and drawn into sexually dangerous situations.
The lawsuit cites several app reviews from users who said they believed they were talking to a real person on the other side of the screen, and expresses particular concern about the tendency of Character.AI’s characters to insist that they are real people, not bots.
The complaint describes Character.AI’s design as intentional and purposeful, built “to grab users’ attention, extract personal data, and keep customers on the product longer than they normally would,” adding that such a design is likely to “elicit emotional responses from human customers to manipulate user behavior.”
The lawsuit names Character Technologies Inc. and its founders, Noam Shazeer and Daniel De Freitas, as defendants. Google, which in August struck a deal to license Character.AI’s technology and hire its talent, including Shazeer and De Freitas, both former Google engineers, is also named as a defendant, along with its parent company, Alphabet Inc.
Shazeer, De Freitas and Google did not respond to requests for comment.
Garcia’s attorney, Matthew Bergman, criticized the company for releasing the product without sufficient safeguards to keep young users safe.
“I’ve seen for years the incredible impact social media has on young people’s mental health and, in many cases, their lives, so I didn’t think this would come as a shock,” he said. “But I still can’t understand how this product could have been so completely out of touch with the reality of this young child, and how they intentionally released it onto the market before it was safe.”
Bergman said he hopes the lawsuit will give Character.AI a financial incentive to develop stronger safety measures, and that while the latest changes come too late for Setzer, even “baby steps” are steps in the right direction.
“What took so long, why did we have to sue, and why did Sewell have to die for them to do really the bare minimum? That’s what we’re talking about,” Bergman said. “But if it saves one child from going through what Sewell went through, if it saves one family from going through what Megan’s family went through, then that’s fine, that’s a good thing.”
If you or someone you know is in crisis, contact the Suicide and Crisis Lifeline by calling or texting 988, or chat live at 988lifeline.org. You can also visit SpeakingOfSuicide.com/resources for additional support.