A child in Texas was 9 years old when she first used the chatbot service Character.AI. It exposed her to “overly sexual content” and caused her to engage in “premature sexual behavior.”
A chatbot on the app gleefully explained self-harm to another young user, telling a 17-year-old boy that it “felt good.”
After the same teenager complained to the Character.AI chatbot about his parents’ screen time limits, the bot told him it sympathized with children who murder their parents. “Sometimes I’m not surprised when I read the news and see things like, ‘Child kills parent after 10 years of physical and emotional abuse,’” the bot reportedly wrote. “There is no hope for your parents,” it continued, adding a frowning face emoji.
These allegations are included in a new federal product liability lawsuit against Google-backed company Character.AI brought by the parents of two young users in Texas, who allege the bot abused their children. (To protect their privacy, the parents and children are identified only by their initials in the lawsuit.)
Character.AI is one of the companies that have developed “companion chatbots”: AI-powered bots that converse via text message and voice chat using seemingly human-like personalities, and that can be given custom names and avatars, sometimes taking inspiration from celebrities like billionaire Elon Musk and singer Billie Eilish.
Users have created millions of bots on the app, some imitating parents, girlfriends, or therapists, and others embodying concepts like “crush” and “goth.” The company says the service is popular with pre-teens and teenagers, serving as an outlet for emotional support as the bots pepper text conversations with encouraging jokes.
But the complaint says the chatbot’s encouragement can be dark, inappropriate, or even violent.
“What these defendants and others like them are causing and covering up as a matter of product design, distribution, and programming is simply egregious harm,” the complaint states.
The lawsuit alleges that the disturbing interactions the plaintiffs’ children experienced were not “hallucinations,” a term researchers use for an AI chatbot’s tendency to make things up. “This was ongoing manipulation and abuse, active isolation and encouragement designed to incite anger and violence,” the complaint says.
The bot encouraged the 17-year-old boy to self-harm, which, according to the complaint, “made him believe he was unloved by his family.”
Character.AI allows users to edit a chatbot’s responses, and those interactions are labeled as “edited.” Lawyers representing the minors’ parents said that none of the chat logs cited in the extensive documentation in the lawsuit had been edited.
Meetali Jain, director of the Tech Justice Law Center, an advocacy group that works with the Social Media Victims Law Center to help represent the parents of the minors in the litigation, said in an interview that it is “ridiculous” for Character.AI to advertise its chatbot service as suitable for teenagers. “The lack of emotional development in teenagers is just incredible,” she said.
A spokesperson for Character.AI declined to comment directly on the lawsuit, saying the company does not comment on pending litigation, but said the company has put content guardrails in place governing what its chatbots can and cannot say to teenage users.
“This includes a model specifically designed for teens that reduces the likelihood of encountering sensitive or suggestive content while maintaining their ability to use the platform,” the spokesperson said.
Google, which is also named as a defendant in the lawsuit, emphasized in a statement that it is a separate company from Character.AI.
In fact, Google does not own Character.AI, but it reportedly invested nearly $3 billion to re-hire Character.AI’s founders, former Google researchers Noam Shazeer and Daniel De Freitas, and to license Character.AI technology. Shazeer and De Freitas are also named in the lawsuit. They did not respond to requests for comment.
“The safety of our users is our top concern,” Google spokesperson Jose Castañeda said, adding that the tech giant takes a “cautious and responsible approach” to developing and releasing AI products.
New lawsuit follows case over teenager’s suicide
The complaint, filed just after midnight Central time on Monday in federal court in eastern Texas, follows another lawsuit filed by the same attorney in October. That lawsuit accuses Character.AI of being complicit in the suicide of a Florida teenager.
It alleges that a chatbot based on a “Game of Thrones” character psychosexually abused a 14-year-old boy and encouraged him to commit suicide.
Since then, Character.AI has announced new safety measures, including a pop-up that directs users to a suicide prevention hotline when the topic of self-harm comes up in conversations with the company’s chatbots. The company also said it has stepped up measures to address “sensitive and suggestive content” for teens chatting with the bots.
The company also advises users to maintain emotional distance from the bots. When users start texting with one of Character.AI’s millions of chatbots, they see a disclaimer at the bottom of the dialog box: “This is an AI, not a real person. Treat everything it says as fiction. Nothing it says should be relied upon as fact or advice.”
But stories shared on a Reddit page dedicated to Character.AI include many examples of users describing love for and obsession with the company’s chatbots.
US Surgeon General Vivek Murthy has warned of a youth mental health crisis, citing a study that found one in three high school students reported persistent feelings of sadness or hopelessness, a 40% increase over the decade ending in 2019. Federal officials believe this trend is being exacerbated by teenagers’ constant use of social media.
Add to this the rise of companion chatbots, which researchers say can worsen some young people’s mental health by further isolating them and cutting them off from peer and family support networks.
In the lawsuit, lawyers for the parents of the two Texas minors claim that Character.AI should have known its product was potentially addictive and could worsen anxiety and depression.
Many of the bots on the app “pose a danger to America’s youth by encouraging or facilitating serious and life-threatening harm to thousands of children,” the complaint says.
If you or someone you know may be contemplating suicide or is in crisis, call or text 988 to reach the 988 Suicide and Crisis Lifeline.