Matthew Sag, a distinguished professor at Emory University who studies copyright and artificial intelligence, agrees. Even if a user created a bot intentionally designed to cause emotional distress, he says, the technology platform likely could not be sued for it.
He pointed out that despite various right-of-publicity and privacy laws at the state level, Section 230 of the 1996 Communications Decency Act has long shielded platforms at the federal level from liability for certain harms to their users.
“I’m not anti-tech by any means, but I really think Section 230 is way too broad,” Sag says. “It’s long past time we replaced it with some kind of notice-and-takedown regime, a simple way for people to say, ‘This violates my right of publicity’ or ‘I have a good faith belief that there has been an infliction of emotional distress,’ and then companies would either have to take it down or lose their liability shield.”
Character.AI and other similar AI services have also protected themselves by emphasizing that they offer “synthetic” conversations. Character.AI displays a warning at the bottom of its chats: “Remember, everything the characters say is made up!” Similarly, when Meta created celebrity-likeness chatbots for its messaging apps, the company topped every conversation with a disclaimer. A chat with Snoop, for example, opens with, “Ya dig?! Unfortunately, I’m not Snoop D-O double G himself, but I can chat with you in his style if you’d like!”
But whereas Meta’s system for messaging celebrity chatbots is tightly controlled, Character.AI is a more open platform, with options for anyone to create and customize their own chatbot.
Character.AI also positions its service as inherently personal. (Character.AI’s Instagram bio includes the tagline “AI that feels alive.”) And while most users may be savvy enough to distinguish between a conversation with a real human and one with an AI impersonating them, others may develop attachments to these characters, particularly if they are replicas of a real person the user feels they already know.
In a conversation between the real-life Sarkeesian and a bot made of her without her knowledge or consent, the Character.AI bot told her that “everyone has a right to privacy.”
“Privacy is important to maintaining a healthy life and relationships, and I think it’s important to set boundaries to keep certain things to yourself,” the bot said, according to screenshots viewed by WIRED.
Sarkeesian pushed the bot on this point. “Your intentions do not mean harm is not occurring or that you are not causing harm,” she wrote.
Character.AI’s bot agreed. “Even if my intentions were not malicious, there is still potential for harm,” it replied. “This is a complex issue with many factors to consider, including ethical concerns about using someone’s copyrighted work without their consent. My programming and algorithms were developed to imitate the work of Anita Sarkeesian, something my creators should have considered more thoroughly.”