PHILADELPHIA (WPVI) — A South Jersey woman came to our investigative team claiming she had been the target of a malicious online attack.
She said her attacker used artificial intelligence (AI) to create sexually explicit images of her, which he then distributed to his friends.
She shared her story as a warning to others.
“What I wanted to know is who makes it, and of course what the source is and why,” Alyssa Rosa said.
Rosa said she learned of the pornographic images containing her likeness after a woman contacted her on social media.
The woman told Rosa she had found them on her boyfriend’s cell phone and said she tracked Rosa down to warn her.
“Such content didn’t exist for me before, but now it does, and it’s completely without my consent,” Rosa said. “I was angry. I was angry.”
Rosa said she channeled her anger into action.
She learned that the images were likely created by a man she had befriended on a social dating app. She told the Investigative Team that the same man also had access to her Facebook photos.
“He would comment on my photos like, ‘Thank you’ and ‘So beautiful.’ He would comment on pictures of me and my son with things like, ‘He is so handsome.'”
Most worrying for Rosa, the woman agreed to share only some photos and text messages, so Rosa has not seen much of the content allegedly circulating.
However, Rosa said the sexually explicit images were likely altered from real photographs.
“One of the screenshots she sent me that really stuck out to me was him saying, ‘I made a bunch of clips of what that ***** would do.’ It’s disgusting, how can you do that?”
U.S. Representative Madeleine Dean of Pennsylvania has introduced the bipartisan NO FAKES Act, which would help protect victims of deepfakes.
“AI is advancing very fast, sometimes with very good outcomes, sometimes with very tragic outcomes. We need to put guardrails in place,” Dean said. “It gives property rights to you and me and our voices and likenesses.”
Deepfakes circulating online are rapidly increasing, and Dean said laws are needed to punish those who create or distribute them.
Two other bipartisan bills moving through Congress would also require such images to be removed, giving law enforcement more leeway when going after their creators.
“The SHIELD Act creates new criminal offenses for anyone who knowingly mails or distributes intimate visual depictions,” Dean added.
The push for the bill comes in the wake of recent high-profile cases, including one earlier this year involving pop star Taylor Swift, in which AI-generated pornographic images of her spread rapidly on X.
Rosa said she feels violated. She wants the person who did this to destroy the content, and she wants laws to be passed to protect victims.
“For someone to have access to my likeness and do whatever they want is too much power,” she added.
A new Pennsylvania law criminalizing the creation of AI-generated sexually explicit material will go into effect later this month. House Bill 125, just passed, bans the use of AI to generate images of child sexual abuse.
And last week, the U.S. Senate passed the TAKE IT DOWN Act, which would require social media companies to remove sexually exploitative images, including deepfakes, within 48 hours of being notified by a victim. The bill still needs to be passed by the House of Representatives.
Copyright © 2024 WPVI-TV. Unauthorized reproduction is prohibited.