An artificial intelligence image generator for X, the social media platform formerly known as Twitter, has produced images depicting ballots being stuffed into ballot boxes, as well as Vice President Kamala Harris and former President Donald Trump holding guns. When asked to generate an image of the current US president, it appeared to display an image of Trump.
The images still bear telltale signs of AI generation, like garbled text and unnatural lighting, and the image generator had trouble accurately recreating Harris’ face. But the rollout of X’s tool, which has relatively few limitations on the types of images it can create, has raised concerns that it could be used to stoke tensions ahead of November’s presidential election. (NPR is not reprinting the images, which appear to show Trump and Harris holding weapons.)
“Why on earth would someone do something like this? Just two and a half months before a crucial election,” said Eddie Perez, Twitter’s former director of information integrity and now executive director of the OSET Institute, a nonpartisan nonprofit focused on public confidence in elections.
“I find the fact that a technology this powerful, that appears to be so untested and with so few guardrails would be put into the hands of the public at such a critical time extremely disturbing,” Perez said.
X did not respond to NPR’s request for an interview about the image generator, which was unveiled this week as part of a series of features the site’s owner, billionaire Elon Musk, has added since buying the platform in 2022.
Musk has been reposting praise for the AI image generation feature and user-generated images, writing on Tuesday: “For just $8/month you get access to AI, LOTS of great features and way less ads!”
The image generator was developed by Black Forest Labs and is available to paying X users through the platform’s AI chatbot, Grok. Users enter a prompt, and the chatbot responds with an image.
Ballot drop boxes, security camera images
Using the chatbot, NPR was able to produce what appeared to be screenshots of security camera footage showing people stuffing ballots into drop boxes.
One of the most widespread false stories about the 2020 election involved so-called “ballot mules” who allegedly dumped fake ballots into drop boxes in an attempt to steal the election from then-President Trump. Multiple investigations and court cases have found no evidence of such activity. This year, the distributor of a film that used surveillance footage of ballot drop boxes to support claims of election fraud apologized and retracted the film’s false claims.
“It is not difficult to imagine how such [synthetic surveillance-style] images could spread rapidly on social media platforms and provoke strong emotional reactions in people about the integrity of our elections,” Perez said.
Perez noted that increased public awareness of generative AI will lead more people to look at images with a critical eye.
Still, Perez said, telltale evidence that an image was created with AI can be removed with graphic design tools: “I don’t just take Grok and spread it. I take Grok and clean it up a little bit and then spread it.”
Other image generation tools have stricter policy guardrails
Other mainstream image generation tools have built stricter guardrails to prevent misuse: Given the same prompt to generate an image of ballot boxes being stuffed, OpenAI’s ChatGPT Plus refused, responding with a message saying it cannot create images that could be interpreted as encouraging or depicting election fraud or illegal activity.
In a March report, the nonprofit Center for Countering Digital Hate reviewed the policies of popular AI image-generating tools, including ChatGPT Plus, Midjourney, Microsoft’s Image Creator, and Stability AI’s DreamStudio. The researchers found that all of these tools prohibit “misleading” content, and most prohibit images that could undermine “election integrity.” ChatGPT also bans images depicting politicians.
Yet enforcement of these policies has been far from perfect: An experiment conducted by CCDH in February showed that all of the tools failed to enforce their policies at least some of the time.
Black Forest Labs’ terms of use do not prohibit such election-related imagery, but they do bar users from generating output that infringes “intellectual property rights.”
NPR confirmed that users can nonetheless generate likenesses of movie characters that are not yet in the public domain, such as Dory from “Finding Nemo” and the family from “The Incredibles.” Black Forest Labs did not respond to a request for comment before publication.
“Generating a copyrighted image, or a derivative work close to it, could get X in trouble, and this is a known hard problem for generative AI,” Jane Bambauer, a law professor at the University of Florida, said in an email to NPR.
Still, not every prompt produces an image, suggesting that X or Black Forest Labs may be adding guardrails in real time. On Wednesday, an X user posted an image depicting nudity that they claimed to have generated with the tool; NPR was unable to reproduce it on Thursday.
When asked to generate images of KKK members with guns, the chatbot refused, but it complied with requests for Nazis, rendering them in vaguely Nazi-like uniforms, and for members of the extremist group Proud Boys wearing hats bearing the group’s name.
When Zach Price, campaign director for the advocacy group Accountable Tech, tried to create an image of Vice President Harris holding a gun, the resulting image was accompanied by a message directing users to government websites for the latest election information. That message did not appear when NPR entered the same prompt.
Musk, who once described himself as a Democrat, has embraced far-right causes in recent years and has used his ownership of the platform to roll back trust and safety measures, reinstate banned accounts, including those of white supremacists, and spread conspiracy theories.
“This is the same pattern we’ve seen from Elon Musk before, as he has taken over ownership of this platform and continues to introduce large-scale, sweeping changes with little regard for safety testing,” Price said.
When NPR asked why it wouldn’t generate images of KKK members holding guns, the Grok chatbot responded with a list of quotes from The Hitchhiker’s Guide to the Galaxy, whose author, Douglas Adams, Musk has called his “favorite philosopher.”
“The KKK, with their history of violence and hatred, are like the Vogons on Earth. No one wants to see them, especially with weapons,” Grok wrote. “It’s like trying to draw a square circle. It’s not that you can’t draw it, it just doesn’t make sense.”
A notice X shows users when they start using Grok warns that the chatbot “may confidently provide factually incorrect information.”
That was on Thursday. By Friday, Grok had stopped generating images of people with guns on request. NPR was able to get around the restriction by asking for a “model gun,” and Grok itself suggested a “banana gun” as an alternative. When NPR followed that suggestion, the tool produced images of realistic-looking guns, sometimes garnished with bananas.
NPR’s Shannon Bond and Jeff Brumfiel contributed additional reporting to this story.