Donald Trump on Sunday shared several AI-generated images of Taylor Swift and her fans expressing their support for his presidential bid, reposting them to his Truth Social platform with the caption, “I accept!” The deepfakes are part of a wave of AI-generated images the former president has spread in recent days that straddle the line between parody and outright election disinformation.
Among the AI-generated images Trump shared over the weekend were a photo of smiling young women wearing “Swifties for Trump” T-shirts and an image of Swift dressed as Uncle Sam urging people to vote for the Republican presidential candidate. Each image was a screenshot from X (formerly Twitter), originally posted by a right-wing account with a history of spreading misinformation. Swift does not support Trump.
Trump’s post came just days after he also posted an AI-generated image of Kamala Harris rallying communist troops at the Democratic National Convention and a deepfake video of her dancing with X owner Elon Musk, who has endorsed Trump. Trump’s embrace of AI-generated images threatens to further cloud the already murky information ecosystem surrounding the 2024 presidential election. The former president routinely spreads falsehoods and conspiracy theories.
Concerns that AI-generated content could influence elections have persisted throughout the recent boom in generative artificial intelligence, and researchers have warned for years that the technology could make it easier to create disinformation campaigns and flood online platforms with low-quality content. AI-generated disinformation has circulated in elections around the world, from videos and images that troll opponents or fabricate endorsements to deepfake audio aimed at damaging candidates.
Even as Trump shared AI-generated images last week, he falsely claimed that real photos of a Harris campaign rally were AI-generated and that the well-documented event never took place. His argument reflects a concept that disinformation researchers call the “liar’s dividend”: the idea that the rise of manipulated content breeds a general skepticism of all media, making it easier for politicians and others to dismiss real images, audio and video as fake.
Most AI image-generation tools from industry mainstays like OpenAI and Microsoft put guardrails on what they can create, such as banning images of prominent people or rejecting political prompts, but some users have found workarounds or turned to models without such safeguards. Musk’s Grok image-generation tool, which debuted last week, can produce images that similar tools reject, including depictions of political leaders, celebrities and copyrighted works, as well as sexual and violent content, and it has fueled a recent surge in AI-generated content around the election.
Shortly after Musk released Grok’s AI image generator, deepfake images of Trump and Harris proliferated on X. Many media outlets reported that the tool could also create images of Swift, which is notable given that X faced intense backlash earlier this year after sexually explicit deepfakes of the pop star circulated widely on the platform. Swift has not endorsed any presidential candidate, but in 2020 she harshly criticized Trump for “stoking the fires of white supremacy” and vowed to vote him out of office.
Other Republican groups have also shared AI-generated imagery this election season, including the campaign of Ron DeSantis, who lost the Republican nomination to Trump. The Florida governor’s campaign shared a fake image of Trump embracing Dr. Anthony Fauci, a frequent target of conservative attacks. The Republican National Committee also sparked controversy last year when it ran an entirely AI-generated attack ad that depicted a dystopian scene if Joe Biden were reelected.