The Federal Communications Commission banned the use of AI-generated voices in robocalls in February, weeks after voters in New Hampshire received robocalls featuring an artificially generated voice resembling President Joe Biden’s.
It was a flashpoint. The 2024 US election was the first to unfold amid widespread access to AI generators that let people create images, audio, and video, some of which can be used for nefarious purposes.
Government agencies rushed to limit AI-driven deception.
Sixteen states enacted legislation governing the use of AI in elections and campaigning; many of those laws require disclaimers on synthetic media published close to an election. The Election Assistance Commission, the federal agency that supports election officials, released an “AI toolkit” with tips officials can use to communicate about elections in an age of fabricated information. Some states also published pages to help voters identify AI-generated content.
Experts warned that AI could be used to create deepfakes that make candidates appear to say or do things they never did. AI’s influence, they said, could be damaging at home, by misleading voters, influencing decision-making, or discouraging people from voting, and abroad, by benefiting foreign adversaries.
But the predicted avalanche of AI-driven misinformation never materialized. As Election Day approached, viral misinformation took center stage, sowing confusion about vote counting, mail-in voting, and voting machines. This deception, however, relied mostly on old, well-known techniques: text-based social media claims and out-of-context videos and images.
“It turns out that using generative AI is not always necessary to mislead voters,” said Paul Barrett, deputy director of New York University’s Stern Center for Business and Human Rights. “This was not an ‘AI election’.”
Daniel Schiff, an assistant professor of technology policy at Purdue University, said there was no “massive 11th-hour campaign” that misled voters about polling places and suppressed turnout. The misinformation that did circulate, he said, was narrower in scope and unlikely to have been a determining factor in at least the presidential election.
Experts said the AI-generated claims that attracted the most attention reinforced existing narratives rather than fabricating new ones to deceive people. For example, after former President Donald Trump and his running mate, J.D. Vance, falsely claimed that Haitians in Springfield, Ohio, were eating pets, AI-generated images and memes depicting animal abuse flooded the internet.
Meanwhile, technology and public policy experts said safeguards and legislation minimized AI’s potential to generate harmful political speech.
Schiff said the potential for AI to harm elections created an “urgent energy” focused on finding solutions.
“I believe the tremendous attention given by public advocates, government officials, researchers, and the general public was significant,” Schiff said.
Meta, which owns Facebook, Instagram, and Threads, required advertisers to disclose the use of AI in ads about politics and social issues. TikTok applied a mechanism to automatically label some AI-generated content. OpenAI, the company behind ChatGPT and DALL-E, barred its services from being used for political campaigning and prohibited users from generating images of real people.
Purveyors of false information relied on traditional techniques
Siwei Lyu, a professor of computer science and engineering at the University at Buffalo and an expert in digital media forensics, said AI’s power to influence the election was blunted because there were other, easier ways to gain that influence.
AI’s impact may have appeared muted in this election because traditional formats remained effective, and on follower-based platforms such as Instagram and X, accounts with large followings relied less on AI, said Herbert Chang, an assistant professor of quantitative social science at Dartmouth College. Chang co-authored a study that found AI-generated images are “less viral than traditional memes,” although AI-created memes can still achieve virality.
Celebrities with large followings could easily spread their messages without AI-generated media. Trump, for example, repeatedly claimed in speeches, media interviews, and social media posts that undocumented immigrants were being brought into the United States to vote, even though noncitizen voting is extremely rare and citizenship is required to vote in federal elections. Polls suggested the repetition paid off: more than half of Americans said in October that they were concerned about noncitizens voting in the 2024 election.
While PolitiFact’s fact-checking and reporting on election-related misinformation identified some AI-generated images and videos, much of the viral media consisted of what experts call “cheapfakes”: authentic content deceptively edited without the use of AI.
In other cases, politicians flipped the script, blaming or belittling AI rather than using it. Trump, for example, falsely claimed that a montage of his gaffes published by the Lincoln Project was AI-generated, and that a photo of a crowd of Harris supporters was made with AI. After CNN reported that North Carolina Lt. Gov. Mark Robinson had made offensive comments on a porn forum, Robinson claimed the comments were AI-generated. Experts told WFMY-TV in Greensboro, North Carolina, that Robinson’s claim was “nearly impossible.”
When AI was used, it fueled ‘partisan hostility’
Authorities found that a New Orleans street magician created the fake Biden robocall sent in January, in which a voice resembling the president’s discouraged people from voting in the New Hampshire primary. The magician said it took him just 20 minutes and $1 to create the fake audio.
The political consultant who hired the magician to make the calls could face a $6 million fine and 13 felony charges.
It was a standout moment, partly because it wasn’t repeated.
Bruce Schneier, an adjunct lecturer in public policy at the Harvard Kennedy School, noted that the two biggest misinformation stories in the weeks leading up to Election Day, the fabricated claims about pet eating and falsehoods about the Federal Emergency Management Agency’s relief efforts after Hurricanes Milton and Helene, were not driven by AI.
“We have seen deepfakes being used to very effectively incite partisan hostility and establish or entrench certain misleading or false views about candidates,” Daniel Schiff said.
He collaborated with Kaylyn Schiff, an assistant professor of political science at Purdue University, and Christina Walker, a doctoral candidate at Purdue, to create a database of political deepfakes.
According to their data, most deepfake incidents were created as satire. The second most common were deepfakes intended to damage someone’s reputation, followed by those created for entertainment.
Daniel Schiff said deepfakes that criticized or misled people about candidates, such as those depicting Harris as a communist or a clown, or Trump as a fascist or a criminal, were extensions of traditional American political narratives. Chang agreed, saying such generative AI content was not necessarily intended to mislead but instead exacerbated existing political divisions through exaggeration.
Major foreign influence operations relied on actors, not AI
In 2023, researchers warned that AI could help foreign adversaries conduct influence operations faster and more cheaply. But the Foreign Malign Influence Center, which assesses foreign influence activities targeting the United States, said in late September that AI had not “revolutionized” those efforts.
To threaten U.S. elections, the center said, foreign actors would have to overcome the limitations of AI tools, evade detection, and “strategically target and spread such content.”
Intelligence agencies, including the Office of the Director of National Intelligence, the FBI, and the Cybersecurity and Infrastructure Security Agency, warned about foreign influence operations that often relied on hired actors in staged videos. One such video showed a woman claiming Harris had struck and injured her in a hit-and-run crash. The video’s narrative was “completely fabricated,” but it was not created with AI. Analysts linked the video to a Russian network they dubbed Storm-1516, which used similar tactics in videos seeking to undermine confidence in elections in Pennsylvania and Georgia.
Platform safeguards and state laws likely helped curb the ‘worst acts’
Social media and AI platforms tried to make it harder to use their tools to spread harmful political content, adding watermarks, labels, and fact-checks.
Both Meta and OpenAI said their AI tools rejected hundreds of thousands of requests to generate images of Trump, Biden, Harris, Vance, and the Democratic vice presidential candidate, Minnesota Gov. Tim Walz. In a Dec. 3 report on elections held around the world in 2024, Nick Clegg, Meta’s president of global affairs, said ratings on AI content related to elections, politics, and social topics represented less than 1 percent of all fact-checked misinformation.
Still, the safeguards had gaps.
The Washington Post found that, when prompted, ChatGPT would still compose campaign messages targeting specific voters. PolitiFact also found that Meta AI readily generated images that could support the narrative that Haitians were eating pets.
Daniel Schiff said the platforms have a long way to go as AI technology improves. But in 2024, at least, the platforms’ precautions and states’ legislative efforts appeared to pay off.
“I think strategies like deepfake detection, public awareness efforts, and direct bans have all been important,” Schiff said.