Shortly after Joe Biden announced he would not seek reelection, misinformation began circulating online about whether a new candidate could still be added to state ballots.
Screenshots claiming that nine states could no longer add new candidates to the ballot quickly went viral on Twitter (now X) and were viewed millions of times. The Minnesota Secretary of State’s office began receiving requests to fact-check these posts, which were completely false: the ballot deadlines had not passed, and Kamala Harris had plenty of time to add her name.
The misinformation originated with Grok, the AI chatbot built into X, which gave an incorrect answer when users asked it whether there was still time to add new candidates to the ballot.
Tracing the misinformation to its source and getting it corrected became a test case for how election officials and artificial intelligence companies might interact during the 2024 US presidential election, amid fears that AI could mislead or distract voters. It also demonstrated the role Grok, a chatbot with fewer guardrails against generating inflammatory content, could play in an election.
A group of secretaries of state and the organization that represents them, the National Association of Secretaries of State, contacted X to report the misinformation Grok had generated. But rather than immediately correct it, the company responded with a shrug, Minnesota Secretary of State Steve Simon said. “And I think it’s fair to say that it felt like a really wrong response to all of us,” he said.
Fortunately, this particular wrong answer had a relatively minor impact, since it did not stop anyone from voting. Still, the secretaries of state quickly took a firm stance, anticipating what could happen next.
“In our minds, we were thinking: what if Grok gets it wrong the next time, when there’s a bigger risk?” Simon said. “What if the next time Grok gives a wrong answer, the question is: can I vote? Where can I vote? What are the polling hours? Can I vote absentee? So this was concerning to us.”
What was particularly troubling was that the misinformation came not from users exploiting the platform to spread falsehoods, but from the platform itself.
The secretaries took their effort public: five of the nine signed an open letter to the platform and its owner, Elon Musk. The letter calls on X to ensure its chatbot adopts a stance similar to that of other chatbot tools such as ChatGPT, and to direct users who ask Grok election-related questions to CanIVote.org, a trusted, nonpartisan voting information site.
The effort worked: Grok now directs users to a separate website, vote.gov, when asked about the election.
“We aim to maintain open communication throughout the election period and stand ready to respond to any further concerns you may have,” X’s head of international government relations, Wifredo Fernandez, wrote to secretaries, according to a copy of the letter seen by the Guardian.
It was a victory for the secretaries, a win in the fight against election misinformation, and a lesson in how to respond when AI-based tools fall short. Calling out misinformation early and often helps amplify the message, build credibility and force a response, Simon said.
Simon said he was “deeply disappointed” with the company’s initial response, but added: “I commend them and I applaud them. They’re a large, global company and they’ve decided to do the right and responsible thing. I applaud them for that. I hope they stick to it. We’ll continue to monitor.”
Musk has described Grok as an “anti-woke” chatbot that often gives sarcastic and “edgy” responses. Lucas Hansen, co-founder of CivAI, a nonprofit that warns about the dangers of AI, said Musk is “against centralized control wherever possible.” That philosophical stance puts Grok at a disadvantage in combating misinformation, as does another feature of the tool: Grok incorporates popular tweets into its responses, which Hansen said can affect accuracy.
Grok requires a paid subscription, but because it’s built into the social media platform, it could still reach a wide audience, Hansen said. And while the chatbot can give wrong answers in conversation, the images it creates could exacerbate partisan divisions.
Some of the images are outlandish, including a Nazi Mickey Mouse, Trump flying a plane into the World Trade Center, and Harris in a communist uniform. One study by the Center for Countering Digital Hate claimed Grok could create “convincing” images that mislead people, prompting the bot to generate images such as one of Harris taking drugs and Trump sick in bed, The Independent reported. News outlet Al Jazeera wrote that a recent study was able to create “lifelike images” of Harris holding a knife in a grocery store and Trump “shaking hands with a white supremacist on the White House lawn.”
“Anyone can make something a lot more inflammatory now than they ever could before,” Hansen said.