It’s no exaggeration to say that this mountain of evidence from first-hand sources carries more weight than the marked-up images from conservative commentators like Chuck Callesto and Dinesh D’Souza, both of whom have been caught spreading election disinformation in the past.
When it comes to accusations of AI fakery, the more sources, the better. While a single source can easily generate a plausible picture of an event, multiple angles of the same event from multiple independent sources make it far less likely that every one of them is committing the same fakery. Photos that match video evidence are even better, especially since creating convincing long-form videos of people and complex scenes remains a challenge for many AI tools.
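To make that intuition concrete, here is a rough back-of-the-envelope sketch in Python. The per-source probability is an illustrative assumption, not a measured figure; the point is only that the odds of independent sources all corroborating the same fabrication shrink multiplicatively.

```python
# Back-of-the-envelope illustration (not a real measurement):
# assume each truly independent source would produce a matching,
# convincing fake of the same event with probability p.
p = 0.05  # hypothetical per-source chance of an identical fake

for n in (1, 2, 3, 5):
    joint = p ** n  # independence means the probabilities multiply
    print(f"{n} independent source(s): chance all are fake ~ {joint:.2e}")
```

Real sources are never perfectly independent, of course, so the actual reduction is smaller, but the direction of the effect is the same.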
It’s also important to track down the original source of any AI imagery you’re looking at. It’s all too easy for social media users to create an AI-generated image, claim it came from a news report or live footage of an event, and then use the obvious flaws in that fake image as “proof” that the event itself was faked. A link to the original image on the original source’s own website or verified account is much more credible than a screenshot of unknown provenance, which may itself have been altered.
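If you can locate a candidate original, one rough way to check whether a circulating copy has been re-edited is a perceptual hash comparison. The sketch below uses the Python imagehash library; the file names are placeholders, and a hash match only shows that the images are visually similar, not that either one is authentic.

```python
from PIL import Image
import imagehash  # pip install imagehash

# Placeholder file names: the image linked from the source's own
# site versus the screenshot circulating on social media.
original = imagehash.phash(Image.open("original_from_source.jpg"))
screenshot = imagehash.phash(Image.open("screenshot_from_social.jpg"))

# Hamming distance between the hashes: near 0 suggests the same
# underlying image; a large distance suggests cropping, markup,
# or a different picture entirely.
print(original - screenshot)
```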
Telltale signs
For major news events like presidential rallies, tracking original or corroborating sources is useful, but verifying the authenticity of images or videos that come from a single source can be difficult. Tools like Winston AI Image Detector and IsItAI.com claim to use machine learning models to determine whether an image is AI-generated. But while detection techniques continue to evolve, these tools are generally based on detection theories that have not been shown reliable through extensive research, making false positives and false negatives a real risk.
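For readers curious what such detectors look like at the interface level, here is a minimal sketch using the Hugging Face transformers image-classification pipeline. The model identifier is a hypothetical placeholder; neither Winston AI nor IsItAI.com exposes its models this way, and, per the caveat above, any score should be treated as a weak signal rather than proof.

```python
from transformers import pipeline  # pip install transformers torch pillow

# Hypothetical model name; substitute a real AI-image-detection
# checkpoint. The call pattern is standard for any image classifier.
detector = pipeline("image-classification", model="example-org/ai-image-detector")

# Accepts a local file path or a URL to the image in question.
results = detector("rally_photo.jpg")
for r in results:
    # Each result is a dict with a class label and a confidence score.
    print(f"{r['label']}: {r['score']:.3f}")
```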
Hany Farid, a professor at the University of California, Berkeley, wrote in a LinkedIn post that two GetReal Labs models found that the photo of Harris’ rally posted by Trump “shows no evidence of AI generation,” and he cited specific features that point to the photo’s authenticity.
“The lettering on the signs and plane lacks any of the typical hallmarks of generative AI,” Farid wrote. “While the lack of evidence of manipulation is not evidence of the image’s authenticity, we have found no evidence that the image was AI-generated or digitally altered.”
And if parts of a photo strike you as nonsensical signs of AI manipulation (such as the deformed hands some AI image models produce), consider that a seeming optical illusion may have a simple explanation. The BBC points out that the apparent lack of crowd reflections on the plane in some of the Harris rally photos could be because a wide, empty stretch of tarmac separates the plane from the crowd, as a reverse angle of the scene shows. Simply circling something that looks odd in a photo with a red marker is not, in and of itself, necessarily strong evidence of AI manipulation.