Washington –
As unchecked artificial intelligence continues to spread online, a new problem is emerging: a proliferation of websites that look like news sites but are actually run by AI.
Hearst Television’s National Investigative Unit spoke with several secretaries of state, the officials ultimately responsible for running their states’ elections. They said there are real concerns that covert use of AI in this way could affect the outcome of elections across the country.
Maine Secretary of State Targeted
Maine Secretary of State Shenna Bellows understands the impact of misinformation all too well.
Online articles from supposed news sites claimed she had been arrested, imprisoned at Guantanamo Bay, and executed. None of it was true.
“We were in the awkward position of having to explain to some media outlets, like USA Today, and to some voters, ‘No, I’m alive and well, you know, I’m doing my job in Maine,'” Bellows said.
Clearly, these stories can be very powerful.
As Secretary of State, Bellows is responsible for administering Maine’s elections.
That has made her a target: humans and AI alike have published articles about her filled with misinformation.
Articles on these websites are written by bots, which can lead to the publication of misinformation and disinformation with little or no human oversight.
A growing problem
AI-powered websites are a growing problem, according to Steven Brill, co-founder and CEO of NewsGuard, an organization that provides tools to combat misinformation.
“We started out with 40 or 50, and then it exploded,” Brill told the National Investigative Unit. “A month later, it might be 1,200. Right before Election Day, we’re sure it will be over 2,000.”
Brill’s team has tracked more than 1,000 AI-run websites that spread false information. The sites appear to be primarily automated, using scripts to scrape real news sites and then AI to rewrite the articles and distort the facts.
These sites can garner a lot of attention on the internet, especially through social media.
“They can potentially get thousands, hundreds of thousands, millions of views,” Brill said, “so the website itself is just a kind of marker of false legitimacy.”
Brill said many of the websites are targeted at the election.
These websites can be difficult to distinguish from legitimate news sites because they are generically named and laid out in a way that mimics trusted news brands. Some of these AI-powered sites have multiple sections, articles, author bylines, and advertisements.
“You can imagine how that will accelerate as we get closer to the election,” Brill said.
What is being done to combat AI and online misinformation?
Social media websites such as Facebook and X (formerly Twitter) have pledged to flag articles containing misinformation or disinformation.
Some states have passed laws or adopted resolutions addressing the use of AI in various ways.
How do you know if a website is AI-generated?
AI is constantly evolving, so over time it will become harder to distinguish these sites from ones created by humans, which is why it’s important to choose trustworthy news sources.
Below are some basic tips and things to look out for when evaluating news sites.
Check the “About Us” and “Privacy Policy” pages
Most credible news sites have an “About Us” or “Privacy Policy” page that gives readers transparent information about the organization and its ethics.
However, some AI sites leave important details off these pages, or state that the pages are still under development. Some even leave template placeholders unfilled, such as “This website was founded by [name] in [year].” The more generic the language, the more likely automation is involved. Although rare, some sites actually state that they are generated by AI, or that their content is written for “satirical” purposes and is not factual.
Any background info on the author?
Type the author’s name into Google. If no previous work turns up, that’s a red flag. Experts say AI “authors” accumulate more content over time, though, so be critical even of the previous work you do find.
Other red flags include articles that list the author as “Administrator” or “Editor,” or that carry no byline at all.
Readers may also want to look into authors more closely. Hearst Television’s National Investigative Unit found at least one website using the byline of a real reporter; upon further investigation, it found no actual connection between the reporter and the questionable articles.
Scan for overly formal or out-of-place text
“In conclusion” is a common phrase at the end of many AI articles, but it’s rarely used by human journalists.
Experts say AI systems aim to provide “helpful” responses, which can lead to language that isn’t conversational. Prose that is too formal or heavy with technical jargon is another red flag.
Repeated sentences
AI-generated articles may contain duplicate concepts or sentences that a human journalist or editor would likely have removed or revised.
Check the context
AI doesn’t have the same understanding of the world as humans do, so if an article seems unable to grasp the larger context, or misses the point entirely, think twice.
Experts say AI tries to predict the next word in a phrase or sentence rather than derive facts, which is why these articles often lack broader context.
Check the source
Legitimate news articles frequently use citations and hyperlinks to link to sources.
And if a headline doesn’t appear anywhere else, be skeptical: look for other sources, especially ones you’ve heard of and trust.