As artificial intelligence (AI) becomes increasingly integrated into journalism, newsrooms face the dual challenge of using the technology effectively and disclosing its use transparently to readers.
A new study from the University of Kansas (KU) reveals that readers often view the role of AI in news production negatively, even when they don’t fully understand its specific contributions. This perception can reduce trust in the reliability of news.
The study, led by researchers Alyssa Appelman and Steve Bien-Aimé of KU’s William Allen White School of Journalism and Mass Communication, examines how readers interpret AI’s involvement in news stories and how those interpretations affect the stories’ perceived credibility.
AI bylines and reader perceptions
Appelman and Bien-Aimé, along with collaborators Haiyan Jia of Lehigh University and Mu Wu of California State University, conducted an experiment to investigate how different AI-related bylines affect readers.
Participants read an article about the safety of the artificial sweetener aspartame and were randomly assigned one of five bylines. These ranged from “written by staff writer” to “written by artificial intelligence,” with some variations indicating collaboration with or assistance from AI.
The researchers found that readers interpreted these bylines in different ways. Even when the byline simply said “written by staff writer,” the absence of a named human author led many readers to believe that AI had played a role in the article’s creation.
Participants drew on prior knowledge to fill in what AI might have contributed, and they often overestimated its involvement.
“People have different ideas about what AI means, but if it’s not clear what the AI did, people will fill in the gaps about what they thought the AI did,” Appelman explained.
AI and trust: a complex relationship
Regardless of interpretation, participants consistently rated news articles as less trustworthy if they believed artificial intelligence was involved. This effect persisted even when the byline clearly identified human contributions alongside AI assistance.
Readers seem to prioritize perceived human involvement when evaluating the credibility of an article.
“It wasn’t about AI or humans; it was about how much work they thought humans were doing,” Bien-Aimé said.
The findings highlight the importance of clear and accurate disclosure about the role of AI in news production.
Transparency is important, but simply stating that AI was used may not be enough to alleviate readers’ concerns. If readers perceive that AI contributes more than humans, trust in the news may diminish.
Transparency and ethical considerations
These studies highlight the need for greater transparency and improved communication regarding the use of AI in journalism.
Recent controversies, such as Sports Illustrated’s alleged publication of AI-generated articles as if they were written by humans, highlight the risks of insufficient disclosure.
The study also suggests that readers may be more accepting of AI in contexts where it does not replace traditionally human roles. For example, algorithmic recommendations on platforms like YouTube are often perceived as helpful rather than intrusive.
However, in fields such as journalism, which have traditionally valued human expertise, the introduction of AI can create skepticism about the quality and reliability of work.
“Part of our research framework has always been to assess whether readers are aware of journalists’ work,” Bien-Aimé said. “And we want to continue to gain a deeper understanding of how people view the work of journalists.”
Need for improved reader education
Appelman and Bien-Aimé’s findings point to gaps in readers’ understanding of journalistic practices. Disclosures and conventions such as AI involvement notes, corrections, ethics training, and bylines are often interpreted by readers differently than journalists intend.
To close this gap, researchers highlight the need for journalists and educators to better communicate the details of how AI is used in news production.
“This shows us that we need to be clear. I think journalists have a lot of assumptions in our field that consumers know what we do. Often they don’t,” Bien-Aimé said.
Implications for journalism and future research
Both studies call for further investigation into how readers perceive the role of AI in journalism and how that perception influences their trust in the media. Understanding these dynamics can help journalists improve their practices to maintain credibility while leveraging the potential of AI.
As AI continues to shape the future of journalism, the field must strike a fine balance between technological innovation and maintaining public trust.
Transparency, clear communication, and ethical practices are essential for AI to serve as a tool that enhances rather than undermines news credibility.
The research results are published in Communication Reports and Computers in Human Behavior: Artificial Humans.