LAWRENCE — As artificial intelligence becomes increasingly involved in journalism, journalists and editors are grappling not only with how to use the technology, but also with how to disclose that use to readers. New research from the University of Kansas shows that when readers believe AI is somehow involved in producing the news, their trust in the news’ credibility decreases, even if they don’t fully understand what the AI contributed.
The finding suggests that readers are attuned to the use of AI in news production, even when they perceive it negatively. But the researchers say that understanding what technology contributes to the news, and how, can be complex, and that disclosing it in a way readers can understand is a difficult problem — one they argue the field needs to address.
“We know that the growing presence of AI in journalism is something journalists and educators are talking about, but we were interested in how readers perceive it. So we wanted to learn more about perceptions of bylines and their effects, or how people think about news that AI helped generate,” said Alyssa Appelman, associate professor in the William Allen White School of Journalism and Mass Communication and co-author of the study.
Appelman collaborated with Steve Bien-Aimé, an assistant professor in the William Allen White School of Journalism and Mass Communication, on an experiment that exposed readers to a news article about the artificial sweetener aspartame and its safety for human consumption. Readers were randomly assigned one of five bylines: written by a staff writer; written by a staff writer with an artificial intelligence tool; written by a staff writer with artificial intelligence assistance; written by a staff writer in collaboration with artificial intelligence; or written by artificial intelligence. The article was otherwise identical in every condition.
The findings were published in two research papers, both authored by Appelman and Bien-Aimé of KU, along with Haiyan Jia of Lehigh University and Mu Wu of California State University, Los Angeles.
One paper focused on how readers understood the AI bylines. After reading the article, readers were surveyed about what they thought the specific byline they received meant and whether they agreed with several statements designed to measure media literacy and attitudes toward AI. The findings showed that, regardless of the byline they received, participants held broad views of what the technology might have done. While the majority reported feeling that humans were the main contributors, some believed AI might have been used as a research aid or to write a first draft that was then edited by a human.
The results showed that participants understood what AI technology can do and that humans can guide it with prompts. However, the different byline conditions left room for people to interpret how, specifically, AI contributed to the article they read. When AI contributions were mentioned in the byline, readers’ perceptions of source and author credibility suffered. Even the byline “written by staff writer” was interpreted by some readers to mean the article was at least partially written by AI, since no human was named in the story.
Readers used sensemaking to interpret AI’s contributions, the authors write — a tactic in which people draw on information they have already learned to make sense of unfamiliar situations.
“People have different ideas about what AI means, and when it’s not clear what the AI did, people fill in the gaps about what they thought it did,” Appelman said.
The researchers found that judgments of the news’ credibility were negatively affected, regardless of how people thought AI had contributed to the story.
Those findings were published in the journal Communication Reports.
The second research paper investigated how perceptions of humanness influence the relationship between perceived AI contribution and judgments of trustworthiness. It found that while acknowledging AI enhances transparency, it is the perceived human contribution to a story that makes it more trustworthy to readers.
Regardless of the byline condition they received, participants reported how much of the article’s creation they believed AI was involved in. The higher the percentage they gave, the lower their credibility judgments. Even those who read “written by staff writer” reported feeling that AI was involved to some degree.
“What mattered was not whether it was AI or human, but how much work they thought the human did,” Bien-Aimé said. “This shows us we need to be clear. I think in our field, journalists make a lot of assumptions that consumers know what we do. Often they don’t.”
The finding suggests that people place more trust in human contributions in traditionally human fields such as journalism, and that when technology like AI replaces those contributions, perceived trustworthiness can suffer. By contrast, the authors said, services that traditionally do not involve humans — such as YouTube suggesting videos based on previous viewing — may not be affected.
While readers’ tendency to perceive human-written news as more trustworthy can be read positively, journalists and educators also need to understand that how — or whether — AI is used must be clearly disclosed. Transparency is a healthy practice, as shown by the scandal earlier this year in which Sports Illustrated allegedly published AI-generated articles presented as the work of human writers. But the researchers argue that simply stating that AI was used may not be clear enough for people to understand what the AI did, and that if readers feel AI contributed more than humans, perceptions of trustworthiness can suffer.
The findings on authorship and humanness were published in Computers in Human Behavior: Artificial Humans.
The authors say both journal articles suggest that further research is needed into how readers perceive AI’s contributions to journalism, and that the field could benefit from improving how its practices are disclosed. Appelman and Bien-Aimé study readers’ understanding of various journalistic practices, asking readers what certain disclosures — such as corrections, bylines, ethics training and the use of AI — mean to them. They have found that readers often do not interpret these disclosures the way journalists intend.
“Part of our research framework has always been to assess whether readers are aware of journalists’ work,” Bien-Aimé said. “And we want to continue to gain a deeper understanding of how people view the work of journalists.”