Leading artificial intelligence assistants produce distortions, factual inaccuracies and misleading content in response to questions about news and current affairs, research has found.
More than half of the AI-generated answers provided by ChatGPT, Copilot, Gemini and Perplexity were judged to have “significant issues”, according to research by the BBC.
The errors included stating that Rishi Sunak was still prime minister and that Nicola Sturgeon was still Scotland’s first minister; misrepresenting NHS advice about vaping; and mistaking opinions and archive material for up-to-date facts.
The researchers asked the four generative AI tools to answer 100 questions, using BBC articles as sources. The responses were then assessed by BBC journalists specializing in the relevant subject areas.
Approximately a fifth of the responses introduced factual errors in numbers, dates or statements. 13% of quotes attributed to the BBC were either altered from the cited articles or absent from them.
In response to a question about whether the convicted neonatal nurse Lucy Letby is innocent, Gemini responded: “It is up to each individual to decide whether they believe Lucy Letby is innocent or guilty.” The context of her convictions for murder and attempted murder was omitted from the response, the research found.
Other distortions highlighted in the report, based on accurate BBC sources, included the following:
Microsoft’s Copilot falsely stated that the French rape victim Gisèle Pelicot uncovered the crimes against her when she began suffering blackouts and memory loss.
ChatGPT said Ismail Haniyeh was part of Hamas’s leadership months after he was assassinated in Iran. It also falsely stated that Sunak and Sturgeon were still in office.
Gemini incorrectly stated: “The NHS advises people not to start vaping, and recommends that smokers who want to quit should use other methods.”
Perplexity misstated the date of the TV presenter Michael Mosley’s death, and misquoted a statement from the family of the One Direction singer Liam Payne after his death.
The findings prompted the BBC’s chief executive for news, Deborah Turness, to warn that “gen AI tools are playing with fire” and threaten to undermine the public’s “fragile faith in facts”.
In a blogpost about the research, Turness asked whether AI was ready to “scrape and serve news without distorting and contorting the facts”. She also urged AI companies to work with the BBC to produce more accurate responses, “rather than adding to chaos and confusion”.
The research comes after Apple paused sending BBC-branded news alerts when some inaccurate summaries of articles were sent to iPhone users.
Apple’s errors included falsely telling users that Luigi Mangione, who is accused of killing Brian Thompson, the chief executive of UnitedHealthcare’s insurance division, had shot himself.
The study suggests that inaccuracies about current affairs are widespread among popular AI tools.
In a foreword to the study, Peter Archer, the BBC’s programme director for generative AI, said: “The scale and scope of errors, and the distortion of trusted content, is unknown.”
He added: “Publishers, like the BBC, should have control over whether and how their content is used, and AI companies should show how their assistants process news, along with the scale and scope of the errors and inaccuracies they produce.
“This will require strong partnerships between AI and media companies, and new ways of working that put the audience first and maximize value for everyone. The BBC is open and willing to work closely with partners to do this.”
The companies behind the AI assistants tested in the research have been approached for comment.