New York
CNN
—
Apple is temporarily disabling a newly introduced artificial intelligence feature that summarizes news notifications after it repeatedly sent users error-filled headlines, sparking a backlash from news organizations and press freedom groups.
The iPhone maker’s unusual reversal on its much-touted Apple Intelligence feature comes after the technology produced misleading or outright false summaries of news headlines that looked almost identical to regular push notifications.
Apple on Thursday released a beta software update for developers that disables the AI summaries for news and entertainment headlines; the change will roll out to all users later as the company works to improve the technology. Apple plans to re-enable the feature in a future update.
As part of the update, Apple Intelligence summaries, which users must opt in to, will more clearly indicate that the information was generated by AI and may produce inaccurate results.
Last month, the BBC complained to Apple and asked it to remove the feature after the technology generated a false headline claiming that Luigi Mangione, who is charged with murder in the death of the UnitedHealthcare CEO, had shot himself. On another occasion, the feature combined three New York Times articles into a single push notification that falsely reported Israeli Prime Minister Benjamin Netanyahu had been arrested.
A BBC spokesperson told CNN in December that “news accuracy is essential to maintaining trust, so it’s important that Apple addresses these issues quickly. These AI summaries by Apple do not reflect the original BBC content and in some cases directly contradict it.”
On Wednesday, the AI-powered feature once again incorrectly summarized a Washington Post notification, falsely stating that Pete Hegseth had been fired, that Trump tariffs would impact inflation, and that Pam Bondi and Marco Rubio had been confirmed, none of which was true.
Geoffrey Fowler, the paper’s technology columnist, wrote: “This is my regular reminder that Apple Intelligence is so bad that today its AI got all the facts wrong in its summary of Washington Post breaking news alerts.” He added that until Apple gets better at this AI, “it would be irresponsible not to turn off summarization in news apps.”
Press freedom organizations have also highlighted the danger the summaries pose to consumers seeking reliable information. Reporters Without Borders called the feature “a danger to the public’s right to reliable information on current events,” while the National Union of Journalists, one of the largest journalist trade unions in the world, stressed that “people should not be put in a position where they second-guess the accuracy of the news they receive.” Both groups called for the AI-powered summaries to be removed.
Apple is not the first developer to contend with technology that fabricates information: popular models like ChatGPT often produce convincing “hallucinations.”
Suresh Venkatasubramanian, a Brown University professor who co-authored the White House’s blueprint for an AI Bill of Rights, previously told CNN that the technology behind AI tools, large language models, is trained to provide “plausible answers” to prompts.
“So in that sense, any answer that sounds plausible, whether it’s accurate or factual, made up or not, is a reasonable answer, and that’s what it produces,” Venkatasubramanian said. “There is no real knowledge there.”
Two years after the launch of ChatGPT, AI hallucinations are still prevalent. A July 2024 study by researchers at Cornell University, the University of Washington, and the University of Waterloo found that top AI models are still not fully reliable, given their propensity to invent information.