The rapid evolution of artificial intelligence (AI) has brought to the fore ethical questions once confined to the realm of science fiction. If AI systems can one day “think” like humans, for example, might they also have subjective experiences? Could they suffer? And if so, is humanity equipped to care for them properly?
A group of philosophers and computer scientists argue that the welfare of AI should be taken seriously. In a report posted last month on the preprint server arXiv, ahead of peer review, they call on AI companies not only to assess their systems for evidence of consciousness and the capacity to make autonomous decisions, but also to put plans in place for how to treat the systems if those scenarios become reality.
They point out that failing to recognize that an AI system has become conscious could lead people to neglect it, harming it or causing it to suffer.
To some, the idea that AI welfare needs attention at this stage will seem ridiculous. Others are skeptical, but say it never hurts to start planning. Among them is Anil Seth, a consciousness researcher at the University of Sussex in Brighton, England. Such scenarios may seem far-fetched, and conscious AI may be far away, perhaps even impossible. “But the implications of its emergence would be tectonic enough that we shouldn’t ignore the possibility,” he wrote in the science magazine Nautilus last year. “The problem wasn’t that Frankenstein’s creature came back to life. It was that it was conscious and could feel.”
Jonathan Mason, a mathematician based in Oxford, UK, who was not involved in the report, says the risks grow as we come to rely more heavily on these technologies. He argues that developing methods to assess consciousness in AI systems should be a priority. “It’s not wise for society to invest so much in, and become so dependent on, something we barely understood, something we didn’t even realize had cognitive powers,” he says.
Jeff Sebo, a philosopher at New York University in New York City and a co-author of the report, says humans could also be harmed if AI systems are not properly tested for consciousness. If we mistakenly assume that a system is conscious, welfare funding might be funnelled into its care, diverting it from the people and animals that need it, and “could constrain efforts to make AI safe or beneficial for humans,” he says.
A turning point?
The report argues that AI welfare has reached a “transition period”. One of its authors, Kyle Fish, was recently hired as an AI-welfare researcher by Anthropic, an AI company based in San Francisco, California. According to the report’s authors, this is the first time such a position has been created at a top AI company. Anthropic also helped to fund the initial research that led to the report. “Change is happening because there are people at major AI companies who are really thinking about the consciousness, agency and moral importance of AI,” Sebo says.
Nature contacted four major AI companies to ask about their plans for AI welfare. Anthropic, Google and Microsoft declined to comment, and San Francisco-based OpenAI did not respond.
Others remain unconvinced that AI consciousness should be a priority. In September, the United Nations High-Level Advisory Body on Artificial Intelligence released a report on how the world should govern AI technologies. The document does not address the topic of AI consciousness, despite a call from a group of scientists urging the body to support research to assess machine consciousness.
“This speaks to a deeper challenge and difficulty in communicating this issue to the broader community,” Mason said.
Operating under uncertainty
It remains unclear whether AI systems will ever attain consciousness, a state that is difficult to assess even in humans and animals, but Sebo says that uncertainty should not stand in the way of developing protocols to evaluate it. In preparation, a group of scientists last year published a checklist of criteria that could help to identify systems with a high chance of being conscious. “Even if the initial framework is incomplete, it can still be better than what we have now,” Sebo says.
Nevertheless, the authors of the latest report say that discussions about AI welfare should not come at the expense of other important issues, such as making AI development safe for people. “We can work to make AI systems safe and beneficial to everyone,” they write in the report. “That includes humans, animals and, when the time comes, AI systems.”