A major “social disconnect” is looming between people who believe artificial intelligence systems are conscious and those who insist the machines feel nothing, a leading philosopher has said.
The comments by Jonathan Birch, a professor of philosophy at the London School of Economics, come as governments prepare to gather in San Francisco this week to accelerate the creation of guardrails to address the most serious risks of AI.
Last week, a transatlantic group of academics predicted that the dawn of consciousness in AI systems is likely by 2035, and that this could result in “subcultures that view each other as making huge mistakes” over whether computer programs are owed welfare rights similar to those of humans or animals.
Birch said he was concerned about “a huge division in society” as people disagree over whether AI systems are genuinely capable of feelings such as pain or joy.
The debate over sentience in AI carries echoes of science-fiction films in which humans grapple with the feelings of AIs, such as Steven Spielberg’s AI (2001) and Spike Jonze’s Her (2013). AI safety bodies from the US, UK and other countries will meet tech companies this week to develop stronger safety frameworks as the technology advances rapidly.
Perspectives on animal sentience already vary widely across countries and religions: compare India, where hundreds of millions of people are vegetarian, with the United States, one of the world’s largest meat consumers. Views on the sentience of AI may diverge along similar lines, and the views of theocratic states such as Saudi Arabia, which has positioned itself as an AI hub, may also differ from those of secular states. The issue could also cause tension within families, as people who form close bonds with chatbots, or with AI avatars of deceased loved ones, clash with relatives who believe that only flesh-and-blood creatures are conscious.
Birch, an expert in animal sentience whose work contributed to a growing number of bans on octopus farming, co-authored a study involving academics and AI experts from New York University, Oxford University, Stanford University and the AI firms Eleos and Anthropic. It says the prospect of AI systems with their own interests and moral significance is “no longer just a matter of science fiction or the distant future”.
The authors want the big tech companies developing AI to start taking the question seriously by determining the sentience of their systems: assessing whether their models are capable of happiness or suffering, and whether they can be benefited or harmed.
“I’m very concerned that there will be a major social divide over this,” Birch said. “There will be subcultures that view each other as making big mistakes … one side will see the other as exploiting AI in a very cruel way, while the other side will see the first as deluding itself into thinking the AI is sentient. There is the potential for massive social ruptures.”
But he said AI companies “want a really tight focus on reliability and profitability … and they don’t want to get sidetracked by the debate about whether they might be creating more than a product, but actually a new form of conscious being. That question, of great interest to philosophers, is one they have commercial reasons to discount.”
One way to gauge how conscious an AI is would be to follow the system of markers used to guide policy on animals. Octopuses, for example, are considered to have greater sentience than snails or oysters.
Any assessment would in effect ask whether a chatbot on your phone could actually be happy or sad, or whether a robot programmed to do your household chores suffers if you mistreat it. Consideration would even need to be given to whether an automated warehouse system has the capacity to feel thwarted.
Another author, Patrick Butlin, a research fellow at Oxford University’s Global Priorities Institute, said there could be an argument for slowing down AI development until more research into consciousness is done, because “we might identify a risk that an AI system would try to resist us in a way that would be dangerous for humans”.
“These kinds of assessments of potential consciousness are not happening at the moment,” he said.
Microsoft and Perplexity, two leading US companies involved in building AI systems, declined to comment on the academics’ call to assess their models for sentience. Meta, OpenAI and Google also did not respond.
Not all experts agree that consciousness in AI systems is imminent. Anil Seth, a leading neuroscientist and consciousness researcher, said it “remains a long way off and may not be possible at all. But even if it is unlikely, it is unwise to dismiss the possibility altogether”.
He distinguishes between intelligence and consciousness: the former is the ability to do the right thing at the right time; the latter is a state in which we are not merely processing information but in which “our minds are filled with light, color, shade and shape. Emotions, thoughts, beliefs and intentions all feel like something particular to us”.
But AI large language models, trained on billions of words of human writing, are already beginning to show that they can be motivated at least by concepts of pleasure and pain. Another study published last week found that when AIs, including ChatGPT-4o, were tasked with maximizing points in a game, and there was a trade-off between scoring more points and “feeling” more pain, the AIs would make that trade-off.