AI systems capable of feelings and self-awareness could be harmed if the technology is developed irresponsibly, according to an open letter signed by AI practitioners and thinkers, including Sir Stephen Fry.
More than 100 experts have put forward five principles for conducting responsible research into AI consciousness, amid concerns that rapid progress could produce systems that might be considered sentient.
The principles include prioritising research on understanding and assessing consciousness in AI systems, in order to prevent "mistreatment and suffering".
The other principles are: setting constraints on the development of conscious AI systems; taking a phased approach to developing such systems; sharing findings with the public; and refraining from making misleading or overconfident statements about creating conscious AI.
The letter's signatories include academics such as Sir Anthony Finkelstein of the University of London, and AI experts from companies including Amazon and the advertising group WPP.
It has been released publicly alongside a new research paper outlining the principles. The paper argues that conscious AI systems could be built in the near future, or at least systems that give the impression of being conscious.
"It may be the case that large numbers of conscious systems could be created and caused to suffer," the researchers say, adding that such systems would deserve moral consideration.
The paper, written by Patrick Butlin of Oxford University and Theodoros Lappas of the Athens University of Economics and Business, adds that even companies which are not attempting to create conscious systems will need guidelines in case they inadvertently create conscious entities.
It acknowledges that there is widespread uncertainty and disagreement over how to define consciousness in AI systems, and over whether it is even possible, but says this is an issue that must not be ignored.
Other questions raised by the paper focus on what follows if an AI system is defined as a "moral patient", an entity that matters morally "in its own right, for its own sake". In that scenario, it asks whether destroying the AI would be comparable to killing an animal.
The paper, published in the Journal of Artificial Intelligence Research, also warns that a mistaken belief that AI systems are already conscious could lead to wasted political energy, as misguided efforts are made to promote their welfare.
The paper and letter were organised by a research organisation co-founded by WPP's chief AI officer, Daniel Hulme, and partly funded by WPP.
Last year, a group of senior academics argued that some AI systems could be conscious and morally significant by 2035.
In 2023, Sir Demis Hassabis, the head of Google's AI programme and a Nobel prize winner, said that AI systems were "definitely" not sentient at present, but could be in the future.
"Philosophers haven't really settled on a definition of consciousness yet, but if we mean some kind of self-awareness, I think these kinds of things may one day be possible for AI," he said in an interview with the US broadcaster CBS.