Professor Yoshua Bengio at the One Young World Summit in Montreal, Canada, Friday, September 20, 2024
Renowned computer scientist Yoshua Bengio, a pioneer in artificial intelligence, has warned of the nascent technology’s potential negative effects on society and called for more research to mitigate its risks.
Bengio, a professor at the University of Montreal and director of the Montreal Institute for Learning Algorithms (Mila), has won awards for his research in deep learning, a subset of AI that attempts to mimic human brain activity to learn how to recognize complex patterns in data.
But he has concerns about the technology, warning that some people with “great power” may want humans to be replaced by machines.
“It’s very important that we project ourselves into a future where there are machines that are as smart as we are in many ways, and what that means for society,” Bengio said, speaking to CNBC’s Tania Bryer at the One Young World Summit in Montreal.
He said machines could soon have most of the cognitive abilities of humans. Artificial general intelligence (AGI) is a type of AI technology that aims to be as intelligent as or better than humans.
“Intelligence gives power. So who controls that power?” he said. “Having a system that knows more than most people can pose a danger in the wrong hands and create further instability at a geopolitical level, for example terrorism.”
According to Bengio, there are only a limited number of organizations and governments that can afford to build powerful AI machines, and the larger the system, the smarter it becomes.
“As you know, building and training these machines costs billions of dollars and there are very few organizations and countries that can do it. That’s already true,” he said.
“There will be a concentration of power. Economic power can have a negative impact on markets, political power can have a negative impact on democracy, and military power can have a negative impact on global geopolitical stability. These are open questions that we need to study carefully and begin mitigating as soon as possible,” he said.
We don’t have a way to make sure these systems don’t harm people or work against people… We don’t know how to do that.
Yoshua Bengio
Director, Montreal Institute for Learning Algorithms
Such an outcome could be decades away, he said. “But if it’s five years, we’re not ready… because we don’t have a way to make sure these systems don’t harm people or work against people… We don’t know how to do that,” he added.
Bengio said there are arguments suggesting that current methods of training AI machines “lead to systems that are hostile to humans.”
“Additionally, there will be people who want to abuse that power, and there will be people who would be happy to see humanity replaced by machines. It’s a fringe, but these people can have a lot of power, and they can do it unless we put the proper guardrails in place now,” he said.
AI guidance and regulation
In June, Bengio endorsed an open letter entitled “A Right to Warn about Advanced Artificial Intelligence.” The document was signed by current and former employees of OpenAI, the company behind the viral AI chatbot ChatGPT.
The letter warned of “significant risks” from advances in AI and called for guidance from scientists, policymakers and the public to reduce the risks. OpenAI has faced growing safety concerns over the past few months, including the disbandment of its AGI Readiness team in October.
“The first thing the government needs to do is create regulations that force (companies) to register when building the largest frontier systems, which cost hundreds of millions of dollars to train,” Bengio told CNBC. “Governments need to understand the details of these systems.”
Because AI is evolving rapidly, Bengio said governments need to be “a little creative” and enact laws that can adapt to changes in technology.
It is not too late to steer society and human evolution in a positive and beneficial direction.
Yoshua Bengio
Director, Montreal Institute for Learning Algorithms
According to the computer scientist, companies developing AI must also be held accountable for their actions.
“Liability is another tool to force [companies] to behave well, because when it’s about their money, the fear of being sued can push companies to act in ways that protect the public. If they know they can’t be sued, because it’s kind of a gray area at this point, then they won’t necessarily behave well,” he said. “[Companies] are competing with each other and think the first to arrive at AGI will have an advantage. So it’s a race, and it’s a dangerous race.”
Bengio said the legislative process to make AI safe will be similar to how rules are created for other technologies, such as airplanes and cars. “For us to reap the benefits of AI, we have to regulate it. We have to put guardrails in place. We need democratic oversight of how the technology is developed,” he said.
Misinformation
As AI develops, there are growing concerns about the spread of misinformation, especially around elections. OpenAI announced in October that it had disrupted “more than 20 operations and deceptive networks around the world that attempted to use our models.” These included social media posts generated by fake accounts ahead of the US and Rwandan elections.
“One of the biggest near-term concerns is the ability of misinformation, disinformation and AI to influence politics and public opinion, and this will only grow as we move toward more capable systems,” Bengio said. “As we move forward, we will see machines that can produce more realistic images, more realistic voice imitations and more realistic videos,” he said.
This influence could extend to interactions with chatbots, Bengio said, citing research by Italian and Swiss researchers showing that OpenAI’s GPT-4 large language model can persuade people to change their minds better than humans can. “This is just a scientific study, but you can imagine there are people reading this who are going to do this to disrupt the democratic process,” he said.
“The hardest question of all”
Bengio said the “toughest question” is: “If we create beings who are smarter than us and have their own goals, what does that mean for humanity? Are we in danger?”
“These are all very difficult and important questions, and we don’t have all the answers. More research and precautions are needed to reduce the potential risks,” Bengio said.
He urged people to act. “We have agency. It is not too late to steer society and human evolution in a positive and beneficial direction,” he said. “But for that to happen, we need enough people who understand both the benefits and the risks, and we need enough people working on the solutions. The solutions may be technical, they could be political… policy, but we need a lot of effort moving in that direction right now,” Bengio said.
-CNBC’s Hayden Field and Sam Shead contributed to this report.