Whether you think artificial intelligence will save the world or end it, there is no doubt that we are in a moment of great excitement. AI as we know it might not exist without Yoshua Bengio.
Nicknamed the “godfather of artificial intelligence,” Bengio, 60, is a Canadian computer scientist who has dedicated his career to researching neural networks and deep learning algorithms. His pioneering work paved the way for the AI models we use today, such as OpenAI’s ChatGPT and Anthropic’s Claude.
“Intelligence gives power, and whoever controls that power — if it’s at a human level or above — will be very powerful,” Bengio said in an interview with Yahoo Finance. “Technology is typically used by people seeking more power, whether it’s economic control, military control, or political control. So before you develop a technology that has the potential to centralize power in dangerous ways, we have to be very careful.”
In 2018, Bengio and two colleagues, former Google (GOOG) vice president Geoffrey Hinton (winner of the 2024 Nobel Prize in Physics) and Meta (META) chief AI scientist Yann LeCun, won the Turing Award, often described as the Nobel Prize of computing. In 2022, Bengio became the most cited computer scientist in the world, and Time magazine has named him one of the 100 most influential people in the world.
Even though Bengio helped invent the technology, he is now a voice of caution in the world of AI. That caution comes as investors continue to show great enthusiasm for the space this year, driving AI stocks to new records.
For example, shares of AI chip darling Nvidia (NVDA) are up 172% since the beginning of the year, while the S&P 500 (^GSPC) is up 21%.
The company’s market capitalization now stands at a staggering $3.25 trillion, according to Yahoo Finance data, just behind Apple (AAPL) for the title of world’s most valuable company.
I spoke to Bengio about the potential threats of AI and which tech companies are doing a good job of addressing them.
The interview has been edited for length and clarity.
Yasmin Khorram: Why should we be concerned about human-level artificial intelligence?
Yoshua Bengio: If this falls into the wrong hands, whatever that means, it could be very dangerous. These tools could soon help terrorists, and they could also help state actors seeking to subvert democracy. And there is a problem that many scientists have pointed out: with the way we currently train these systems, we do not clearly know how to keep them from becoming autonomous and developing their own preservation goals, at which point we could lose control of them. So we are on a path to creating monsters that could be more powerful than us.
OpenAI, Meta, Google, Amazon — which AI giants are getting it right?
Ethically speaking, I think the company doing the best is Anthropic (Anthropic’s major investors include Amazon (AMZN) and Google (GOOG)). But I think every company is biased by the economic structure it operates in, where survival depends on being among the leading companies and ideally being the first to reach AGI (artificial general intelligence). And that means competition, an arms race between companies in which public safety is likely to be compromised.
Anthropic shows many signs of being very concerned about avoiding catastrophic outcomes. It was the first to propose a safety policy with a commitment to halt its efforts if the AI develops capabilities that could be dangerous. Along with Elon Musk, it was also the only one to support SB 1047 — in other words, to say, “Yes, we agree to some improvements, to being more transparent about safety procedures and results, and to being liable if we cause significant damage.”
What do you think about the massive rally in AI stocks like Nvidia?
What I think is fairly certain is the long-term trajectory. So if you’re in this for the long term, it’s a pretty safe bet. Unless we fail to protect the public … (then) the reaction could cause everything to collapse, right? Either there will be a societal backlash against AI in general, or something truly catastrophic will occur and the economic structure will collapse.
Either way, it would be bad for investors. So if investors are wise, they will understand that we need to tread carefully and avoid the mistakes and catastrophes that could collectively damage our future.
What are your thoughts on the AI chip race?
Chips are clearly becoming a key piece of the puzzle, and, of course, a bottleneck. Having high-end AI chip capability has strategic value, because the need for massive amounts of computation is unlikely to disappear with any scientific advance I can imagine in the coming years. And every step of the supply chain will matter. Very few companies can do this right now, so I’m hoping there will be more investment and that the field can diversify a little.
What do you think about Salesforce deploying 1 billion autonomous agents by 2026?
Autonomy is one of the goals of these companies, and it makes good economic sense. Commercially, it would be a huge step forward in terms of the number of applications it would open up. Think of all the personal assistant applications; they require far more autonomy than current state-of-the-art systems can deliver. So it’s understandable that companies aspire to this. What concerns me is that Salesforce (CRM) thinks it can get there within two years. Before that happens, we need to put guardrails in place, both governmental and technological.
Governor Newsom vetoed California’s SB 1047. Was that a mistake?
He didn’t give any reason that made sense to me, such as wanting to regulate all the small systems, not just the big ones. … Things can move quickly – we’re talking years. And even if the probability is small, say a 10% (chance of disaster), you need to be prepared. Regulation is necessary. Companies should already be making these safety efforts and documenting them in a manner that is consistent across the industry.
Second, companies were worried about lawsuits. I’ve talked to many of these companies, and tort law already exists, so lawsuits can already be filed if harm is caused. What this bill was trying to do on liability was actually narrow the scope of litigation. … There were 10 conditions, and all of them had to be met for the law to support a case. So I think it was actually helpful. But there is ideological resistance to any further involvement of the state in the affairs of these AI labs.
Yasmin Khorram is a senior reporter at Yahoo Finance. Follow Yasmin on Twitter/X @YasminKhorram and LinkedIn. Send newsworthy tips to Yasmin: yasmin.khorram@yahooinc.com