Nvidia was well-positioned for the AI boom thanks to its track record in gaming and its mastery of GPUs. The next market the company hopes to corner is advanced robotics, including humanoid machines. But the technical hurdles could be a reality check for Jensen Huang's robotics ambitions.
Jensen Huang, wearing his signature black leather jacket, stretched out his arms and gestured to the humanoid robots at his sides, as the audience applauded. “It’s about my size,” he joked on stage at Computex 2024 in Taipei, Taiwan in June.
“Robotics is here. Physical AI is here. This is not science fiction,” he said. The humanoids at his sides, however, were flat renderings on a giant screen. The machines that actually appeared on stage were wheeled robots that looked like delivery bots.
Robots are a big part of Huang's vision for the future, one shared by other tech luminaries including Elon Musk. Beyond the Computex display, humanoid robots also came up on Nvidia's two most recent earnings calls.
Most analysts agree that Nvidia's fortunes are secure for the next several years. Demand for its graphics processing units drove the company's market capitalization past $3 trillion. But the semiconductor industry is cruel: data center spending, which accounts for 87% of Nvidia's revenue, comes in booms and busts. Nvidia needs another big market.
At Computex, Huang said there will be two “high-volume” robotic products in the future. The first will likely be self-driving cars, and the second will likely be humanoid robots. Thanks to machine learning, the technologies behind the two are converging.
Both machines require human-like awareness of rapidly changing environments and instantaneous reactions with little margin for error. Both also require a huge amount of what Huang is touting: AI computing power. But robotics represents only a small portion of Nvidia's revenue today, and growing it is not simply a matter of time.
For Nvidia's position in the technology stratosphere to last, Huang needs the robotics market to grow. Nvidia's story over the past few years has been one of remarkable engineering, foresight, and timing, but the challenge of making robots a reality could be even tougher.
How does Nvidia bring robots to the masses?
Artificial intelligence offers great potential for robotics. But expanding the field means making engineering and architecture more accessible.
“Robot AI is the most complex, because a large language model is a software problem, but a robot is a mechanical engineering problem, a software problem, and a physics problem. It's much more complex,” said Raul Martynek, CEO of the data-center landlord DataBank.
Most of the people working in robotics today are experts with PhDs in the field. The same was true of language-based AI a decade ago. Today, you don't need a PhD to build an AI application, because the underlying models and the computing to support them are widely available. Layers of software and vast language and image libraries have lowered the barrier to entry and let just about anyone build with AI.
Nvidia's robotics stack needs to do something similar, but because applying AI in physical space is harder, making it accessible to ordinary developers is harder too. The stack itself takes some unpacking: it is full of platforms, libraries, and product names.
Omniverse is a simulation platform that provides a virtual world where developers can customize and test robot simulations. Isaac, which Nvidia calls a “gym,” is built on top of Omniverse: it places a robot in an environment so it can practice tasks.
Jetson Thor is Nvidia's chip for powering robots. Project GR00T, which the company calls a “moonshot,” is a foundation model for humanoid robots. In July, the company launched a synthetic-data-generation service and Osmo, a software layer that ties it all together.
Huang often says that humanoids are easier to create because the world is already created for humans.
“The most adaptable robots in the world are humanoids, because we created the world for us,” he said at Computex, adding that “because we have the same physique, we have more data to train these robots.”
Collecting data about how we move still takes time, effort, and money. Tesla, for example, pays people $48 an hour to wear special suits to train its humanoid robot, Optimus.
“That's the biggest problem in robotics,” said Sofia Verastegui, an AI expert who has worked at Apple, Google, and Microsoft.
Still, analysts see potential. Analysts at the research firm William Blair recently wrote that “Nvidia's capabilities in robotics and digital twins (with Omniverse) have the potential to scale into large businesses in their own right.” They expect Nvidia's automotive business to grow 20% annually through 2027.
Nvidia has announced that BMW is using Isaac and Omniverse to train factory robots. Boston Dynamics, BYD Electronics, Figure, Intrinsic, Siemens, and Teradyne Robotics use Nvidia's stack to build robotic arms, humanoids, and other robots.
But three robotics experts told Business Insider that, so far, Nvidia hasn't lowered the barrier to entry for would-be robot builders the way it has for language- and image-based AI. And competitors are stepping in to pioneer the ideal robotics stack before Nvidia can corner it.
“We recognize that developing AI that can interact with the physical world is extremely difficult,” an Nvidia spokesperson told BI in an email. “That’s why we developed an entire platform to help companies train and deploy robots.”
The company launched a humanoid robot developer program in July. Developers whose applications are approved get access to all of these tools.
Nvidia can’t do it alone
Ashish Kapoor is keenly aware of how much progress is still to be made in this field. He spent 17 years at Microsoft, where he led robotics research and helped develop AirSim, a computer-vision simulation platform launched in 2017 and discontinued last year. Kapoor left around the time of the shutdown to build his own platform. Last year, he founded Scaled Foundations and launched Grid, a robot-development platform designed for aspiring robot builders.
No company can solve difficult robotics problems alone, he said.
“I saw it happen with AI: the real solutions come when communities work on things together,” Kapoor said. “That's when the magic starts to happen, and this needs to happen in robotics now.”
Kapoor said every aspiring humanoid player seems to be going it alone. And there's a reason there's a graveyard of robotics startups: robots make it into real-world deployments but don't work well enough, and customers give up before the machines can improve. “A common joke is that every robot has a team of 10 people trying to make it work,” Kapoor said.
Grid offers a free tier and a managed service with more hands-on support. Scaled Foundations builds its own foundation models for robotics and encourages users to develop theirs, too.
Some elements of Nvidia's robotics stack are open source. And while Huang likes to say that Nvidia works with every robotics and AI company on the planet, some developers worry that the giant will protect its own success first and support its ecosystem second.
“They're doing the Apple effect. To me, they're trying to lock users into their ecosystem as much as possible,” said Jonathan Stevens, chief developer advocate at Everypoint, a computer-vision company.
An Nvidia spokesperson told BI that this perception is inaccurate, saying the company is “working with most of the leading companies in the robotics and humanoid developer ecosystem” to help deploy robots quickly. “Our success comes from our ecosystem,” they said.
Scaled Foundations and Nvidia aren't the only companies working on foundation models for robotics. Skild AI raised $300 million in July to build its own.
What makes a humanoid?
Simulators are an important stop on the road to humanoid robots, but they do not necessarily lead to human-like perception.
Discussing robotic arms at Computex, Huang said Nvidia provides the “computers, acceleration layers, and pre-trained AI models” needed to bring AI robots into factories. Using robotic arms at scale in factories is nothing new; they have been building cars since 1961. But Huang was talking about AI robots, meaning intelligent ones. The arms building cars today have very little intelligence: they are programmed to perform repetitive tasks and often use sensors rather than cameras to “see.”
An AI-powered robotic arm should be able to handle a variety of tasks, perhaps even while on the move: picking up different items and placing them in different locations without breaking them, recognizing objects and guardrails, and navigating in a sensible order. Humanoid robots, however, are quite different from the most useful non-humanoid robots, and some roboticists question whether humanoids are the right target to aim for.
“I'm very skeptical,” said a former Nvidia robotics expert with more than 15 years in the field, who was granted anonymity to protect industry relationships. “The cost of creating a humanoid robot and making it versatile will be higher than that of a robot that doesn't look like a human and can only perform a single task, but performs that task well and quickly.”
But Huang is pressing ahead. “I think Jensen is obsessed with robots because, at the end of the day, what he's trying to do is create the future,” Martynek said.
Self-driving cars and robotics are a big part of Nvidia's future. The company told BI that it expects everything to eventually become autonomous, starting with robotic arms and vehicles and extending to buildings and entire cities.
“I was at Apple when we developed the iPad, which was inspired by Star Trek and the futuristic worlds in movies,” Verastegui said, adding that robotics taps into our imagination in the same way.