NVIDIA founder and CEO Jensen Huang opened CES 2025 with a 90-minute keynote that included new products advancing gaming, self-driving cars, robotics, and agent AI.
AI “is advancing at an incredible pace,” he told a crowd of more than 6,000 people at the Michelob Ultra Arena in Las Vegas.
“It started with perceptual AI that understands images, words, and sounds, and then generative AI that creates text, images, and sounds,” Huang said. We are now entering the era of physical AI, AI that can perceive, reason, plan, and act.
NVIDIA GPUs and platforms are at the heart of this transformation, Huang explained, enabling breakthroughs across industries such as gaming, robotics, and autonomous vehicles (AV).
Huang’s keynote showcased how NVIDIA’s latest innovations are enabling this new era of AI and included several groundbreaking announcements.
Huang began his talk by looking back on NVIDIA’s three-decade history. In 1999, NVIDIA invented the programmable GPU. Since then, modern AI has fundamentally changed how computing works, he said. “In just 12 years, every layer of the technology stack has been transformed. It’s an incredible transformation.”
Revolutionizing graphics with the GeForce RTX 50 Series
“GeForce brought AI to the masses, and now AI is coming back to GeForce,” said Huang.
Huang introduced the NVIDIA GeForce RTX 5090 GPU, the most powerful GeForce RTX GPU to date, featuring 92 billion transistors and delivering 3,352 AI TOPS (trillion operations per second).
“This is our latest GeForce RTX 50 Series, built on the Blackwell architecture,” Huang said, holding the blacked-out GPU aloft and explaining how advanced AI can be harnessed to deliver breakthrough graphics. “The GPU is a real beast,” he said.
“Even the mechanical design is a miracle,” Huang said, noting that the graphics card has two cooling fans.
More variants are coming to the GPU series. The GeForce RTX 5090 and GeForce RTX 5080 desktop GPUs are scheduled to be available on January 30, with the GeForce RTX 5070 Ti and GeForce RTX 5070 desktop GPUs following in February. Laptop GPUs are expected in March.
DLSS 4 introduces multi-frame generation, which works with the full suite of DLSS technologies to improve performance by up to 8x. NVIDIA also announced NVIDIA Reflex 2, which can reduce PC latency by up to 75%.
The latest generation of DLSS can generate three additional frames for every frame it renders, Huang explained. “As a result, the amount of computation per displayed frame is dramatically reduced, enabling incredibly high-performance rendering.”
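The frame-amplification arithmetic behind this claim can be sketched with a toy calculation. The numbers below are illustrative examples, not measured benchmarks, and the function name is invented for this sketch:

```python
def dlss_output_fps(rendered_fps: float, generated_per_rendered: int = 3) -> float:
    """Effective displayed frame rate when each fully rendered frame
    is followed by AI-generated frames (illustrative arithmetic only)."""
    return rendered_fps * (1 + generated_per_rendered)

# A game natively rendering 60 fps would display 240 fps with three
# generated frames per rendered frame; only 1 in 4 displayed frames
# is fully computed by the traditional pipeline.
print(dlss_output_fps(60))  # 240
```

This shows only the frame-count amplification; the full DLSS 4 pipeline also upscales each rendered frame, further reducing the fraction of displayed pixels that are traditionally computed.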
RTX Neural Shaders use small neural networks to improve textures, materials, and lighting in real-time gameplay. RTX Neural Faces and RTX Hair advance real-time face and hair rendering, using generative AI to animate the most realistic digital characters ever. RTX Mega Geometry increases the number of ray-traced triangles by up to 100x, delivering far more detail.
Advances in physical AI with Cosmos
Alongside the graphics advances, Huang introduced the NVIDIA Cosmos world foundation model platform, describing it as a game-changer for robotics and industrial AI.
The next frontier of AI is physical AI, Huang explained, likening this moment to the transformative impact that large language models had on generative AI.
“The ChatGPT moment in general robotics is just around the corner,” he explained.
Like large language models, world foundation models are fundamental to advancing robot and AV development, but not all developers have the expertise and resources to train their own, Huang said.
Cosmos integrates generative models, tokenizers, and video processing pipelines to power physical AI systems such as AVs and robots.
Cosmos aims to bring the foresight and power of multiverse simulation to AI models, allowing them to simulate every possible future and choose the optimal actions.
The Cosmos model takes text, images, or video prompts and generates a video of the state of the virtual world, Huang explained. “The Cosmos generation prioritizes requirements specific to AV and robotics use cases, such as real-world environments, lighting, and object persistence.”
Leading robotics and automotive companies including 1X, Agile Robots, Agility, Figure AI, Foretellix, Fourier, Galbot, Hillbot, IntBot, Neura Robotics, Skild AI, Virtual Incision, Waabi, XPENG, and ride-sharing giant Uber are among the first to adopt Cosmos.
Additionally, Hyundai Motor Group is adopting NVIDIA AI and Omniverse to develop safer and smarter vehicles, enhance manufacturing, and deploy cutting-edge robotics.
Cosmos is openly licensed and available on GitHub.
Empowering developers with AI foundation models
Beyond robotics and self-driving cars, NVIDIA is providing AI foundation models to developers and creators. Huang introduced AI foundation models for RTX PCs that power digital humans, content creation, productivity, and development.
“NVIDIA GPUs are now available on all clouds, so these AI models will run on all clouds,” Huang said. “This is available to all OEMs, so they can literally take these models, integrate them into their software packages, create AI agents, and deploy them wherever their customers want their software to run.”
These models are delivered as NVIDIA NIM microservices and are accelerated by the new GeForce RTX 50 series GPUs.
The new GPUs add support for FP4 compute, boosting AI inference performance by up to 2x and enabling generative AI models to run locally in a smaller memory footprint than previous-generation hardware.
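A back-of-the-envelope sketch shows why lower precision shrinks the footprint. The 8-billion-parameter model below is a hypothetical example, and the calculation ignores activations, KV cache, and quantization metadata that real deployments also need:

```python
def model_memory_gb(num_params: float, bits_per_weight: int) -> float:
    """Approximate weight-storage footprint in gigabytes."""
    return num_params * bits_per_weight / 8 / 1e9

params = 8e9  # hypothetical 8B-parameter model
fp16 = model_memory_gb(params, 16)  # 16.0 GB
fp4 = model_memory_gb(params, 4)    #  4.0 GB
print(fp16, fp4)  # FP4 weights take one quarter the space of FP16
```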
Huang explained the potential of the new tools for creators: “We’re creating a whole bunch of blueprints that our ecosystem can tap into. They’re all completely open source, so you can take them and modify the blueprints.”
Top PC manufacturers and system builders are launching NIM-ready RTX AI PCs powered by GeForce RTX 50 Series GPUs. “AI PCs are coming to a home near you,” Huang said.
While these tools bring AI capabilities to personal computing, NVIDIA is also driving AI-driven solutions in the automotive industry, where safety and intelligence are paramount.
Autonomous vehicle innovation
Huang announced the NVIDIA DRIVE Hyperion AV platform built on the new NVIDIA AGX Thor system-on-chip (SoC). The platform is designed for generative AI models and provides advanced functional safety and autonomous driving capabilities.
“The autonomous vehicle revolution has arrived,” Huang said. “Building autonomous vehicles, like all robots, requires three computers: NVIDIA DGX to train AI models, Omniverse to test drive and generate synthetic data, and DRIVE AGX, the supercomputer in the car.”
The first end-to-end AV platform, DRIVE Hyperion integrates advanced SoCs, a sensor suite, safety systems, and an active safety and Level 2 driving stack for next-generation vehicles, and has been adopted by automotive safety pioneers including Mercedes-Benz, JLR, and Volvo Cars.
Huang emphasized the critical role of synthetic data in advancing self-driving cars. Because real-world data is limited, he explained, synthetic data is essential to the data factories that train AVs.
Powered by NVIDIA Omniverse AI models and Cosmos, this approach “generates synthetic driving scenarios that enhance training data by orders of magnitude.”
With Omniverse and Cosmos, NVIDIA’s AI data factory can scale “from hundreds of drives to billions of active miles,” dramatically increasing the datasets needed for safe, highly autonomous driving, Huang said.
“There will be a lot of training data for self-driving cars,” he added.
Toyota, the world’s largest automaker, plans to build its next-generation vehicles based on NVIDIA DRIVE AGX Orin, which runs the safety-certified NVIDIA DriveOS operating system, Huang said.
“Just as computer graphics advanced at an incredible pace, the pace of AV development is going to increase tremendously over the coming years,” Huang said. These vehicles will be functionally safe and offer advanced driver-assistance capabilities.
Agentic AI and digital manufacturing
NVIDIA and its partners have launched AI Blueprints for agentic AI, including PDF-to-podcast conversion for efficient research and video search and summarization for analyzing large volumes of video and images, enabling developers to build, test, and run AI agents anywhere.
AI Blueprints enable developers to deploy custom agents that automate enterprise workflows. This new category of partner blueprints integrates NVIDIA AI Enterprise software, including NVIDIA NIM microservices and NVIDIA NeMo, with platforms from leading providers such as CrewAI, Daily, LangChain, LlamaIndex, and Weights & Biases.
In addition, Huang announced the new Llama Nemotron family of models.
Developers can use NVIDIA NIM microservices to build AI agents for tasks such as customer support, fraud detection, and supply chain optimization.
Available as NVIDIA NIM microservices, these models can power AI agents on any accelerated system.
NVIDIA NIM microservices streamline video content management, increasing efficiency and audience engagement in the media industry.
Beyond digital applications, NVIDIA innovation is paving the way for AI to revolutionize the physical world through robotics.
“With all the enabling technologies I’ve been talking about, we’re going to see very rapid advances, incredible advances, in general robotics over the next few years,” Huang said.
In manufacturing, the NVIDIA Isaac GR00T Blueprint for synthetic motion generation helps developers produce exponentially large sets of synthetic motion data for training humanoids through imitation learning.
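The core idea, multiplying a handful of human demonstrations into a large imitation-learning dataset, can be illustrated with a toy trajectory-perturbation sketch. This is a drastic simplification for illustration, not the blueprint's actual pipeline:

```python
import random

def augment_trajectory(traj, n_variants, noise=0.01, seed=0):
    """Generate synthetic variants of one demonstrated joint trajectory
    by adding small Gaussian perturbations to each waypoint (a toy
    stand-in for synthetic motion generation)."""
    rng = random.Random(seed)  # fixed seed keeps the output reproducible
    return [[q + rng.gauss(0.0, noise) for q in traj]
            for _ in range(n_variants)]

demo = [0.0, 0.1, 0.25, 0.4, 0.5]  # one recorded joint-angle sequence (radians)
synthetic = augment_trajectory(demo, n_variants=1000)
print(len(synthetic))  # 1000 training trajectories from a single demonstration
```

In practice, systems like this also randomize physics, lighting, and object placement in simulation rather than just perturbing joint angles.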
Huang emphasized the importance of efficiently training robots using NVIDIA’s Omniverse to generate millions of synthetic motions for humanoid training.
The Mega blueprint enables large-scale simulation of the robot fleets that leaders such as Accenture and KION are deploying for warehouse automation.
These AI tools set the stage for NVIDIA’s latest innovation, a personal AI supercomputer called Project DIGITS.
NVIDIA announces Project DIGITS
To put NVIDIA Grace Blackwell on every desk and at every AI developer’s fingertips, Huang announced NVIDIA Project DIGITS.
“There’s something else I’d like to show you,” Huang said. “None of this would have been possible without this great project we started about 10 years ago. Internally, it was called Project DIGITS (Deep Learning GPU Intelligence Training System).”
Huang highlighted the legacy of NVIDIA’s AI supercomputing journey and shared how the company delivered the first NVIDIA DGX system to OpenAI in 2016. “And obviously it has revolutionized artificial intelligence computing.”
The new Project DIGITS takes this mission even further. “Every software engineer, every engineer, every creative artist, everyone who uses computers as a tool today is going to need an AI supercomputer,” Huang said.
Huang revealed that Project DIGITS, powered by the GB10 Grace Blackwell superchip, is NVIDIA’s smallest and most powerful AI supercomputer. “This is NVIDIA’s latest AI supercomputer,” Huang said as he introduced the device. “This runs the entire NVIDIA AI stack, and all NVIDIA software runs on top of this. DGX Cloud runs on top of this.”
Compact yet powerful, Project DIGITS will be available in May.
A year of breakthroughs
“This has been a great year,” Huang said as he concluded his keynote address. Huang highlighted NVIDIA’s key achievements, including the Blackwell system, physical AI foundation models, and breakthroughs in agent AI and robotics.
“I want to thank everyone for their partnership,” Huang said.