RTX 50-series GPUs may have been the star of this CES, but AI is Nvidia's real bread and butter. In Nvidia's perfect world, PCs and games would all be built around AI running directly on the PC, no cloud processing required. If there's one thing I wish this AI could do, it's stop talking so much.
Instead, what I saw when I toured Nvidia's private demo suite was a rough first draft of that AI concept. Some of the planned models may make sense with further tweaking. The company is working on a text-to-body-motion framework for developers, though we'll have to see how professional animators react to the tool. AI lip-syncing and "autonomous enemies" are supposed to transform typical boss fights, which rely on learned, predictable patterns, into something more spontaneous for both enemies and players.
Generative AI built to revamp the classic NPC and enemy AI in games doesn't really stick. Take "Ally," the AI companion Nvidia helped create for the battle royale game PUBG: Battlegrounds. It was essentially a chattier version of your typical cooperative bot, but one you could command by voice.
I asked it to speak like a pirate. The AI replied: "Ah, the life of a pirate. Let's just hope we don't end up walking the plank. Follow me, I'll find us a buggy in a minute." An unblinking NPC that interrupts and yells at you every few seconds is grating. Trying to ascribe some kind of personality to an AI that talks at you rather than with you, the way a friend would, is the worst part. The AI also took quite a while to pick up weapons. When I was inevitably shot and crawled over to it, begging to be revived, the bot fired blankly at the enemy, then fled into a neighboring house. I only survived because the developers had granted invincibility to anyone shot during the demo.
PUBG Ally offers the same experience as playing co-op with bots, except the bots now talk at you in complete sentences. Krafton is also using Nvidia technology in inZOI, a Sims-style life simulator that embeds chatbots in its characters' heads. Even though the AI was supposedly planning and making the characters' life choices, it still looked like a drab version of The Sims, without much of that series' charm. I can see the vision. It's just not there yet.
We previously tried Nvidia's ACE autonomous game characters at last year's CES and again in late 2024. In those demos, the generated voice lines were incredibly stiff, like someone reading from a cyberpunk parody stuffed with detective-genre clichés. This time, Nvidia didn't offer an update on its AI NPCs. Instead, the company showed off another demo in an upcoming game called ZooPunk, which seems to be about a rebel rabbit who carries two laser swords on his back. How do you make that lame? By adding scratchy AI voice lines for your character.
In our demo, Nvidia asked an in-game character to change the color of a spaceship from beige to purple. "No problem. Let's get started," the AI replied. Instead of purple, the ship turned orange. On the second try, the AI managed to find the right end of the color spectrum. Nvidia then asked it to change the ship's decal to a narwhal fighting a unicorn. Instead, the AI-generated image looked like a narwhal acting out the spaghetti scene from Lady and the Tramp with a horse wearing a ball-shaped horn.
The real showcase of Nvidia's AI was its desktop companion apps. The first is G-Assist, a chatbot that can automatically adjust Nvidia app settings and calculate the best in-game settings for your PC's specs. The chatbot can also chart GPU and CPU performance over time, and it runs directly on your PC rather than in the cloud. It may well be easier to ask for tuned settings than to turn all the knobs yourself. Still, at least this is an interesting use of AI.
But AI can't just be interesting. It has to be wild. Otherwise, who cares? There was a demo of an AI head made for streamers that was shockingly rude to viewers. But the real star was the talking mannequin head that sat like a gargoyle at the bottom of my desktop screen. It uses the company's neural face, lip-sync, text-to-speech, and skin models to try to create something "realistic." By my estimation, it still lands on the wrong side of the uncanny valley.
The idea is that you can talk to it, similar to Copilot on Windows, but you can also drop files onto its forehead for the AI to read and spit back information. Nvidia demonstrated this using an old PDF of the booklet that came with Doom 3 (a reminder of a time when playing a game on your PC was as easy as buying a disc and installing it). Based on the booklet, the AI could provide an overview of the game's story.
You can add a face to an AI, but it's still just a text-generating machine that doesn't understand what you're saying. Nvidia said it could add G-Assist to this animated assistant, but that wouldn't make it any faster or more reliable. It would still be a big black hole of AI occupying the bottom of your desktop, staring back at you with lifeless eyes and a vacant smile.