Nvidia reveals generative AI NPCs that can dream up their own dialogue
A new piece of software, Nvidia Ace, was introduced at Nvidia's Computex keynote. It will allow game developers to add AI-driven NPCs to their worlds.
Everything in tech is getting generative AI, and Nvidia is at the forefront, both in its hardware and in the tools it builds. Last year, Nvidia announced DLSS 3, which uses AI to generate additional game frames and ease the load on the GPU. Now, the company wants to bring generative AI tools to game development.
Nvidia Ace is a new addition to Team Green’s toolset for developers, taking advantage of the massive boom in language models like ChatGPT to generate interactive dialogue. The demo, in which a player receives a side quest from a concerned ramen shop owner, offers an idealized view of a potential future.
It leverages several pieces of Nvidia tech alongside Epic’s Unreal Engine 5 and MetaHuman. The world is built in Unreal Engine 5, the shop owner is created with the MetaHuman generator, and all the dialogue is then fed into Audio2Face, Nvidia’s tool that automatically and quickly animates a lip-synced face to match the speech.
The end result is a rudimentary quest-giving conversation, but one supposedly generated entirely by AI based on the player’s questions. Presumably the tech, once it reaches shipped games, would function much like the AI chatbots already available.
Generative AI comes to video game development thanks to Nvidia
Generative AI in video games isn’t new: Chinese MMO developers have already begun making games with it, and Blizzard has used it to create concept art for future titles and World of Warcraft expansions.
Nvidia, however, has been betting on AI for a long time, and that bet has helped push the company close to the trillion-dollar club. It recently announced upcoming AI optimizations for Windows systems running RTX cards, and has struck deals with OpenAI, ChatGPT’s developer, to supply an estimated 30,000 GPUs to help power the chatbot.