Nvidia is releasing three new NIM microservices – small, independent services that are part of a larger application – to help enterprises introduce additional controls and safety measures to their AI agents.
One of the new NIM services targets content safety, working to prevent AI agents from producing harmful or biased output. Another works to keep conversations focused on approved topics only, and a third helps guard AI agents against jailbreak attempts — that is, attempts by users to lift the agents' software restrictions.
These three new NIM microservices are part of Nvidia NeMo Guardrails, Nvidia’s existing open source collection of software tools and microservices aimed at helping enterprises improve their AI applications.
“One-size-fits-all approaches cannot adequately protect and control complex agentic AI workflows,” the press release said. “By applying multiple lightweight, specialized models as guardrails, developers can cover gaps that may appear if only more general global policies and protections exist.”
AI companies appear to be realizing that getting enterprises to adopt their AI agent technology is not as easy as they initially thought. While people like Salesforce CEO Marc Benioff recently predicted that Salesforce alone would deploy more than 1 billion agents within the next 12 months, the reality is probably a little different.
A recent study from Deloitte predicts that around 25% of companies are already using AI agents or plan to use them in 2025, and that by 2027 about half of businesses will use them. This shows that while companies are clearly interested in AI agents, adoption is not keeping pace with the rate of innovation in the AI space.
Nvidia seems to be hoping that efforts like this will make the deployment of AI agents safer and less experimental. Only time will tell if that is indeed true.