NVIDIA has announced the Agent Toolkit, an open-source suite of tools designed to help enterprises and developers build autonomous AI agents. The main goal is to address concerns about security, data control, and liability when deploying these agents in enterprise environments.

OpenShell: Security and Privacy

The centerpiece of the toolkit is NVIDIA OpenShell, an open-source runtime that enforces policy-based security and privacy guardrails for autonomous agents. NVIDIA refers to individual agents as "claws" and positions OpenShell as the mechanism that keeps them in check. The company is working with Cisco, CrowdStrike, Google, and Microsoft Security to build OpenShell support into their security tools.
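
NVIDIA has not published OpenShell's programming interface as part of this announcement, so the sketch below is a hypothetical Python illustration of the general pattern rather than OpenShell code: a policy object whitelists tools and data domains, and a wrapper refuses any agent action that falls outside them. The Policy class, rule names, and guarded_call helper are all invented for the example.

```python
# Hypothetical illustration of policy-based guardrails around an agent's tool calls.
# Policy, its fields, and guarded_call are made up for this sketch; they are not
# OpenShell's actual API.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Policy:
    allowed_tools: set[str] = field(default_factory=set)         # tools the agent may invoke
    allowed_data_domains: set[str] = field(default_factory=set)  # data it may touch
    redact: Callable[[str], str] = lambda text: text              # optional output scrubber


def guarded_call(policy: Policy, tool: str, domain: str, run_tool: Callable[[], str]) -> str:
    """Execute a tool call only if the policy permits it, then redact the output."""
    if tool not in policy.allowed_tools:
        raise PermissionError(f"tool '{tool}' is not permitted by policy")
    if domain not in policy.allowed_data_domains:
        raise PermissionError(f"data domain '{domain}' is not permitted by policy")
    return policy.redact(run_tool())


# An agent confined to internal search over HR documents only.
hr_policy = Policy(allowed_tools={"internal_search"}, allowed_data_domains={"hr"})
print(guarded_call(hr_policy, "internal_search", "hr", lambda: "employee handbook, section 3"))
```

The point of the pattern is that the policy, not the model, decides what the agent is allowed to reach.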

Research and Costs

The toolkit also includes NVIDIA AI-Q, an agentic search blueprint built with LangChain. It uses a hybrid architecture in which frontier models handle orchestration while NVIDIA's Nemotron models take on the more complex research tasks. According to NVIDIA, this approach can cut query costs by more than 50% while maintaining high accuracy.
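
Because AI-Q is described as a LangChain blueprint, the orchestration/research split is easy to sketch. The code below is an assumption-laden outline, not NVIDIA's blueprint: the model names, the three-step plan/research/compose flow, and the reliance on API keys in the environment are all illustrative choices for this example.

```python
# Assumption-heavy sketch of a hybrid orchestration/research split in LangChain.
# Model names and the flow are illustrative; this is not the AI-Q blueprint code.
# Assumes OPENAI_API_KEY and NVIDIA_API_KEY are set in the environment.
from langchain_openai import ChatOpenAI
from langchain_nvidia_ai_endpoints import ChatNVIDIA

orchestrator = ChatOpenAI(model="gpt-4o")  # frontier model: plans and composes
researcher = ChatNVIDIA(model="nvidia/llama-3.1-nemotron-70b-instruct")  # Nemotron: research steps


def answer(query: str) -> str:
    # 1. The frontier model breaks the query into research sub-questions.
    plan = orchestrator.invoke(
        f"Split this query into at most three research sub-questions, one per line:\n{query}"
    ).content
    # 2. The Nemotron model works through each sub-question.
    findings = [
        researcher.invoke(f"Answer concisely: {step}").content
        for step in plan.splitlines()
        if step.strip()
    ]
    # 3. The frontier model composes the final answer from the findings.
    return orchestrator.invoke(
        f"Query: {query}\nFindings:\n" + "\n".join(findings) + "\nWrite the final answer."
    ).content
```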

Partners

Partners adopting the toolkit include Adobe, Atlassian, SAP, Salesforce, ServiceNow, and Siemens. Salesforce is developing a reference architecture in which employees use Slack as the orchestration layer for Agentforce agents, drawing on data in both on-premises and cloud environments (a pattern sketched in rough form below). Siemens has launched the Fuse EDA AI Agent, which uses NVIDIA Nemotron to autonomously orchestrate workflows in its electronic design automation suite.
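
As a rough picture of what the Salesforce pattern implies, and not Salesforce's actual reference architecture, a Slack bot can act as the thin orchestration layer: it receives an employee's message and forwards it to an agent backend. The call_agentforce helper below is a hypothetical placeholder for whatever Agentforce endpoint such an architecture would expose.

```python
# Hypothetical sketch of Slack as an orchestration layer in front of an agent backend.
# Uses the slack_bolt SDK; call_agentforce is a made-up placeholder, not an Agentforce API.
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler


def call_agentforce(prompt: str) -> str:
    """Placeholder for a request to an agent backend (on-premises or cloud)."""
    return f"[agent reply to: {prompt}]"


app = App(token="xoxb-your-bot-token")  # Slack bot token


@app.event("message")
def route_to_agent(event, say):
    # Forward the employee's message to the agent and post the reply in-thread.
    say(text=call_agentforce(event.get("text", "")), thread_ts=event.get("ts"))


if __name__ == "__main__":
    SocketModeHandler(app, "xapp-your-app-token").start()
```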

The Bigger Shift

NVIDIA is positioning itself as the software infrastructure layer for enterprise agentic deployment. The Agent Toolkit, OpenShell, the Nemotron models, and AI-Q are components of a stack that NVIDIA wants to sit underneath enterprise software. The toolkit is available now on build.nvidia.com, with support on AWS, Google Cloud, Microsoft Azure, and Oracle Cloud Infrastructure.

For those evaluating on-premises deployments, the trade-offs deserve careful consideration. AI-RADAR's analytical frameworks at /llm-onpremise can help weigh them.