NemoClaw is NVIDIA's open-source security layer for OpenClaw, announced at GTC 2026 on March 16. It wraps OpenClaw in three controls: a kernel-level sandbox (deny-by-default), an out-of-process policy engine that compromised agents cannot override, and a privacy router that keeps sensitive data on local Nemotron models while routing complex reasoning to cloud models. It doesn't replace OpenClaw's application-layer security—it adds OS-level enforcement underneath it. Still in early alpha, no benchmarks yet, and it ties you closer to NVIDIA hardware. But for enterprises that want OpenClaw capabilities with actual governance, it's the first credible answer.
Three days ago, I wrote about OpenClaw's security crisis—the CVEs, the 824+ malicious skills on ClawHub, the government bans. I ended that piece by saying enterprises wanted OpenClaw's capabilities but couldn't justify the risk. On the same day that post went live, Jensen Huang walked onstage at GTC 2026 and announced NVIDIA's answer: NemoClaw.
The timing wasn't coincidental. NemoClaw exists because OpenClaw's security problems weren't bugs to be patched—they were architectural gaps that needed an entirely new layer underneath the agent framework. And NVIDIA, with its hardware ecosystem and open-source Nemotron models, was uniquely positioned to build that layer.
I've spent the last 48 hours reading the source code, architecture docs, and every technical teardown I could find. Here's what NemoClaw actually is, what it gets right, and where the gaps remain.
What NemoClaw Actually Is (And Isn't)
NemoClaw is not a fork of OpenClaw. It's not a competing agent framework. It's a security and governance layer that wraps around OpenClaw—a plugin and a runtime that add enterprise-grade controls without modifying the agent code itself.
Technically, it's two components:
- A TypeScript plugin for the OpenClaw CLI
- A Python blueprint that orchestrates NVIDIA's OpenShell runtime

It installs in a single command (openclaw nemoclaw) and deploys alongside your existing OpenClaw setup. Think of it as the difference between running a process as root and running it inside a hardened container with mandatory access controls: the process doesn't change, but its blast radius shrinks dramatically.
Harrison Chase, the founder of LangChain, summarized the demand well: "I guarantee that every enterprise developer out there wants to put a safe version of OpenClaw onto their computer. The bottleneck has never been interest. It has been the absence of a credible security and governance layer underneath it."
OpenShell: The Three Pillars of NemoClaw's Security Architecture
The core innovation is OpenShell, NVIDIA's runtime that sits between the agent and the operating system. It has three components, and understanding each one matters because they address fundamentally different attack vectors.
The Sandbox: Deny-By-Default Isolation

OpenShell's sandbox starts every agent session with a strict baseline policy (openclaw-sandbox.yaml) that controls:

- Network endpoints: Which domains and IPs the agent can reach
- Filesystem paths: Which directories the agent can read from or write to
- Process execution: Which binaries the agent can spawn
- Privilege escalation: Blocked entirely by default

```yaml
# Example: openclaw-sandbox.yaml baseline policy
network:
  allow:
    - "api.openai.com:443"
    - "api.anthropic.com:443"
    - "build.nvidia.com:443"
  deny: "*"                # Everything else blocked
filesystem:
  allow:
    - "/workspace/**"      # Agent's working directory
    - "/tmp/openclaw/**"   # Temporary files
  deny:
    - "/etc/**"
    - "$HOME/.ssh/**"
    - "$HOME/.env"
    - "**/.git/config"
process:
  allow:
    - "node"
    - "python3"
  deny:
    - "curl"               # No direct HTTP from agent
    - "wget"
    - "nc"
    - "ssh"
privilege_escalation: deny
```

The critical design choice is deny-by-default. Unlike OpenClaw's native permissions, which are permissive unless you explicitly restrict them, the sandbox blocks everything unless explicitly allowed. An agent can crash, corrupt its own data, or attempt arbitrary execution within its sandbox, and none of it touches the host system.

This directly addresses the problems we documented in our OpenClaw security analysis. The ClawHavoc campaign succeeded because malicious skills could execute arbitrary code with the user's full permissions. Under OpenShell, those same skills would hit a wall at the sandbox boundary.
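To make the deny-by-default semantics concrete, here is a minimal sketch of how such a policy could be evaluated. This is illustrative only, not NemoClaw's implementation; the policy dictionary mirrors the YAML above, and the glob matching uses Python's fnmatch as a stand-in for whatever matcher OpenShell actually uses.

```python
from fnmatch import fnmatch

# Hypothetical in-memory policy mirroring openclaw-sandbox.yaml.
POLICY = {
    "network": {"allow": ["api.openai.com:443", "api.anthropic.com:443"]},
    "filesystem": {
        "allow": ["/workspace/**", "/tmp/openclaw/**"],
        "deny": ["/etc/**", "**/.ssh/**", "**/.env"],
    },
}

def allowed(category: str, target: str) -> bool:
    rules = POLICY.get(category, {})
    # Explicit deny rules win, even over an allow match.
    if any(fnmatch(target, pat) for pat in rules.get("deny", [])):
        return False
    # Deny-by-default: no allow match means the action is blocked,
    # which plays the role of the YAML's catch-all deny.
    return any(fnmatch(target, pat) for pat in rules.get("allow", []))

print(allowed("filesystem", "/workspace/app/main.py"))  # True
print(allowed("filesystem", "/etc/passwd"))             # False
print(allowed("network", "evil.example.com:443"))       # False
```

The important property is the final line of `allowed`: absence of a matching allow rule is a block, so forgetting to write a rule fails closed rather than open.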
The Policy Engine: Out-of-Process Enforcement
This is where NemoClaw's architecture diverges most sharply from application-layer security approaches. The Policy Engine evaluates every agent action at four levels: binary, destination, method, and path. But the key design decision is that policy enforcement runs out-of-process: it executes outside the agent's address space, in a separate process that the agent cannot access, modify, or terminate.

Why this matters: in OpenClaw's native security model, permissions are enforced by the agent framework itself. If a malicious skill compromises the agent process, it can potentially modify its own permission checks. We've seen this pattern in prompt injection attacks where the agent is manipulated into believing it has different permissions than it actually does.

With OpenShell's out-of-process enforcement, even a fully compromised agent, one where an attacker has arbitrary code execution within the sandbox, cannot modify the policies constraining it. The policy engine is a separate trust boundary.
The Privacy Router: Data Sovereignty for Agent Inference
The third component solves a problem that enterprises consistently raise when we consult on cloud vs. on-premise AI deployments: how do you use powerful cloud models without sending sensitive data to third-party APIs?

NemoClaw's Privacy Router intercepts every inference call the agent makes. Based on user-defined privacy policies, it routes requests to different models:

- Sensitive context (PII, internal data, proprietary code) → local Nemotron models running on-device
- Complex reasoning (multi-step planning, code generation) → cloud frontier models (OpenAI, Anthropic, Google)

The agent never makes direct outbound API calls. OpenShell mediates every request, strips or redacts sensitive content before cloud routing, and logs every routing decision for audit.

Supported local models include:

- Nemotron 3 Nano 4B: Runs on consumer RTX GPUs, handles routine classification and extraction
- Nemotron 3 Super 120B: Available via build.nvidia.com, handles complex tasks that need to stay within NVIDIA's infrastructure
- Qwen 3.5 and Mistral Small 4: Optimized for OpenShell but not NVIDIA-native

This is a genuinely useful architecture for regulated industries. A healthcare organization could route patient data through a local Nemotron model while using Claude or GPT-5 for general reasoning tasks that don't involve PHI, maintaining HIPAA compliance without sacrificing capability.
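The routing decision itself can be sketched simply. This is a hedged illustration of the idea, not NemoClaw's classifier: a few regex patterns stand in for whatever sensitivity detection the Privacy Router actually performs, and the tier names are placeholders.

```python
import re

# Illustrative PII/credential patterns; a production router would use a
# far more robust classifier than these three regexes.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US SSN-like number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),     # email address
    re.compile(r"(?i)\b(api[_-]?key|password)\b"),  # credential keywords
]

def route(prompt: str) -> str:
    """Return which model tier should handle this request."""
    if any(p.search(prompt) for p in SENSITIVE_PATTERNS):
        return "local"   # e.g. an on-device Nemotron model
    return "cloud"       # e.g. a frontier model, after redaction

print(route("Summarize patient record, SSN 123-45-6789"))        # local
print(route("Plan a multi-step refactor of the billing module")) # cloud
```

The useful property is that the routing rule is explicit and auditable: every decision is a deterministic function of the request content and the policy, which is what makes the audit log meaningful.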
| Security Layer | OpenClaw Native | NemoClaw + OpenShell |
|---|---|---|
| Enforcement | In-process (agent checks itself) | Out-of-process (separate trust boundary) |
| Default posture | Permissive (allow unless denied) | Restrictive (deny unless allowed) |
| Compromise impact | Full host access | Contained to sandbox |
| Policy modification | Agent can potentially modify | Agent cannot reach policy engine |
| Audit trail | Application logs | Kernel-level audit with tamper protection |
| Live updates | Requires restart | Hot-reload with full audit trail |
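The hot-reload row deserves a concrete illustration. The sketch below shows one way the behavior could work; it is an assumption for illustration, since NemoClaw's actual mechanism is not documented: the enforcement loop re-reads the policy file when its mtime changes and appends an audit record for every reload, so updates take effect without restarting the agent.

```python
import json
import os
import tempfile

# Hypothetical file locations for this sketch.
workdir = tempfile.mkdtemp()
POLICY_FILE = os.path.join(workdir, "policy.json")
AUDIT_LOG = os.path.join(workdir, "policy-audit.jsonl")

def maybe_reload(last_mtime, policy):
    """Hot-reload the policy if the file changed; audit every reload."""
    mtime = os.stat(POLICY_FILE).st_mtime
    if mtime != last_mtime:
        with open(POLICY_FILE) as f:
            policy = json.load(f)
        with open(AUDIT_LOG, "a") as log:   # append-only audit trail
            log.write(json.dumps({"event": "policy_reload",
                                  "mtime": mtime}) + "\n")
    return mtime, policy

# Write an initial policy, then let the enforcement loop pick it up.
with open(POLICY_FILE, "w") as f:
    json.dump({"network": {"deny": "*"}}, f)

mtime, policy = maybe_reload(0.0, None)
print(policy)  # {'network': {'deny': '*'}}
```

Tamper protection in the real system would require the log to live outside the agent's sandbox (as in the table above), since an append-only file the agent can reach is not actually tamper-proof.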
NemoClaw vs. OpenClaw vs. NanoClaw: Choosing the Right Stack
NemoClaw isn't the only project trying to solve OpenClaw's security problems. NanoClaw, a minimalist alternative, takes a completely different approach. Here's how they compare:
Choose OpenClaw alone if you're an individual developer experimenting with agent automation and you understand the security risks. Follow the defense patterns we outlined in our security analysis.
Choose NemoClaw if you're an enterprise team that needs audit trails, policy enforcement, and data sovereignty controls. You'll need NVIDIA hardware for the full local-inference story, and you should treat it as evaluation-only until it exits alpha.
Choose NanoClaw if you want the simplest possible agent setup with container-level isolation. It's less secure than NemoClaw's kernel-level approach but far simpler to deploy and doesn't lock you into NVIDIA's ecosystem.
| Aspect | OpenClaw | NemoClaw | NanoClaw |
|---|---|---|---|
| Focus | Agent capabilities | Security and governance | Simplicity and portability |
| Codebase | ~500,000 lines, 70+ deps | Plugin + blueprint on top of OpenClaw | ~500 lines of core logic |
| Security model | Application-layer (API whitelists) | Kernel-level (OS sandboxing) | Container isolation (Docker) |
| Target user | Individual developers, power users | Enterprise teams needing compliance | Developers wanting minimal overhead |
| Hardware | Any | Optimized for NVIDIA (RTX, DGX) | Any (including ARM, legacy) |
| Model support | Multi-vendor | NVIDIA Nemotron + any via Privacy Router | Optimized for Anthropic Claude |
| Installation | Complex (70+ dependencies) | Single command on top of OpenClaw | Single command, minimal deps |
| Governance | Minimal | Full audit trails, policy engine | None built-in |
What NemoClaw Gets Right
Architectural separation of concerns. The agent layer (OpenClaw) handles capabilities. The control layer (NemoClaw/OpenShell) handles governance. Neither modifies the other. This is the same principle behind container orchestration—you don't rewrite applications to run in Kubernetes; you wrap them in a security and management layer.
Out-of-process policy enforcement. This is the single most important design decision. Application-layer security has a fundamental weakness: if the application is compromised, so are its security checks. OpenShell's out-of-process model eliminates this class of attacks entirely.
Privacy-aware routing. Most organizations we work with aren't opposed to cloud models—they're opposed to sending sensitive data to cloud models. The Privacy Router lets them use both local and cloud inference with explicit, auditable routing rules. This is exactly the pattern we recommend when helping clients secure AI systems handling sensitive data.
Where the Gaps Remain
No benchmarks. NVIDIA launched NemoClaw without publishing any performance data. How much latency does OpenShell's interception layer add? What's the throughput impact of routing every inference call through the Privacy Router? For enterprises evaluating this against their SLAs, the absence of benchmarks is a significant gap.
NVIDIA hardware dependency. The full security story—local Nemotron inference, GPU-accelerated sandboxing—requires NVIDIA hardware. Organizations running AMD, Apple Silicon, or CPU-only infrastructure can't use the Privacy Router's local model capabilities. This isn't necessarily a dealbreaker (many enterprises already have NVIDIA GPUs), but it narrows the addressable market and raises vendor lock-in concerns.
Alpha maturity. NemoClaw is early-stage alpha. The codebase hasn't been battle-tested by the community. The OpenShell runtime hasn't gone through independent security audits. For a product whose entire value proposition is security, this is a significant caveat. We'd want to see formal penetration testing results and at least one major security firm's sign-off before recommending it for production.
Complexity budget. OpenClaw already has 500,000 lines of code and 70+ dependencies. NemoClaw adds another layer on top. For teams already struggling with OpenClaw's operational complexity, adding a security shim managed by a separate project with its own update cadence introduces real maintenance burden.
What This Means for Enterprise AI Agent Strategy
NemoClaw validates something we've been telling clients for months: the AI agent security problem can't be solved by the agent frameworks themselves. It requires an independent control plane that operates at a lower level of the stack.
Jensen Huang called OpenClaw "the operating system for personal AI" and compared NemoClaw to "a new renaissance in software." The operating system metaphor is actually apt—NemoClaw is doing for AI agents what SELinux did for Linux processes: adding mandatory access controls that the process itself cannot override.
The implication for teams evaluating AI agent deployments is clear: plan for a control plane that sits outside the agent framework, because the framework cannot be trusted to police itself.
The Bottom Line
NemoClaw is the first credible attempt to solve OpenClaw's enterprise security problem at the right architectural layer. The out-of-process policy enforcement and privacy-aware routing are exactly what regulated industries need. But it's early alpha, unproven at scale, and ties you to NVIDIA's ecosystem.
For now, track the project. Evaluate it in a lab environment. And if you're already running OpenClaw in production, don't wait for NemoClaw: implement the isolation and defense patterns we outlined last week now. Those are table stakes regardless of which security layer you eventually adopt.
If your organization is navigating the AI agent security landscape and needs help building the right architecture—whether that includes NemoClaw, container-level isolation, or a custom control plane—we've been through this before. The technology is new, but the security principles are not.
Frequently Asked Questions
What is NemoClaw?

NemoClaw is an open-source security and governance stack that sits on top of OpenClaw, the viral AI agent platform. Announced at GTC 2026 on March 16, it consists of a TypeScript plugin for the OpenClaw CLI and a Python blueprint that orchestrates NVIDIA's OpenShell runtime. It adds kernel-level sandboxing, out-of-process policy enforcement, and privacy-aware model routing to OpenClaw deployments. It installs in a single command and is available on GitHub.



