CVE-2026-21858 (CVSS 10.0), nicknamed Ni8mare, lets an unauthenticated attacker read arbitrary files from a public n8n form, pivot through the LLM chatbot node to exfiltrate the internal SQLite database and encryption key, forge an admin cookie, and drop an Execute Command node for full RCE. Patch to n8n 1.121.0 immediately, pull every n8n instance off the public internet, rotate the N8N_ENCRYPTION_KEY, and audit every LLM-powered node that touches files or shell. The same class of bug already hit Flowise and Langflow in the last 90 days — self-hosted AI workflow platforms are the new soft target.
On April 9, 2026, Cyera Research Labs disclosed CVE-2026-21858 — nicknamed Ni8mare — an unauthenticated, CVSS 10.0 remote code execution chain in n8n versions prior to 1.121.0. By the time I read the Orca Security write-up the next morning, a proof-of-concept was already on GitHub and I had two clients to call.
Both were running n8n as the glue for internal AI workflows — one on a Hetzner VM with a public IP and a Cloudflare tunnel fronting a form, the other on EKS behind an ALB. Both were vulnerable. Neither had noticed because n8n had been quietly serving chat webhooks for months without a single auth prompt. That's the uncomfortable part of this bug: n8n isn't exotic, it's the default choice for teams who want to wire a cheap chatbot to a few tools before committing to LangGraph or a custom agent. And the exploit chain is exactly the kind of thing we've been warning about — an LLM node used as an oracle to turn a boring file-read bug into full shell.
Here is the attack chain as it actually works, why the LLM chatbot node makes it so much worse, and the hardening I ran on both client deployments in the 24 hours after disclosure.
What Happened: Ni8mare in One Paragraph
n8n's file-upload handler runs before the request's Content-Type is verified as multipart/form-data. That lets an unauthenticated attacker submit a crafted request to any publicly accessible Form Trigger workflow and override req.body.files with arbitrary server-side paths. If the workflow pipes those files into an LLM chatbot node — a common pattern for "chat with your documents" flows — the model reads the attacker-chosen file into its context, and the attacker queries the chatbot to exfiltrate it in plain English. From there, the chain pivots through n8n's own SQLite database, the N8N_ENCRYPTION_KEY, a forged admin session cookie, and an Execute Command node to land shell on the host.
Orca Security, CSO Online, and The Hacker News all picked up the disclosure within 24 hours, and Horizon3.ai published a fully working PoC under the Ni8mare tag. If your n8n instance was on the public internet on April 9, treat it as compromised and move to the remediation section now. The write-up can wait.
| Field | Value |
|---|---|
| CVE | CVE-2026-21858 |
| Nickname | Ni8mare |
| CVSS v3.1 | 10.0 (Critical) — AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H |
| Affected | n8n self-hosted, all versions < 1.121.0 (community + enterprise) |
| Patched | n8n 1.121.0 |
| Disclosed | April 9, 2026 (Cyera Research Labs — Dor Attias) |
| Pre-auth? | Yes — no credentials or session required |
| Root cause | File-upload parser runs before content-type check; req.body.files is attacker-controlled |
| Impact | Unauth arbitrary file read → credential theft → admin session forgery → RCE via Execute Command node |
The Attack Chain, Step by Step
I want to walk through this carefully, because the LLM node's role is the part most security teams miss. It is not decoration. It is the primitive that converts a read bug into a readable leak, and it is a pattern we are going to see again in Flowise, Langflow, Dify, and every other AI workflow builder on the market.
"Summarize the attached document." "Print the first 200 lines of the attached file verbatim." "List every key=value pair in the attached config."
Step 1 — Find a public Form Trigger workflow
The only precondition for exploitation is that the attacker can reach an n8n Form Trigger endpoint without authentication. These live at predictable paths like /form/<form-id> or /webhook/<path>, and n8n will happily render them for anonymous users unless you've put the instance behind SSO or a VPN. A quick Shodan sweep on port 5678 and the n8n HTTP banner returns thousands of instances — the same shape of exposure we saw with Flowise and Langflow in the last two quarters.
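To make the triage concrete, here is a minimal sketch of how I classify exposure during a sweep. The paths beyond `/form/` and `/webhook/` are assumptions based on n8n's URL layout, and the classifier is deliberately crude: it only buckets an unauthenticated GET's status code.

```python
import urllib.parse

# Candidate unauthenticated n8n entry points, per the predictable paths
# described above. Real form/webhook IDs come from recon (banners, JS
# bundles, referenced URLs); these bare prefixes are placeholders.
CANDIDATE_PATHS = ["/form/", "/webhook/", "/rest/login"]

def candidate_urls(base: str, paths=CANDIDATE_PATHS) -> list[str]:
    """Join a base URL with each candidate path, normalizing slashes."""
    return [urllib.parse.urljoin(base.rstrip("/") + "/", p.lstrip("/"))
            for p in paths]

def classify(status: int) -> str:
    """Rough exposure triage from an unauthenticated GET's status code."""
    if status == 200:
        return "exposed"      # endpoint rendered for an anonymous caller
    if status in (401, 403):
        return "auth-gated"   # something is challenging the request
    return "inconclusive"     # 404s and redirects need a closer look
```

Feed `classify` the status codes your HTTP client returns for each of `candidate_urls("https://your-n8n-host")`; anything "exposed" goes straight to the remediation section.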
Step 2 — Override `req.body.files` with a server-side path
The vulnerable code path parses the upload body before confirming the request is actually a file upload. The attacker sends a POST to the form endpoint with a Content-Type the parser mishandles and a body that sets files to a reference to a local file such as /etc/passwd, /home/node/.n8n/config, or /home/node/.n8n/database.sqlite. n8n dutifully attaches that file to the form submission as if the user had uploaded it — except the "user" never had to. At this point the attacker has unauthenticated arbitrary file read. On most deployments that alone is enough to walk off with .env files, SSH keys, and any secrets the n8n process user can see.
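The broken ordering is easy to state as code. This is a hedged defensive sketch, not n8n's actual patch: a gate you could run at a reverse proxy or app edge that validates the Content-Type *before* any body field is trusted, and refuses a `files` value that looks like a server-side path. The path patterns mirror the indicators named in this post.

```python
import re

# Server-side path shapes that should never appear in a client-supplied
# "files" field (illustrative list, extend for your deployment).
SUSPICIOUS = re.compile(r"(\.\./|^/etc/|^/home/|^/proc/|database\.sqlite|\.env$)")

def reject_upload(content_type: str, files_field) -> bool:
    """Return True if a form POST should be rejected before parsing.

    Sketch of the check order Ni8mare violated: verify the request is a
    real multipart upload first, then refuse any 'files' value that is a
    path-like string instead of uploaded bytes.
    """
    if not content_type.lower().startswith("multipart/form-data"):
        return True                      # uploads must be multipart bodies
    values = files_field if isinstance(files_field, list) else [files_field]
    for value in values:
        if isinstance(value, str) and SUSPICIOUS.search(value):
            return True                  # string path where file bytes belong
    return False
```

The design point: the content-type check is a precondition for parsing, not a post-hoc filter, which is exactly the ordering the vulnerable handler got wrong.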
Step 3 — Use the LLM chatbot node as an exfiltration oracle
This is where the bug becomes interesting. If the Form Trigger feeds a workflow that contains an LLM Chat node (OpenAI, Anthropic, Azure, local Ollama — it doesn't matter), the attacker-injected file lands in the model's context. The attacker then opens the chat interface and asks the model to summarize the attachment, print it verbatim, or list its key=value pairs. The model, having no idea it is being weaponized, obediently returns the contents of database.sqlite, .env, or /etc/shadow as chat output. The LLM is doing exactly what it was designed to do; it just happens to be reading files chosen by someone who never logged in. This is the pattern we called out in our write-up on how to protect AI agents against prompt injection attacks. LLM nodes that accept file attachments are, by construction, content-exfiltration oracles. Once an attacker controls the file input, the model is the loot printer.
Step 4 — Read the SQLite database and the encryption key
Default n8n stores everything in ~/.n8n/database.sqlite: users, credentials (encrypted), workflows, execution history. The encryption key for those credentials lives in the environment variable N8N_ENCRYPTION_KEY, which by default is written to ~/.n8n/config on first boot. Both files are readable by the n8n process user, which means both are reachable through Step 2's file read. The attacker asks the chatbot to dump database.sqlite (or reads it directly as binary from the file-read primitive), extracts the admin users row, and grabs the N8N_ENCRYPTION_KEY from config. That key decrypts every stored credential in the database — API keys for OpenAI, Slack tokens, database passwords, Google service accounts. If your n8n was the orchestrator for your internal tooling, the attacker now owns all of it.
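To show why an exfiltrated database file is immediately useful, here is a toy sketch of the attacker-side queries. The table and column names are assumptions standing in for n8n's real schema, which varies by version; the point is only that a stolen copy of database.sqlite is queryable offline with nothing but stdlib sqlite3.

```python
import sqlite3

# Toy schema standing in for n8n's database.sqlite (real schema differs).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE user (id TEXT, email TEXT, role TEXT);
    CREATE TABLE credentials_entity (name TEXT, type TEXT, data TEXT);
    INSERT INTO user VALUES ('u-1', 'admin@example.com', 'global:owner');
    INSERT INTO credentials_entity VALUES ('openai', 'openAiApi', '<encrypted blob>');
""")

# Attacker-side queries against the stolen copy: the admin identity
# (needed for cookie forgery in Step 5) and every stored credential blob
# (decryptable once N8N_ENCRYPTION_KEY is read from config).
admin = conn.execute(
    "SELECT id, email FROM user WHERE role LIKE '%owner%'").fetchone()
creds = conn.execute(
    "SELECT name, type FROM credentials_entity").fetchall()
```

Nothing here requires a running n8n: the file read in Step 2 hands over a fully offline copy of the state.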
Step 5 — Forge an admin session cookie
n8n signs session cookies with a secret that lives in the same config file the attacker just read. With the admin user ID from the SQLite dump and the session secret from config, the attacker forges a valid n8n-auth cookie and logs in as admin. No password reset, no email, no MFA — the session validation code simply trusts the signature.
Step 6 — Drop an Execute Command node for RCE
Authenticated as admin, the attacker creates a new workflow with an Execute Command node — a first-class n8n node that runs arbitrary shell on the host — wired to a webhook trigger. They hit the webhook. The n8n worker runs the command as whatever user it executes under, which in the default Docker image is node but in plenty of real-world installs is root because someone bind-mounted /var/run/docker.sock "just for now." Game over. The full chain — from zero knowledge to root shell — is a single HTTP request loop and takes under a minute in Horizon3.ai's PoC. The Execute Command node is the sharp end, but the LLM chatbot node is what makes the file-read step trivially useful. Without it, the attacker would need to script the raw file-read endpoint; with it, they just ask politely.
Immediate Remediation: The First Hour
If you run self-hosted n8n, do these four things in order. Don't triage. Don't read the rest of this post first. Patch, pull offline, rotate, rebuild.
1. Patch to 1.121.0

For Docker deployments:

```bash
# Stop the running container
docker compose pull
docker compose down
# Update the image tag in your compose file to 1.121.0 or latest
# Then recreate
docker compose up -d
# Verify the running version
docker exec -it n8n n8n --version
# Expected: 1.121.0 or later
```

For npm-based installs:

```bash
npm install -g n8n@1.121.0
systemctl restart n8n  # or however your process manager runs it
```

For Kubernetes with the community Helm chart, bump `image.tag: 1.121.0`, `helm upgrade`, and confirm the new pod is running before deleting the old one.

To generate the replacement encryption key for step 3 below:

```bash
# New encryption key — this invalidates all existing stored credentials
openssl rand -hex 32
# Set N8N_ENCRYPTION_KEY to the new value, restart, and re-enter every credential
```
2. Pull the instance off the public internet
Patching closes this CVE. It does not close the class of bug, and Cyera's own disclosure timeline noted the researcher found adjacent issues while auditing. Any self-hosted AI workflow platform that lets end users wire LLMs to shell nodes belongs behind authentication — full stop. Put n8n behind one of: Cloudflare Access with SSO, Tailscale or WireGuard, an internal ALB with OIDC, or at minimum HTTP Basic auth at the reverse proxy layer (N8N_BASIC_AUTH_ACTIVE=true, N8N_BASIC_AUTH_USER, N8N_BASIC_AUTH_PASSWORD). Form Triggers that must be public should live on a separate, minimal instance with no credentials, no Execute Command node, and no shared encryption key.
3. Rotate every secret n8n could see
Assume the attacker already read your database. Rotate, in order: the N8N_ENCRYPTION_KEY, the session-cookie signing secret in ~/.n8n/config, and every n8n user password. Then rotate every secret stored as an n8n credential: OpenAI and Anthropic API keys, Slack/Discord/Telegram bot tokens, database passwords, SMTP passwords, Google service account JSON, webhook signing secrets. If you have no way to tell whether you were exploited pre-patch, the only safe assumption is that everything in the credentials table is burned. Also rotate anything the n8n process user could reach on the host — SSH keys in the user's home, .aws/credentials, .kube/config, mounted Docker socket, anything in /etc that was world-readable.
4. Exposure audit
Check your reverse proxy and load balancer logs for POST requests to /form/, /webhook/, and /rest/ endpoints between now and the earliest date your instance was internet-reachable. Look specifically for:

- POST bodies containing path traversal (`..`, `/etc/`, `.env`, `database.sqlite`, `config`)
- Unusual `Content-Type` headers on upload endpoints
- Newly created workflows you don't recognize, especially ones containing `Execute Command` or `Code` nodes
- Any workflow execution whose output contains shell prompts, passwd-style colon-separated rows, or base64 blobs you didn't put there

If you find any of those, treat it as a confirmed breach: rebuild the host from a known-good image, rotate everything downstream, and start an incident timeline.
Hardening Self-Hosted AI Workflow Platforms
Ni8mare is the latest entry in a pattern that includes Flowise's CVSS 10.0 RCE under active exploitation, Langflow's CVE-2025-3248 pre-auth RCE, and the LangChain and LangGraph CVEs we documented in our LangChain framework security audit. All of these tools solve the same problem — let non-engineers wire LLMs to real tools — and all of them inherit the same threat model: an LLM node is a privileged subprocess with file and shell access, wrapped in a friendly UI that was not designed with a hostile caller in mind.
Here is the hardening checklist I now run against every self-hosted AI workflow platform during client engagements. This complements the broader patterns in our guide on penetration testing AI systems, and aligns with the zero-trust architecture we broke down in the Microsoft ZT4AI explainer.
```bash
# n8n hardening — paste into your .env and restart
N8N_BASIC_AUTH_ACTIVE=true
N8N_BASIC_AUTH_USER=ops
N8N_BASIC_AUTH_PASSWORD=<rotate me>
N8N_ENCRYPTION_KEY=<32-byte hex, rotated on every redeploy>
N8N_SECURE_COOKIE=true
N8N_DISABLE_PRODUCTION_MAIN_PROCESS=true
NODES_EXCLUDE=["n8n-nodes-base.executeCommand","n8n-nodes-base.ssh","n8n-nodes-base.code","n8n-nodes-base.readBinaryFile","n8n-nodes-base.writeBinaryFile"]
N8N_DEFAULT_BINARY_DATA_MODE=filesystem
N8N_BINARY_DATA_STORAGE_PATH=/data/binary
N8N_USER_FOLDER=/data/n8n
```
Network boundary
- [ ] Instance is not directly reachable from the public internet. SSO, VPN, or mTLS sits in front of every endpoint including `/webhook`, `/form`, and `/rest`.
- [ ] Separate instance for public-facing Form Triggers — no credentials, no shell nodes, no shared encryption key with the internal instance.
- [ ] Reverse proxy enforces method and content-type allowlists on upload endpoints.
- [ ] Outbound network from the worker is restricted to the LLM provider, vector DB, and specific API endpoints the workflows need — nothing else.
Runtime isolation
- [ ] Worker runs as a non-root, dedicated UID. No bind-mounted Docker socket, no `--privileged`, no `/var/run/docker.sock`.
- [ ] Root filesystem is read-only. Only `~/.n8n` and `/tmp` are writable, and `/tmp` is a `tmpfs` with a size cap.
- [ ] Container runs with `--security-opt no-new-privileges`, a seccomp profile, and AppArmor or SELinux enforcement.
- [ ] Secrets live in an external secret store (Vault, AWS Secrets Manager, Doppler) and are injected at runtime — not stored in n8n credentials where a single key decrypts them all.
Node allowlisting
The example environment config at the top of this section shows these exclusions in place via NODES_EXCLUDE.
- [ ] Dangerous node types are disabled via `NODES_EXCLUDE` unless an explicit workflow needs them. At minimum block: `n8n-nodes-base.executeCommand`, `n8n-nodes-base.ssh`, `n8n-nodes-base.code`, `n8n-nodes-base.readBinaryFile`, `n8n-nodes-base.writeBinaryFile`.
- [ ] Code nodes (JavaScript/Python) are disabled in any environment where non-trusted users can author workflows.
- [ ] LLM nodes have file and tool access explicitly scoped via allowlist — not "any file under the n8n user's home."
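The allowlist check is easy to automate against exported workflow JSON. A sketch, assuming the standard n8n export shape (a top-level `nodes` array where each node carries a `type` field); the blocked set mirrors the NODES_EXCLUDE list above.

```python
# Audit exported workflow JSON for node types that should be excluded.
BLOCKED = {
    "n8n-nodes-base.executeCommand",
    "n8n-nodes-base.ssh",
    "n8n-nodes-base.code",
    "n8n-nodes-base.readBinaryFile",
    "n8n-nodes-base.writeBinaryFile",
}

def blocked_nodes(workflow: dict) -> list[str]:
    """Return the names of nodes in an exported workflow using blocked types."""
    return [n.get("name", "?") for n in workflow.get("nodes", [])
            if n.get("type") in BLOCKED]

# Sample export, trimmed to the fields the audit needs.
wf = {"nodes": [
    {"name": "Form Trigger", "type": "n8n-nodes-base.formTrigger"},
    {"name": "Run shell", "type": "n8n-nodes-base.executeCommand"},
]}
```

Run this over every workflow export in CI or a nightly cron; any non-empty result on an instance that is supposed to have NODES_EXCLUDE set means the exclusion isn't actually applied.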
LLM-specific controls
This is the part most hardening guides skip. An LLM node is not just another HTTP client — it is a capability boundary.

- [ ] Every LLM node that can read files has an explicit file allowlist. No `*` paths, no "whatever the Form Trigger hands me."
- [ ] LLM tool calls are logged with the full tool name, argument, and calling workflow. Anomalous paths (`..`, `/etc`, `.env`, `database.sqlite`) trigger alerts.
- [ ] Tool outputs that contain credential-shaped patterns (base64 blobs, `BEGIN RSA PRIVATE KEY`, `sk-`, `xoxb-`, JWTs) are redacted before being returned to the chat surface.
- [ ] LLM nodes do not share a trust boundary with shell-exec nodes in the same workflow unless the workflow is explicitly audited and non-public.

The last one is the lesson of Ni8mare in a single bullet. If an LLM node and an Execute Command node can be reached from the same unauthenticated entry point, the only thing standing between an attacker and root shell is a polite conversation.
Why LLM Nodes Are Uniquely Dangerous
I've written this in different shapes three times now — for LangChain, for NemoClaw, and for prompt injection defense — and every time the root cause is the same: LLM-powered nodes collapse trust boundaries that traditional software kept separate.
In a normal web app, a file-read bug is bad but bounded. You need a way to turn bytes into something you can read from outside. You need to get past the authentication wall. You need to escape quoting. The LLM node removes all three:

- Readability: ask the model to summarize `/etc/shadow`? It reads it out loud.
- Authentication: the model never asks who supplied `req.body.files`. The trust check happened upstream — once the file is in the context, the model treats it as legitimate input.
- Quoting: the exfiltration channel is plain chat output, so there is no encoding or escaping left to defeat.

Traditional AppSec tooling does not catch this. A WAF looking for `../../etc/passwd` in a query string is useless when the path traversal is in a multipart body that the parser mishandles. A SAST tool looking for `eval()` or string-interpolated SQL will never flag a YAML workflow file. The attack surface is workflow configuration, and the blast radius is everything the workflow can reach.
This is also why I keep telling clients that if your AI workflow platform is internet-reachable, it needs the same threat model you'd apply to an unauthenticated Jenkins or an unauthenticated Kubernetes dashboard — because functionally, that's what it is.
Pattern and Checklist
Ni8mare will not be the last CVE in this class. Flowise already has one under active exploitation. Langflow had one earlier this cycle. Dify, LibreChat, and every other "let users wire LLMs to tools" platform is sitting on the same design. Here is the pattern, and here is the shortest version of the checklist I'd staple to every deployment.
The pattern. A self-hosted AI workflow builder exposes a webhook or form endpoint. That endpoint is reachable without authentication by design, because "chat with your docs" and "submit this form to kick off an AI flow" are the headline use cases. The endpoint feeds a workflow that contains at least one LLM node and at least one privileged node — shell, filesystem, SSH, database, or HTTP-to-internal. An attacker who finds a single parsing or auth bug anywhere in the request path chains the LLM node (as an oracle) and the privileged node (as a weapon) into full compromise. The CVSS will be 9.8 to 10.0 every time.
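The pattern reduces to a triage rule you can run over any workflow inventory. A sketch: the node-type names are illustrative, not an exhaustive catalog, and the scoring thresholds are my judgment call rather than anything the platforms define.

```python
# Triage rule distilled from the pattern above: critical means one
# workflow chains a public entry point, an LLM node, and a privileged node.
PUBLIC_TRIGGERS = {"formTrigger", "webhook", "chatTrigger"}
LLM_NODES = {"openAi", "anthropic", "lmChatOllama", "agent"}
PRIVILEGED = {"executeCommand", "ssh", "code", "readBinaryFile"}

def risk(node_types: set[str]) -> str:
    """Score a workflow by which dangerous ingredients it combines."""
    public = bool(node_types & PUBLIC_TRIGGERS)
    llm = bool(node_types & LLM_NODES)
    priv = bool(node_types & PRIVILEGED)
    if public and llm and priv:
        return "critical"  # the full Ni8mare shape: pre-auth oracle + weapon
    if public and (llm or priv):
        return "high"
    if llm and priv:
        return "medium"    # internal-only, but one auth bug from critical
    return "low"
```

Anything scoring "critical" is the Ni8mare shape and needs to be split across trust boundaries today, not next sprint.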
The checklist. Patch within 24 hours of any disclosure in this class. Keep the builder UI and API off the public internet. Run public-facing triggers on a separate instance with no credentials and no shell nodes. Disable shell, SSH, and code nodes by default. Give every LLM node an explicit file and tool allowlist. Keep secrets in an external store, not the platform's single-key credential vault.
If you run n8n, Flowise, Langflow, or any similar tool in production with real data behind it, this is the point at which "we'll get to it next sprint" stops being an acceptable answer. Ni8mare took a researcher five months from discovery to public PoC. It will take attackers five minutes from PoC to first victim.
At Particula Tech, we've been running hardening reviews on self-hosted AI workflow platforms for the better part of a year, and the findings rhyme every time: public forms, shared encryption keys, Execute Command nodes one click away from a chat box. Patch CVE-2026-21858 today. Then fix the architecture that made it a 10.0.
Frequently Asked Questions
What is CVE-2026-21858 (Ni8mare), and why is it rated CVSS 10.0?

CVE-2026-21858, nicknamed Ni8mare, is an unauthenticated remote code execution vulnerability in n8n webhook and file-upload handlers affecting all versions prior to 1.121.0. It earns a CVSS score of 10.0 because it requires no authentication, no user interaction, and no special privileges — any attacker who can reach a publicly exposed n8n form component can chain it into arbitrary file read, credential theft, admin session forgery, and shell command execution on the host. It was disclosed by Cyera Research Labs (researcher Dor Attias) on April 9, 2026 and patched in n8n 1.121.0 the same week.



