Three CVEs hit LangChain and LangGraph in a single week: path traversal (CVSS 7.5), unsafe deserialization, and SQL injection (CVSS 7.3)—affecting 52M+ combined weekly downloads. Patch immediately: upgrade langchain-core to >= 1.2.22 and langgraph-checkpoint-sqlite to >= 3.0.1. Then audit your entire AI framework stack: scan dependencies, validate all user inputs before they reach agent tools, sandbox file system access, and treat every LLM-generated query as untrusted.
On March 27, 2026, three CVEs dropped against LangChain and LangGraph within the same week—path traversal, unsafe deserialization, and SQL injection. Combined, these packages see over 52 million weekly downloads. When I saw the disclosures on The Hacker News and TechRadar that morning, I immediately audited two client deployments and found both were running vulnerable versions.
This wasn't a theoretical risk. One client's LangChain-powered document processing system had an internet-facing API that accepted file paths through an agent tool call. The path traversal CVE meant any user who could influence the LLM's tool selection could potentially read arbitrary files from the server—including the .env file with database credentials and API keys.
Here's what happened, how to patch it, and—more importantly—how to audit your entire AI framework stack so you're not blindsided by the next round of disclosures.
What Happened: Three CVEs in One Week
The disclosures came rapid-fire, reported by The Hacker News, TechRadar, and security research firm Cyata on March 27, 2026. Here's the breakdown:
LangChain-Core alone pulls 23 million weekly downloads. LangGraph's checkpoint packages add tens of millions more. The blast radius of these vulnerabilities is enormous—and many teams don't even realize they're running affected versions because these packages are transitive dependencies pulled in by higher-level frameworks.
CVE-2026-34070: Path Traversal (CVSS 7.5)
The path traversal vulnerability lives in langchain-core's document loading pipeline. When an LLM generates a tool call that includes a file path—say, to load a document for RAG retrieval—the framework passes that path to the file system without adequate validation. An attacker who can influence the LLM's output (through prompt injection or direct API manipulation) can craft paths like ../../etc/passwd or ../../app/.env to read arbitrary files. This is particularly dangerous in AI systems because the attack vector isn't a direct HTTP request—it's an LLM-mediated tool call. Traditional web application firewalls (WAFs) don't inspect tool call parameters generated by models. The path traversal payload never appears in the HTTP request body; it's constructed by the LLM and executed server-side.
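Traversal payloads survive naive checks because `..` segments are resolved by the file system, not by your string comparison. A minimal containment check (illustrative only, not langchain-core's actual patch; `ALLOWED_DIR` is a hypothetical base directory) resolves the path first, then verifies the prefix:

```python
import os

ALLOWED_DIR = "/app/documents"

def is_contained(user_path: str) -> bool:
    # Resolve ".." segments and symlinks BEFORE comparing prefixes
    resolved = os.path.realpath(os.path.join(ALLOWED_DIR, user_path))
    # Append os.sep so "/app/documents_evil" can't pass the check
    return resolved.startswith(os.path.realpath(ALLOWED_DIR) + os.sep)

print(is_contained("report.pdf"))        # True
print(is_contained("../../etc/passwd"))  # False
```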
CVE-2025-68664: Unsafe Deserialization
The deserialization vulnerability allows remote code execution through crafted serialized Python objects. LangChain uses serialization for various internal operations—caching, state persistence, and chain configuration. If an attacker can inject a malicious serialized object into any of these pathways, they achieve arbitrary code execution on the server. Deserialization attacks are a well-known class of vulnerability in Python applications (think pickle.loads on untrusted data), but AI frameworks introduce new entry points. An LLM might receive a base64-encoded payload through a conversation, and if that payload flows into a deserialization path, the attacker gets code execution without ever directly interacting with the server.
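The mechanics are easy to demonstrate. `pickle` lets any object nominate a callable to be invoked on load via `__reduce__`, which is why deserializing attacker-controlled bytes is equivalent to running attacker-controlled code. This is the classic textbook illustration of the vulnerability class, not the specific CVE payload:

```python
import os
import pickle

class Exploit:
    def __reduce__(self):
        # On pickle.loads, pickle will call os.system("echo pwned")
        return (os.system, ("echo pwned",))

payload = pickle.dumps(Exploit())  # the bytes an attacker would hand you

# The moment untrusted bytes reach pickle.loads, the command executes:
pickle.loads(payload)  # prints "pwned"
```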
CVE-2025-67644: SQL Injection in LangGraph Checkpoints (CVSS 7.3)
LangGraph's SQLite checkpoint store—used by many teams for persisting agent state between runs—was vulnerable to SQL injection through the thread_id parameter. Thread IDs are often derived from user sessions or conversation identifiers, meaning an attacker could craft a malicious thread ID that executes arbitrary SQL when the checkpoint store processes it. The impact goes beyond data exfiltration. LangGraph checkpoints contain the full state of agent execution—tool call history, intermediate results, and potentially sensitive data from previous interactions. A SQL injection here could expose every conversation a particular agent instance has ever processed.
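To see why such a lookup is exploitable, here's a self-contained sqlite3 sketch (a hypothetical schema, not LangGraph's actual table layout) in which a crafted thread ID matches every row under string interpolation but fails harmlessly under parameterization:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checkpoints (thread_id TEXT, state TEXT)")
conn.executemany(
    "INSERT INTO checkpoints VALUES (?, ?)",
    [("alice", "secret-a"), ("bob", "secret-b")],
)

# A thread ID crafted to match every row, not just the caller's own
malicious = "x' OR '1'='1"

# ❌ Interpolated: the injected OR clause leaks every checkpoint
leaked = conn.execute(
    f"SELECT state FROM checkpoints WHERE thread_id = '{malicious}'"
).fetchall()
print(leaked)  # [('secret-a',), ('secret-b',)]

# ✅ Parameterized: the payload is treated as a literal string; no rows match
safe = conn.execute(
    "SELECT state FROM checkpoints WHERE thread_id = ?", (malicious,)
).fetchall()
print(safe)  # []
```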
| CVE | Type | CVSS | Affected Package | Patched Version |
|---|---|---|---|---|
| CVE-2026-34070 | Path traversal | 7.5 | langchain-core | >= 1.2.22 |
| CVE-2025-68664 | Unsafe deserialization | — | langchain-core | >= 1.2.22 |
| CVE-2025-67644 | SQL injection | 7.3 | langgraph-checkpoint-sqlite | >= 3.0.1 |
Immediate Remediation: Patch Now
If you're running LangChain or LangGraph in any environment—development, staging, or production—patch immediately:
```bash
# Upgrade to patched versions (quote the specifiers so the
# shell doesn't interpret >= as a redirect)
pip install --upgrade "langchain-core>=1.2.22" "langgraph-checkpoint-sqlite>=3.0.1"

# Verify installed versions
pip show langchain-core langgraph-checkpoint-sqlite
```
If you're using a requirements.txt or pyproject.toml, update your version pins:
```toml
# pyproject.toml
[project]
dependencies = [
    "langchain-core>=1.2.22",
    "langgraph-checkpoint-sqlite>=3.0.1",
]
```

For Docker deployments, rebuild your images. For serverless deployments (AWS Lambda, Cloud Functions), redeploy with the updated dependencies. Don't assume your package manager will auto-update—most production deployments pin versions explicitly.
Check for transitive dependencies too. If you're using langchain, langchain-openai, langchain-community, or any LangChain integration package, verify that they're pulling in langchain-core >= 1.2.22:
```bash
# Check the dependency tree for the langchain-core version
pip install pipdeptree
pipdeptree --packages langchain-core
```
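One pitfall when verifying you're on a patched release: version strings compare lexicographically, so "1.2.9" would wrongly sort above "1.2.22". Compare numeric components instead (or, in real code, use the `packaging` library); a minimal sketch:

```python
def parse_version(v: str) -> tuple[int, ...]:
    # Minimal numeric comparison; prefer packaging.version in real code
    return tuple(int(part) for part in v.split("."))

# String comparison gets this wrong; numeric comparison gets it right
print("1.2.22" > "1.2.9")                                # False (wrong!)
print(parse_version("1.2.22") > parse_version("1.2.9"))  # True
```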
How to Audit Your AI Framework Stack
Patching these three CVEs is the minimum. The deeper problem is that AI frameworks introduce attack surfaces that most security teams aren't scanning for. Here's a systematic audit process we use at Particula Tech when we assess client deployments.
Step 1: Dependency Scanning
Run automated vulnerability scanning across your entire dependency tree:

```bash
# Using pip-audit (maintained by PyPA)
pip install pip-audit
pip-audit

# Using safety (from SafetyCLI)
pip install safety
safety check
```

Don't stop at direct dependencies. AI framework stacks are deeply nested—langchain depends on langchain-core, which depends on langchain-text-splitters, which depends on shared utilities. A vulnerability anywhere in the tree is a vulnerability in your application.

Set up automated scanning in CI/CD. GitHub's Dependabot, Snyk, and GitLab's dependency scanning all support Python. Configure them to block merges when high-severity CVEs are detected:

```yaml
# .github/workflows/security-scan.yml
name: Security Scan
on: [push, pull_request]
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install pip-audit
      - run: pip-audit --strict --desc
```

Step 2: Audit File System Access
The path traversal CVE (CVE-2026-34070) exposes a systemic issue: AI frameworks often give LLMs direct access to file system operations without adequate sandboxing. Audit every document loader, file reader, and output writer in your LangChain deployment:

```python
# ❌ Dangerous: LLM-controlled file path with no validation
from langchain_community.document_loaders import TextLoader

def load_document(file_path: str):
    # file_path comes from an LLM tool call — attacker-controlled
    loader = TextLoader(file_path)
    return loader.load()

# ✅ Safer: Validate and restrict to an allowed directory
import os

ALLOWED_DIR = "/app/documents"

def load_document(file_path: str):
    resolved = os.path.realpath(file_path)
    # Append os.sep so a sibling like "/app/documents_evil" can't pass
    if not resolved.startswith(os.path.realpath(ALLOWED_DIR) + os.sep):
        raise ValueError(f"Path {file_path} is outside allowed directory")
    loader = TextLoader(resolved)
    return loader.load()
```

Apply this pattern everywhere your AI framework touches the file system. Check for:

- Document loaders (PDF, CSV, text, HTML)
- Output file writers (reports, exports)
- Temporary file creation (caching, intermediate processing)
- Model artifact loading (custom model paths)

Step 3: Audit Database Interactions
The SQL injection CVE (CVE-2025-67644) hit LangGraph's checkpoint store, but the same pattern can exist anywhere your AI framework interacts with a database. Check every query that uses values derived from user input, LLM output, or session identifiers:

```python
# ❌ Dangerous: String interpolation in SQL
def get_checkpoint(thread_id: str):
    cursor.execute(f"SELECT * FROM checkpoints WHERE thread_id = '{thread_id}'")

# ✅ Safe: Parameterized query
def get_checkpoint(thread_id: str):
    cursor.execute("SELECT * FROM checkpoints WHERE thread_id = ?", (thread_id,))
```

This applies to every database interaction in your stack—not just checkpoint stores. Vector database queries, conversation history stores, user session tables, and analytics logging can all be vulnerable if they incorporate LLM-generated values without parameterization.

Step 4: Audit Serialization Pathways
The deserialization CVE (CVE-2025-68664) is part of a broader class of attacks that are especially dangerous in AI systems. Map every point where your application deserializes data:

- Cache stores: Redis, disk caches, or memory caches that serialize/deserialize LangChain objects
- State persistence: Agent state, conversation history, or chain configurations saved to disk or database
- Inter-service communication: Messages between microservices that serialize Python objects
- User-provided data: Any pathway where user input could be interpreted as a serialized object

The fix is straightforward: never deserialize untrusted data with pickle. Use JSON or other safe serialization formats for any data that crosses a trust boundary:

```python
# ❌ Dangerous: pickle on untrusted data
import pickle
state = pickle.loads(cached_data)

# ✅ Safe: JSON with explicit schema validation
from pydantic import BaseModel

class AgentState(BaseModel):
    thread_id: str
    messages: list[dict]
    metadata: dict

state = AgentState.model_validate_json(cached_data)
```

Step 5: Input Validation at the Agent Boundary
The common thread across all three CVEs is that AI frameworks trust inputs they shouldn't. In traditional web applications, you validate HTTP request parameters. In AI applications, you also need to validate the outputs of LLM tool calls before executing them.

Think of it this way: every tool call your LLM generates is an untrusted input. The LLM might be following legitimate instructions, or it might be executing a prompt injection attack. Your framework can't tell the difference—so validate everything:

```python
from langchain_core.tools import tool
from pydantic import Field
import re

@tool
def query_database(
    table_name: str = Field(description="Table to query"),
    filter_value: str = Field(description="Filter value"),
):
    """Query a specific database table with a filter."""
    # Validate the table name against an allowlist
    allowed_tables = {"customers", "orders", "products"}
    if table_name not in allowed_tables:
        raise ValueError(f"Table {table_name} not in allowed list")
    # Reject filter values containing SQL metacharacters
    if re.search(r"[;'\"-]", filter_value):
        raise ValueError("Invalid characters in filter value")
    # Table name is allowlisted; the value goes through a parameterized query
    return db.execute(
        f"SELECT * FROM {table_name} WHERE id = ?",
        (filter_value,),
    )
```
Why AI Frameworks Are Uniquely Vulnerable
These CVEs aren't random bugs—they reveal a structural problem with how AI frameworks are built. Traditional web frameworks evolved over two decades of security research. Django, Rails, and Express all have battle-tested input validation, parameterized queries, and sandboxing built into their core. AI frameworks are younger, moving faster, and solving a fundamentally different problem.
The core issue is the LLM-as-intermediary threat model. In a traditional application, the server processes requests from a known client (browser, mobile app, API consumer). The trust boundary is clear: validate everything from the client, trust internal code. In an AI application, the LLM sits between the user and your backend tools. The LLM processes natural language—which can contain prompt injection payloads—and generates structured tool calls that your framework executes.
This creates a trust gap. Your framework receives tool calls from the LLM and treats them as internal, trusted operations. But the LLM's outputs are influenced by untrusted user input. The path traversal attack works precisely because the document loader trusts the file path from the LLM. The SQL injection works because the checkpoint store trusts the thread ID. The deserialization attack works because the cache trusts the serialized object.
The vLLM project experienced a similar pattern—GitHub issue #34449 documented how untrusted inputs flowing through model-serving infrastructure could compromise the entire pipeline. This isn't a LangChain-specific problem. It's an industry-wide architectural gap.
For a deeper look at how AI systems create fundamentally different attack surfaces than traditional software, read our guide on penetration testing AI systems—the techniques for finding these vulnerabilities require specialized approaches that go beyond standard AppSec tooling.
Security Checklist for AI Framework Deployments
Use this checklist for any production deployment of LangChain, LangGraph, LlamaIndex, or similar AI frameworks:
Dependency Management
- [ ] All framework packages pinned to patched versions
- [ ] Automated vulnerability scanning in CI/CD pipeline
- [ ] Dependency tree audited for transitive vulnerabilities
- [ ] Update cadence established (weekly vulnerability checks minimum)
Input Validation
- [ ] All LLM tool call parameters validated before execution
- [ ] File paths restricted to allowlisted directories
- [ ] Database queries use parameterized statements exclusively
- [ ] User-provided data sanitized before reaching the LLM context
Sandboxing
- [ ] File system access restricted via OS-level controls (chroot, containers)
- [ ] Network access limited to required endpoints only
- [ ] Database credentials scoped to minimum required permissions
- [ ] Agent execution runs in isolated processes or containers
Serialization
- [ ] No `pickle.loads` on untrusted data anywhere in the stack
- [ ] Cache stores use JSON or other safe formats
- [ ] State persistence validated against explicit schemas
- [ ] Inter-service messages use typed, validated formats
Monitoring
- [ ] Anomalous tool call patterns trigger alerts
- [ ] File access outside expected directories logged and flagged
- [ ] Database query patterns monitored for injection signatures
- [ ] Failed validation attempts tracked and correlated
This checklist complements the broader security practices we cover in our guide on securing AI systems handling sensitive data. If you're handling PII, financial data, or healthcare records, you'll need additional controls beyond framework-level security.
The Broader Pattern: Supply Chain Risk in AI
These LangChain CVEs are part of a larger trend. In January 2026, we documented how OpenClaw's explosive growth created the largest AI agent supply chain attack surface in history—824+ malicious skills, 17,500 exposed instances, and a coordinated campaign distributing infostealers. NVIDIA responded with NemoClaw's enterprise security framework specifically because the open-source AI ecosystem couldn't secure itself fast enough.
The LangChain CVEs fit the same pattern. AI frameworks are the new supply chain risk. They're deeply embedded in production systems, maintained by fast-moving open-source teams, and they handle untrusted inputs in ways their architectures weren't originally designed for. The 52 million weekly downloads for LangChain packages means these vulnerabilities had—and in many unpatched systems, still have—enormous reach.
The question isn't whether your AI framework will have vulnerabilities. It will. The question is whether you'll catch them before attackers do. Automated scanning catches the known CVEs. The audit process above catches the structural patterns that produce CVEs. Both are necessary.
What to Do Next
If you've patched and audited your stack, you're ahead of most teams. But AI framework security is a moving target. New CVEs will drop. New attack patterns will emerge. The teams that stay ahead are the ones that treat AI framework security as an ongoing practice—not a one-time patch.
Three concrete next steps:
1. Automate dependency scanning: `pip-audit` in a GitHub Action takes 10 minutes to configure and catches 90% of known dependency vulnerabilities.
2. Validate every tool parameter: each `@tool` function that touches the file system, database, or external API should validate its parameters against explicit allowlists—not just type-check them.
3. Test your agents like an attacker would: if your agent can be convinced to read `/etc/passwd` or execute `; DROP TABLE`, you have the same class of vulnerability that produced these three CVEs.

For organizations running LangChain or LangGraph in production with sensitive data, we offer targeted security audits that go beyond dependency scanning to test for LLM-mediated attacks, prompt injection vulnerabilities, and the structural patterns that produce CVEs. The audit that found my client's exposed .env file took four hours—the breach it prevented would have taken months to remediate.
Frequently Asked Questions
Quick answers to common questions about this topic
What vulnerabilities were disclosed in LangChain and LangGraph?
Three critical vulnerabilities were disclosed on March 27, 2026: CVE-2026-34070, a path traversal flaw in langchain-core with a CVSS score of 7.5 that allows attackers to read arbitrary files through manipulated document loader paths; CVE-2025-68664, an unsafe deserialization vulnerability that enables remote code execution through crafted serialized objects; and CVE-2025-67644, a SQL injection flaw in langgraph-checkpoint-sqlite with a CVSS score of 7.3 that exposes checkpoint data through malicious thread IDs.



