LangChain, LangGraph Flaws Expose Files, Secrets, Databases in Widely Used AI Frameworks
Briefly

"Each vulnerability exposes a different class of enterprise data: filesystem files, environment secrets, and conversation history, according to Cyera security researcher Vladimir Tokarev."
"CVE-2026-34070 is a path traversal vulnerability in LangChain that allows access to arbitrary files without any validation via its prompt-loading API."
"CVE-2025-68664 is a deserialization of untrusted data vulnerability in LangChain that leaks API keys and environment secrets by passing a data structure that tricks the application."
"CVE-2025-67644 is an SQL injection vulnerability in LangGraph that allows an attacker to manipulate SQL queries and run arbitrary SQL queries against the database."
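To illustrate the first class of flaw in general terms (this is a minimal sketch, not LangChain's actual prompt-loading code; the function and argument names are hypothetical), a file loader must confine resolved paths to its base directory, since an unvalidated name like `../../etc/passwd` would otherwise escape it:

```python
import os

def load_prompt(base_dir: str, name: str) -> str:
    # Resolve the requested path and verify it stays inside base_dir.
    # Without this check, a name containing "../" segments can reach
    # arbitrary files on the filesystem (the path traversal class
    # described above).
    base = os.path.realpath(base_dir)
    target = os.path.realpath(os.path.join(base, name))
    if os.path.commonpath([base, target]) != base:
        raise ValueError(f"path traversal attempt: {name!r}")
    with open(target) as f:
        return f.read()
```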
Three security vulnerabilities have been identified in LangChain and LangGraph that could expose sensitive enterprise data: a path traversal issue, a deserialization flaw that leaks API keys and environment secrets, and an SQL injection vulnerability. Each gives attackers a different route to sensitive information. Their CVSS scores range from 7.3 to 9.3, spanning high to critical severity. Because both frameworks see millions of downloads, the flaws are particularly concerning for enterprises that rely on them.
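The SQL injection class is conventionally prevented by parameterized queries, where user input is bound as data rather than spliced into the SQL string. A minimal sketch using Python's `sqlite3` (the table and column names here are illustrative, not LangGraph's actual schema):

```python
import sqlite3

def get_thread(conn: sqlite3.Connection, thread_id: str):
    # The "?" placeholder binds thread_id as a value, never as SQL
    # text, so input like "x' OR '1'='1" cannot alter the query
    # (the injection class described above).
    cur = conn.execute(
        "SELECT checkpoint FROM checkpoints WHERE thread_id = ?",
        (thread_id,),
    )
    return cur.fetchall()
```

By contrast, building the query with string formatting (`f"... WHERE thread_id = '{thread_id}'"`) would let a crafted `thread_id` rewrite the query and run arbitrary SQL.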
Read at The Hacker News