CVE-2025-65106 Overview
CVE-2025-65106 is a template injection vulnerability in LangChain, a framework for building agents and large language model (LLM) applications. The flaw resides in the prompt template system, specifically ChatPromptTemplate and related classes. When an application accepts untrusted template strings (not just untrusted template variables), attackers can exploit template syntax to access Python object internals. The vulnerability affects LangChain versions 0.3.79 and prior, and 1.0.0 through 1.0.6; maintainers patched the issue in versions 0.3.80 and 1.0.7. The weakness is categorized as CWE-1336 (Improper Neutralization of Special Elements Used in a Template Engine).
Critical Impact
Attackers supplying crafted template strings can traverse Python object hierarchies, potentially exposing sensitive runtime data or escalating to broader code execution within LangChain-powered agents.
Affected Products
- LangChain versions 0.3.79 and prior
- LangChain versions 1.0.0 through 1.0.6
- Applications using ChatPromptTemplate and related prompt template classes with untrusted input
Discovery Timeline
- 2025-11-21 - CVE-2025-65106 published to NVD
- 2026-04-15 - Last updated in the NVD database
Technical Details for CVE-2025-65106
Vulnerability Analysis
LangChain's prompt template system renders template strings using Python's formatting machinery. When applications pass attacker-controlled strings as templates rather than as template variables, the formatter evaluates embedded syntax against caller-supplied objects. This evaluation exposes attribute access and indexing operations on Python objects passed into the renderer.
The vulnerability is reachable when developers build prompts dynamically from user input, configuration files, or third-party content. An attacker who controls the template body can craft expressions that walk object attributes, including dunder attributes such as __class__, __globals__, or __builtins__. This pattern enables disclosure of internal state and, depending on the runtime context, can be chained to read sensitive variables held by the application.
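The traversal pattern described above can be illustrated with plain Python str.format, the same substitution mechanism that format-style templates build on. The objects and attribute names below are hypothetical stand-ins for application state; this is not LangChain's internal code.

```python
# Hypothetical objects standing in for application state; not LangChain code.
class Secrets:
    api_key = "sk-example-not-real"

class Context:
    secrets = Secrets()

# Safe: user input arrives as a template *variable*. Braces inside the
# value are inert because the value is never re-parsed as a template.
safe_template = "Answer the question: {question}"
rendered_safe = safe_template.format(
    question="{ctx.secrets.api_key}", ctx=Context()
)
# rendered_safe contains the literal text "{ctx.secrets.api_key}".

# Unsafe: the user controls the template *body*, so format syntax is
# evaluated and can walk attributes on any object passed to format().
malicious_template = "Answer the question: {ctx.secrets.api_key}"
rendered_unsafe = malicious_template.format(question="hello", ctx=Context())
# rendered_unsafe now contains the secret value.
```

Dunder chains such as `__class__` or `__globals__` follow the same attribute-access path, which is how a crafted template body can reach far beyond the objects the developer intended to expose.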
Root Cause
The root cause is improper neutralization of special elements in the template engine. The prompt template implementation treated the template string itself as trusted, applying Python format-style substitution without restricting access to object internals. Variable-only sanitization is insufficient when the template body itself is attacker-controlled.
Attack Vector
Exploitation occurs over the network when an application accepts template strings from untrusted sources and passes them to LangChain's prompt builders. Common entry points include API endpoints, chatbot configuration interfaces, agent tool definitions, and stored prompt templates loaded from external storage. No authentication is required when the vulnerable endpoint is publicly reachable. See the GitHub Security Advisory GHSA-6qv9-48xg-fc7f for technical details on the vulnerable code paths.
Detection Methods for CVE-2025-65106
Indicators of Compromise
- Prompt inputs containing dunder attribute references such as {0.__class__}, __globals__, __subclasses__, or __builtins__.
- Anomalous error traces from LangChain prompt rendering paths referencing ChatPromptTemplate, PromptTemplate, or format methods.
- Outbound LLM payloads or logs containing strings that resemble Python object representations rather than natural-language prompts.
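The indicators above can be screened mechanically. The following is a minimal heuristic scanner; the pattern list is an illustrative starting point, not an exhaustive signature set, and will flag benign attribute-style braces as well.

```python
import re

# Dunder probes and {obj.attr}-style traversal inside braces.
SUSPICIOUS = re.compile(
    r"__(?:class|globals|builtins|subclasses|mro|init|import)__"
    r"|\{[^{}]*\.[^{}]*\}"
)

def looks_like_template_injection(prompt: str) -> bool:
    """Return True if a prompt string matches known injection indicators."""
    return bool(SUSPICIOUS.search(prompt))

print(looks_like_template_injection("What is our refund policy?"))    # False
print(looks_like_template_injection("{0.__class__.__subclasses__}"))  # True
```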
Detection Strategies
- Inspect application logs for prompt strings that include format specifiers, brace expressions, or attribute traversal patterns.
- Audit code paths that pass user-supplied data to ChatPromptTemplate.from_template, PromptTemplate.from_template, or equivalent constructors.
- Use software composition analysis to flag LangChain installations at versions 0.3.79 or earlier, and 1.0.0 through 1.0.6.
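The affected version ranges can be checked in a dependency audit script. This sketch assumes plain numeric X.Y.Z version strings; a production check should use a proper version parser (for example, the third-party packaging library) to handle pre-release suffixes.

```python
def is_vulnerable_langchain(version: str) -> bool:
    # Affected: 0.3.79 and prior, and 1.0.0 through 1.0.6
    # (patched in 0.3.80 and 1.0.7). Assumes a numeric X.Y.Z string.
    parts = tuple(int(p) for p in version.split("."))
    return parts <= (0, 3, 79) or (1, 0, 0) <= parts <= (1, 0, 6)

for v in ("0.3.79", "0.3.80", "1.0.6", "1.0.7"):
    print(v, is_vulnerable_langchain(v))
```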
Monitoring Recommendations
- Alert on requests to LLM-facing endpoints containing brace-delimited expressions or Python dunder attributes.
- Monitor for unusual ImportError, AttributeError, or KeyError exceptions originating from LangChain template rendering.
- Track upstream LLM provider request payloads for content that suggests successful introspection of application internals.
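One way to implement the exception-monitoring recommendation above is a simple log-line filter. The log format and pattern below are assumptions; adapt them to your application's logging configuration.

```python
import re

# Flags rendering exceptions that co-occur with LangChain template symbols.
RENDER_ERROR = re.compile(
    r"\b(ImportError|AttributeError|KeyError)\b"
    r".*\b(ChatPromptTemplate|PromptTemplate|format)\b"
)

def flag_suspicious_log_line(line: str) -> bool:
    return bool(RENDER_ERROR.search(line))

logs = [
    "ERROR AttributeError: no attribute '__globals__' in ChatPromptTemplate.format",
    "INFO request completed in 120ms",
]
for line in logs:
    if flag_suspicious_log_line(line):
        print("ALERT:", line)
```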
How to Mitigate CVE-2025-65106
Immediate Actions Required
- Upgrade LangChain to version 0.3.80 or 1.0.7 immediately across all environments using the framework.
- Audit application code to confirm that template strings are sourced only from trusted, developer-controlled locations.
- Treat any field that becomes a template body as a sensitive control surface and apply strict allowlisting.
Patch Information
The LangChain maintainers released fixes in versions 0.3.80 and 1.0.7. The patches are tracked in commits c4b6ba254e1a49ed91f2e268e6484011c540542a and fa7789d6c21222b85211755d822ef698d3b34e00. Refer to the GitHub Security Advisory GHSA-6qv9-48xg-fc7f for the official advisory.
Workarounds
- Refactor applications so user input is supplied only as template variables, never as the template string itself.
- Validate or reject any template input containing brace characters, format specifiers, or dunder attribute names before rendering.
- Isolate LangChain agents in sandboxed processes with minimal filesystem and environment variable exposure to reduce blast radius.
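The first two workarounds can be combined into a small input guard. Both helpers below are illustrative, not LangChain APIs: one rejects format-syntax tokens outright, the other neutralizes braces by doubling them so user text can be embedded literally if it must appear inside a template body.

```python
FORBIDDEN_TOKENS = ("{", "}", "__")

def validate_template_input(text: str) -> str:
    """Reject input that could be parsed as format-string syntax."""
    for token in FORBIDDEN_TOKENS:
        if token in text:
            raise ValueError(f"rejected: input contains {token!r}")
    return text

def escape_for_template(text: str) -> str:
    """Double braces so str.format treats them as literal characters."""
    return text.replace("{", "{{").replace("}", "}}")

# A doubled brace renders literally instead of opening a replacement field.
user_text = "{0.__class__}"
template_body = "Q: " + escape_for_template(user_text)
print(template_body.format("anything"))  # prints "Q: {0.__class__}"
```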
# Configuration example
pip install --upgrade "langchain>=1.0.7"
# Or for the 0.3.x branch
pip install --upgrade "langchain>=0.3.80,<1.0.0"