CVE-2026-40087 Overview
CVE-2026-40087 is an input validation vulnerability in LangChain, a popular framework for building agents and LLM-powered applications. The vulnerability exists in LangChain's f-string prompt-template validation mechanism, which was incomplete in two critical respects. Certain prompt template classes, specifically DictPromptTemplate and ImagePromptTemplate, accepted f-string templates containing attribute access or indexing expressions and evaluated those expressions during formatting without proper validation. Additionally, f-string validation based on parsed top-level field names failed to reject nested replacement fields inside format specifiers, allowing malicious expressions to be resolved at runtime.
Critical Impact
Attackers can bypass prompt template validation to inject malicious expressions that are evaluated during template formatting, potentially leading to information disclosure in LLM-powered applications.
Affected Products
- LangChain 0.3.x versions prior to 0.3.84
- LangChain 1.x versions prior to 1.2.28
- Applications using DictPromptTemplate or ImagePromptTemplate with f-string template format
Discovery Timeline
- April 9, 2026 - CVE-2026-40087 published to NVD
- April 9, 2026 - Last updated in NVD database
Technical Details for CVE-2026-40087
Vulnerability Analysis
This vulnerability is classified as CWE-1336 (Improper Neutralization of Special Elements Used in a Template Engine). The root issue lies in LangChain's inconsistent application of f-string template validation across different prompt template classes.
While PromptTemplate enforced attribute-access validation to prevent malicious template expressions, DictPromptTemplate and ImagePromptTemplate bypassed these security controls. These classes would accept templates containing potentially dangerous attribute access or indexing expressions (e.g., {obj.attribute} or {obj[key]}) and evaluate them during the formatting process.
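The risk can be illustrated with plain str.format, which resolves the same attribute-access syntax that these template classes accepted. The class and attribute names below are invented for the example:

```python
class _Secrets:
    api_key = "hypothetical-key-123"  # stand-in for sensitive internal state

class Context:
    secrets = _Secrets()

# An attacker-influenced template containing an attribute-access expression.
# Pre-patch, the affected template classes would format this without rejection.
template = "User prompt: {ctx.secrets.api_key}"
print(template.format(ctx=Context()))  # leaks the secret value
```

Nothing in the template names the secret directly; the dotted expression walks the object graph at format time.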
The second aspect of this vulnerability involves nested replacement fields within format specifiers. Python's f-string syntax allows nested fields like {value:{width}}, where the format specifier itself contains a replacement field. LangChain's validation only inspected top-level field names, failing to detect and reject nested expressions that Python's formatting engine would still attempt to resolve at runtime.
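The gap is easy to reproduce with the standard library: string.Formatter reports only the top-level field name, while str.format still resolves the field nested in the specifier.

```python
from string import Formatter

template = "{value:{width}}"

# Top-level parsing reports only 'value'; the nested {width} field hides
# inside the format specifier and escapes name-based validation.
fields = [fname for _, fname, _, _ in Formatter().parse(template) if fname]
print(fields)  # ['value']

# Yet Python's formatting engine still resolves the nested field at runtime.
print(template.format(value=42, width=8))  # '      42'
```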
Root Cause
The root cause is incomplete input validation in LangChain's prompt template handling. The security controls implemented in PromptTemplate were not consistently applied to DictPromptTemplate and ImagePromptTemplate. Additionally, the f-string parsing logic only examined top-level field names rather than recursively validating the entire template structure, including format specifiers.
Attack Vector
This vulnerability is exploitable over the network with low attack complexity and requires no authentication. An attacker who can influence prompt template content—such as through user-supplied input in an LLM application—could craft malicious templates that bypass validation. When these templates are formatted, the embedded expressions would be evaluated, potentially exposing sensitive object attributes or internal application state.
The attack vector involves:
- Identifying an application using vulnerable LangChain versions with DictPromptTemplate or ImagePromptTemplate
- Crafting a malicious f-string template containing attribute access expressions or nested replacement fields
- Submitting the template to bypass validation checks
- Triggering template formatting to evaluate the injected expressions
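The steps above can be sketched end to end with a naive validator that, like the flawed pre-patch check, inspects only top-level field names. The Doc class and its metadata key are invented for illustration:

```python
from string import Formatter

def toplevel_names(template: str) -> set:
    # Flawed check: keeps only the leading identifier of each replacement
    # field, mirroring validation based on parsed top-level field names.
    names = set()
    for _, fname, _, _ in Formatter().parse(template):
        if fname:
            names.add(fname.split(".")[0].split("[")[0])
    return names

malicious = "Summary: {doc.metadata[internal_path]}"
assert toplevel_names(malicious) == {"doc"}  # validation sees only 'doc'

class Doc:
    metadata = {"internal_path": "/etc/app/secrets.yaml"}

# Formatting then resolves the full expression and discloses internal state.
print(malicious.format(doc=Doc()))
```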
# Security patch in libs/core/langchain_core/prompts/dict.py
# Source: https://github.com/langchain-ai/langchain/commit/6bab0ba3c12328008ddca3e0d54ff5a6151cd27b
"""Dict prompt template."""
+from __future__ import annotations
+
import warnings
from functools import cached_property
from typing import Any, Literal, Optional
+from pydantic import model_validator
from typing_extensions import override
from langchain_core.load import dumpd
# Security patch in libs/core/langchain_core/prompts/image.py
# Source: https://github.com/langchain-ai/langchain/commit/6bab0ba3c12328008ddca3e0d54ff5a6151cd27b
from langchain_core.prompts.string import (
    DEFAULT_FORMATTER_MAPPING,
    PromptTemplateFormat,
+   get_template_variables,
)
from langchain_core.runnables import run_in_executor
Detection Methods for CVE-2026-40087
Indicators of Compromise
- Unusual prompt template strings containing attribute access patterns like {variable.attribute} or indexing expressions {variable[key]}
- Template formatting errors or unexpected attribute resolution in application logs
- Evidence of nested replacement fields in format specifiers within prompt templates
- Attempted access to internal object attributes through template expressions
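A simple heuristic scanner for these indicators might look like the following. The regex is an assumption for illustration, not an official detection rule, and will need tuning for real log data:

```python
import re

# Flags replacement fields containing attribute access or indexing, and
# fields whose format specifier opens a nested replacement field.
SUSPICIOUS = re.compile(r"\{[^{}:]*[.\[][^{}]*\}|\{[^{}]*:[^{}]*\{")

samples = [
    "Hello {name}",            # benign
    "Leak {obj.attr}",         # attribute access
    "Index {obj[key]}",        # indexing
    "Nested {value:{width}}",  # nested replacement field
]
for s in samples:
    print(f"{s!r}: {'SUSPICIOUS' if SUSPICIOUS.search(s) else 'ok'}")
```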
Detection Strategies
- Monitor application logs for template validation errors or unexpected formatting behavior in LangChain prompt processing
- Implement static code analysis to identify usage of DictPromptTemplate or ImagePromptTemplate with user-controllable input
- Review application code for instances where external input can influence prompt template content
- Deploy runtime monitoring to detect attribute access patterns in template strings that may indicate exploitation attempts
Monitoring Recommendations
- Enable verbose logging for LangChain prompt template processing to capture potential bypass attempts
- Implement input validation at the application layer before passing data to LangChain template classes
- Monitor for anomalous patterns in LLM request logs that could indicate template injection
- Set up alerts for unexpected exceptions during prompt template formatting operations
How to Mitigate CVE-2026-40087
Immediate Actions Required
- Upgrade LangChain to version 0.3.84 or later for the 0.3.x branch
- Upgrade LangChain to version 1.2.28 or later for the 1.x branch
- Audit applications using DictPromptTemplate or ImagePromptTemplate with f-string templates
- Review and restrict user input that can influence prompt template content
Patch Information
LangChain has released security patches in versions 0.3.84 and 1.2.28 that address this vulnerability. The fixes add model_validator decorators and proper template variable sanitization using the get_template_variables function to enforce consistent validation across all prompt template classes.
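The shape of the fix can be sketched as a construction-time check. This is an illustrative stand-in (the class name and error messages are invented), not LangChain's actual patched code, which hooks validation in via pydantic model_validator:

```python
from dataclasses import dataclass
from string import Formatter

@dataclass
class SafePromptTemplate:  # hypothetical stand-in for the patched classes
    template: str

    def __post_init__(self):
        # Validate at construction, before any formatting can occur.
        for _, fname, fspec, _ in Formatter().parse(self.template):
            if fname and ("." in fname or "[" in fname):
                raise ValueError(f"attribute/index access rejected: {fname!r}")
            if fspec and "{" in fspec:
                raise ValueError(f"nested replacement field rejected: {fspec!r}")

SafePromptTemplate("Hello {name}")  # accepted
for bad in ("Leak {obj.attr}", "Nested {value:{width}}"):
    try:
        SafePromptTemplate(bad)
    except ValueError as e:
        print("rejected:", e)
```

Rejecting a bad template at construction means the dangerous expression is never handed to Python's formatting engine at all.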
For detailed patch information, see the upstream fix commit referenced in the patch snippets above.
Workarounds
- Avoid using DictPromptTemplate or ImagePromptTemplate with f-string template format until patched versions can be deployed
- Implement strict input validation to reject template strings containing attribute access or indexing patterns before passing to LangChain
- Use allowlist-based validation for template variable names to prevent injection of malicious expressions
- Consider using alternative template formats that do not support attribute access syntax
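The allowlist workaround above can be sketched as a small pre-check run before any template reaches LangChain. ALLOWED and is_safe are hypothetical names for this example:

```python
from string import Formatter

ALLOWED = {"question", "context"}  # the variables the application expects

def is_safe(template: str) -> bool:
    # Accept only templates whose every field is a bare allowlisted name.
    for _, fname, fspec, _ in Formatter().parse(template):
        if fname is None:
            continue
        if fname not in ALLOWED:    # rejects 'obj.attr', 'obj[key]', unknowns
            return False
        if fspec and "{" in fspec:  # rejects nested replacement fields
            return False
    return True

print(is_safe("Q: {question}\nContext: {context}"))  # True
print(is_safe("Leak {question.__class__}"))          # False
```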
# Upgrade LangChain to patched versions
# (quote the specifier so the shell does not treat '>' as a redirect)
pip install --upgrade "langchain-core>=0.3.84,<1.0"  # For 0.3.x branch
# OR
pip install --upgrade "langchain-core>=1.2.28"  # For 1.x branch
# Verify installed version
pip show langchain-core | grep Version