CVE-2026-34070 Overview
CVE-2026-34070 is a path traversal vulnerability affecting LangChain, a popular framework for building agents and LLM-powered applications. The vulnerability exists in multiple functions within langchain_core.prompts.loading that read files from paths embedded in deserialized configuration dictionaries without properly validating against directory traversal sequences or absolute path injection attacks.
When an application passes user-influenced prompt configurations to load_prompt() or load_prompt_from_config(), an attacker can read arbitrary files on the host filesystem. The only constraint is a file-extension check (.txt for templates, .json/.yaml for examples), which still permits extraction of sensitive configuration files, credentials, and other critical data.
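The vulnerable pattern can be illustrated with a standalone sketch. This is not LangChain's actual code, and the `template_path` config key and `load_template` function are illustrative assumptions; the point is that an extension check alone does nothing to stop traversal:

```python
import os
import tempfile

def load_template(config: dict, base_dir: str) -> str:
    """Illustrative sketch of the vulnerable pattern: a path taken from a
    deserialized config is checked only by extension, never for traversal."""
    path = config["template_path"]  # attacker-influenced value
    if not path.endswith(".txt"):
        raise ValueError("only .txt templates are allowed")
    # os.path.join honors '../' segments verbatim, and discards base_dir
    # entirely if `path` is absolute -- both escape the prompt directory.
    with open(os.path.join(base_dir, path)) as f:
        return f.read()

# Demo: a "secret" file outside the prompt directory is readable anyway.
prompt_dir = tempfile.mkdtemp()
secret_dir = tempfile.mkdtemp()
with open(os.path.join(secret_dir, "creds.txt"), "w") as f:
    f.write("API_KEY=example-secret")

# Traversal payload passes the .txt extension check but escapes prompt_dir.
payload = {"template_path": os.path.join("..", os.path.basename(secret_dir), "creds.txt")}
leaked = load_template(payload, base_dir=prompt_dir)
```

Because both temporary directories share a parent, a single `../` segment is enough here; in a real deployment the payload would use as many `../` segments as needed to reach the target.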
Critical Impact
Attackers can read arbitrary files from the host filesystem through path traversal attacks, potentially exposing sensitive credentials, API keys, and configuration data used in LLM-powered applications.
Affected Products
- LangChain (langchain-core) versions prior to 1.2.22
- Applications using langchain_core.prompts.loading module
- Systems that pass user-influenced configurations to prompt loading functions
Discovery Timeline
- 2026-03-31 - CVE-2026-34070 published to NVD
- 2026-04-02 - Last updated in NVD database
Technical Details for CVE-2026-34070
Vulnerability Analysis
This vulnerability stems from improper input validation in LangChain's prompt loading functionality (CWE-22: Path Traversal). The affected functions within langchain_core.prompts.loading accept file paths from deserialized configuration dictionaries and use them to read file contents without adequate security checks.
While the implementation includes file extension validation (restricting reads to .txt, .json, and .yaml files), it fails to sanitize path components that could allow directory traversal. An attacker can craft malicious configuration inputs containing sequences like ../ or absolute paths to escape the intended directory context and access files elsewhere on the filesystem.
The network-accessible nature of this vulnerability means it can be exploited remotely without authentication when applications expose prompt loading functionality to user inputs. The confidentiality impact is significant as attackers can potentially access sensitive system files, application configurations, environment variables, and credentials stored in supported file formats.
Root Cause
The root cause is insufficient path validation in the prompt loading functions. The code reads file paths from configuration dictionaries that may originate from untrusted sources but fails to perform path canonicalization or directory containment checks. While extension-based filtering exists, it does not prevent traversal to sensitive files that carry a permitted extension (e.g., a symlink named passwd.txt pointing at /etc/passwd, or application secrets stored in a file like secrets.json).
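A minimal sketch of the kind of canonicalization-plus-containment check the root cause calls for (the `safe_resolve` name is illustrative, not LangChain's API; `Path.is_relative_to` requires Python 3.9+):

```python
from pathlib import Path

def safe_resolve(base_dir: str, user_path: str) -> Path:
    """Resolve user_path against base_dir and refuse anything that escapes it.

    Path.resolve() canonicalizes '..' segments and follows symlinks, so a
    permitted-extension symlink pointing outside base_dir is also rejected.
    """
    base = Path(base_dir).resolve()
    # If user_path is absolute, the '/' operator discards `base` entirely;
    # the containment check below then rejects it, covering absolute injection.
    candidate = (base / user_path).resolve()
    if not candidate.is_relative_to(base):
        raise ValueError(f"path escapes {base}: {user_path!r}")
    return candidate
```

With this in place, `safe_resolve("/app/prompts", "../../etc/passwd")` and `safe_resolve("/app/prompts", "/etc/passwd")` both raise, while `safe_resolve("/app/prompts", "greeting.txt")` returns the contained path.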
Attack Vector
The attack requires an application that passes user-controlled or user-influenced prompt configurations to the vulnerable load_prompt() or load_prompt_from_config() functions. An attacker can inject malicious path values in the configuration dictionary, using directory traversal sequences to escape intended directories. For example, a template path containing ../../../../etc/sensitive.txt could read files outside the application's prompt directory.
The vulnerability is particularly dangerous in multi-tenant LLM applications, API services that accept prompt configurations, or any system where users can influence the prompt loading process. Successful exploitation results in arbitrary file read, which can lead to exposure of API keys, database credentials, environment configurations, and other sensitive data.
Detection Methods for CVE-2026-34070
Indicators of Compromise
- Unusual file access patterns in application logs involving prompt template directories
- Log entries showing file paths containing directory traversal sequences (../, ..\\)
- Application errors related to file not found for paths outside expected directories
- Unexpected access to sensitive configuration files from LangChain processes
Detection Strategies
- Monitor file system access logs for LangChain applications accessing files outside designated prompt directories
- Implement application-level logging for all load_prompt() and load_prompt_from_config() calls
- Deploy Web Application Firewall (WAF) rules to detect path traversal patterns in API requests
- Review application inputs for configuration dictionaries containing suspicious path values
Monitoring Recommendations
- Enable verbose logging for file I/O operations in LangChain applications
- Set up alerts for access attempts to sensitive directories from LLM application processes
- Monitor for error patterns indicating blocked file access attempts
- Implement file integrity monitoring on sensitive configuration files
How to Mitigate CVE-2026-34070
Immediate Actions Required
- Upgrade LangChain to version 1.2.22 or later immediately
- Audit applications using langchain_core.prompts.loading for user-influenced inputs
- Implement input validation on any user-controlled prompt configurations
- Restrict file system permissions for LangChain application processes
Patch Information
The vulnerability has been patched in LangChain version 1.2.22. The fix implements proper path validation to prevent directory traversal and absolute path injection attacks. Users should update their installations using pip:
pip install --upgrade "langchain-core>=1.2.22"
For more details, refer to the GitHub Security Advisory and the patch commit.
Workarounds
- Avoid passing user-influenced configurations to prompt loading functions
- Implement allowlist-based path validation before calling load_prompt() functions
- Use chroot or container isolation to limit file system access for LangChain applications
- Apply strict input sanitization to remove directory traversal sequences from any user input
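The allowlist workaround can be sketched as a thin wrapper in which users select templates by logical name and never supply a path at all (`ALLOWED_TEMPLATES` and `resolve_template` are illustrative names, not part of LangChain):

```python
import os

# Users pick a logical name; the name-to-filename mapping is server-controlled,
# so no user-supplied string ever reaches the filesystem as a path.
ALLOWED_TEMPLATES = {
    "greeting": "greeting.txt",
    "summary": "summary.txt",
}

def resolve_template(name: str, base_dir: str) -> str:
    """Map a logical template name to a path inside base_dir, or fail."""
    try:
        filename = ALLOWED_TEMPLATES[name]
    except KeyError:
        raise ValueError(f"unknown template: {name!r}") from None
    return os.path.join(base_dir, filename)
```

The resulting path can then be handed to the prompt loader; a traversal input such as "../../secrets" simply fails the dictionary lookup.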
# Verify installed version and upgrade if needed
pip show langchain-core | grep Version
# (quotes prevent the shell from treating ">=" as output redirection)
pip install --upgrade "langchain-core>=1.2.22"
# For containerized deployments, rebuild with the patched version
# Example Dockerfile modification:
# RUN pip install "langchain-core>=1.2.22"
Disclaimer: This content was generated using AI. While we strive for accuracy, please verify critical information with official sources.


