CVE-2023-36281 Overview
CVE-2023-36281 is a critical remote code execution vulnerability affecting LangChain version 0.0.171. The flaw exists in the load_prompt function, which improperly handles JSON file inputs containing malicious payloads. An attacker can craft a specially designed JSON file that leverages Python's __subclasses__ mechanism or template injection to achieve arbitrary code execution on systems running vulnerable versions of LangChain.
LangChain is a popular framework for developing applications powered by large language models (LLMs). The widespread adoption of LangChain in AI/ML pipelines makes this vulnerability particularly concerning, as exploitation could lead to complete system compromise in environments processing untrusted prompt files.
Critical Impact
Remote attackers can execute arbitrary code without authentication by supplying a malicious JSON file to the load_prompt function, potentially leading to full system compromise in AI/ML application environments.
Affected Products
- LangChain version 0.0.171 (and other releases prior to the 0.0.312 fix)
- Applications using load_prompt functionality with untrusted JSON inputs
- AI/ML pipelines that dynamically load prompt configurations from external sources
Discovery Timeline
- 2023-08-22 - CVE-2023-36281 published to NVD
- 2024-11-21 - Last updated in NVD database
Technical Details for CVE-2023-36281
Vulnerability Analysis
This vulnerability is classified as Code Injection (CWE-94) and allows remote code execution through the improper handling of JSON prompt files. The load_prompt function in LangChain version 0.0.171 fails to adequately sanitize and validate input data before processing, enabling attackers to inject malicious Python code through carefully crafted JSON payloads.
The vulnerability can be exploited through two primary mechanisms: abuse of Python's __subclasses__ introspection feature or template injection. Both approaches allow an attacker to escape the intended context of prompt loading and execute arbitrary Python code with the same privileges as the LangChain application.
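To make the attack surface concrete, the sketch below shows the general shape of a prompt file that abuses the jinja2 template path. It is illustrative only: the exact serialization schema may differ between LangChain releases, the file name is hypothetical, and the template expression is deliberately limited to the introspection pattern rather than a full exploit chain.
import json

# Illustrative malicious prompt configuration (hypothetical file name).
# The jinja2 expression below only demonstrates the __class__/__subclasses__
# introspection pattern; a real payload would chain further into os/subprocess.
malicious_prompt = {
    "_type": "prompt",
    "input_variables": [],
    "template_format": "jinja2",
    "template": "{{ ''.__class__.__mro__[1].__subclasses__() }}",
}

with open("malicious_prompt.json", "w") as f:
    json.dump(malicious_prompt, f)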
The network-accessible nature of this vulnerability, combined with the lack of authentication requirements, makes it trivially exploitable in environments where LangChain processes prompt files from untrusted sources. Successful exploitation grants attackers the ability to read sensitive data, modify application behavior, or establish persistent access to compromised systems.
Root Cause
The root cause of CVE-2023-36281 lies in insufficient input validation within the load_prompt function. The function processes JSON files containing prompt configurations but does not properly sanitize inputs that could contain Python code injection payloads. Specifically, the deserialization process allows access to dangerous Python introspection features like __subclasses__ which can be abused to instantiate arbitrary classes and execute malicious code. Additionally, template rendering within the prompt loading mechanism lacks proper sandboxing, allowing template injection attacks.
Attack Vector
The attack vector for CVE-2023-36281 is network-based, requiring no user interaction or authentication. A typical exploitation sequence looks like this:
- The attacker crafts a malicious JSON prompt file containing either a __subclasses__-based payload or template injection syntax
- The file is delivered to a vulnerable LangChain application through any mechanism that triggers load_prompt processing
- The vulnerable function processes the JSON without proper validation, executing the embedded malicious code
- The attacker gains code execution with the privileges of the LangChain application process
The vulnerability exploits Python's dynamic nature and the trust placed in JSON prompt configuration files. Since many LangChain deployments may load prompts from external sources, APIs, or user-provided inputs, the attack surface can be significant in real-world environments.
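A minimal sketch of the risky usage pattern is shown below, assuming a vulnerable release (langchain==0.0.171) and a hypothetical upload location; depending on the payload, execution can occur when the file is deserialized or when the template is rendered.
from langchain.prompts import load_prompt

# Attacker-controlled content ends up in a path the application later loads
# (hypothetical upload location).
untrusted_path = "/uploads/user_submitted_prompt.json"

# On affected versions this call (or the subsequent render) can execute
# attacker-supplied code with the application's privileges.
prompt = load_prompt(untrusted_path)
print(prompt.format())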
For detailed technical information about this vulnerability, refer to the GitHub Issue #4394 and the Aisec Article on LangChain.
Detection Methods for CVE-2023-36281
Indicators of Compromise
- Unexpected JSON files containing __subclasses__, __globals__, __builtins__, or similar Python introspection patterns being processed by LangChain applications
- Anomalous process spawning or network connections originating from LangChain application processes
- Log entries showing errors or unusual behavior during prompt loading operations
- Presence of template injection patterns such as {{ }} or {% %} with system commands in prompt files (a simple scan for these patterns is sketched after this list)
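The indicator patterns above can be checked with a simple offline scan. The sketch below is a starting point, not a complete detector: the directory path and pattern list are assumptions, and legitimate jinja2 prompts will also match the {{ }} / {% %} patterns, so hits should be treated as review candidates rather than confirmed compromises.
import re
from pathlib import Path

# Non-exhaustive list of suspicious introspection and template patterns.
SUSPICIOUS = [
    r"__subclasses__", r"__globals__", r"__builtins__", r"__mro__",
    r"\{\{.*\}\}", r"\{%.*%\}",
]
pattern = re.compile("|".join(SUSPICIOUS), re.DOTALL)

def scan_prompt_dir(prompt_dir: str) -> list[Path]:
    """Return prompt files whose contents match any suspicious pattern."""
    flagged = []
    for path in Path(prompt_dir).rglob("*.json"):
        text = path.read_text(errors="ignore")
        if pattern.search(text):
            flagged.append(path)
    return flagged

if __name__ == "__main__":
    for hit in scan_prompt_dir("./prompts"):  # hypothetical prompt directory
        print(f"Suspicious prompt file: {hit}")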
Detection Strategies
- Implement file integrity monitoring on directories where LangChain prompt files are stored
- Deploy application-level logging to capture all load_prompt function calls and their input sources (a wrapper sketch follows this list)
- Use runtime application self-protection (RASP) solutions to detect code injection attempts
- Monitor for suspicious Python object access patterns in application logs
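For the application-level logging strategy above, one lightweight approach is to route all prompt loading through a single audited helper instead of calling load_prompt directly. The sketch below assumes the vulnerable-era import path (langchain.prompts.load_prompt) and an illustrative logger configuration.
import logging
from langchain.prompts import load_prompt

logger = logging.getLogger("prompt_audit")
logging.basicConfig(level=logging.INFO)

def audited_load_prompt(path: str):
    """Log every prompt load so input sources can be reviewed after the fact."""
    logger.info("load_prompt called: path=%s", path)
    try:
        return load_prompt(path)
    except Exception:
        logger.exception("load_prompt failed for %s", path)
        raise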
Monitoring Recommendations
- Enable verbose logging for LangChain applications to capture prompt loading activities
- Implement network traffic analysis to detect data exfiltration following potential exploitation
- Set up alerts for any process execution or file system modifications initiated by LangChain processes
- Review and audit all external sources from which prompt files are loaded
How to Mitigate CVE-2023-36281
Immediate Actions Required
- Upgrade LangChain to version 0.0.312 or later immediately
- Audit all applications using load_prompt functionality to identify potential exposure
- Restrict prompt file loading to trusted, validated sources only
- Implement network segmentation to limit the impact of potential exploitation
Patch Information
LangChain has released version 0.0.312 which addresses this vulnerability. Organizations should upgrade to this version or later as soon as possible. The patch can be obtained from the GitHub LangChain Release v0.0.312.
For environments where immediate patching is not possible, implement the workarounds described below while planning upgrade activities.
Workarounds
- Avoid using load_prompt with untrusted or user-supplied JSON files until patching is complete
- Implement strict input validation to reject JSON files containing suspicious patterns like __subclasses__ or template syntax (see the wrapper sketch after this list)
- Run LangChain applications in sandboxed environments with minimal privileges
- Use allowlisting to permit only known-safe prompt file sources
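The validation and allowlisting workarounds above can be combined in a wrapper that applications call instead of load_prompt directly. The sketch below is an interim measure under stated assumptions (a hypothetical trusted prompt directory and a non-exhaustive blocklist); it reduces exposure but is not a substitute for upgrading.
import re
from pathlib import Path
from langchain.prompts import load_prompt

ALLOWED_PROMPT_DIR = Path("/opt/app/prompts").resolve()  # hypothetical trusted directory
BLOCKLIST = re.compile(r"__subclasses__|__globals__|__builtins__|\{\{|\{%")

def safe_load_prompt(path: str):
    resolved = Path(path).resolve()
    # Allowlist: only load prompts that live under the trusted directory.
    if ALLOWED_PROMPT_DIR not in resolved.parents:
        raise ValueError(f"Prompt path outside allowlisted directory: {resolved}")
    # Blocklist: refuse files containing introspection or template syntax.
    if BLOCKLIST.search(resolved.read_text(errors="ignore")):
        raise ValueError(f"Suspicious content in prompt file: {resolved}")
    return load_prompt(str(resolved))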
# Upgrade LangChain to patched version
pip install --upgrade "langchain>=0.0.312"
# Verify installed version
pip show langchain | grep Version