CVE-2025-68664 Overview
A serialization injection vulnerability has been discovered in the dumps() and dumpd() functions of LangChain's langchain_core library. The flaw stems from improper handling of dictionaries containing the 'lc' key during serialization. Because LangChain uses the 'lc' key internally to mark serialized objects, user-controlled data containing this key structure is treated during deserialization as a legitimate LangChain object rather than as plain user data. This insecure deserialization flaw could allow attackers to inject malicious serialized objects that are instantiated during deserialization.
Critical Impact
Attackers can exploit this vulnerability by injecting malicious data with specially crafted 'lc' key structures, potentially leading to arbitrary class instantiation and unauthorized code execution in LLM-powered applications built with LangChain.
Affected Products
- langchain_core 0.x versions prior to 0.3.81
- langchain_core 1.x versions prior to 1.2.5
- Python applications using LangChain's serialization functions (dumps(), dumpd())
Discovery Timeline
- 2025-12-23 - CVE-2025-68664 published to NVD
- 2026-01-13 - Last updated in NVD database
Technical Details for CVE-2025-68664
Vulnerability Analysis
This insecure deserialization vulnerability exists in LangChain's serialization module, specifically within the dumps() and dumpd() functions located in libs/core/langchain_core/load/dump.py. The core issue is that these functions do not properly escape dictionaries containing 'lc' keys when serializing free-form dictionaries provided by users or external sources.
LangChain uses the 'lc' key as an internal marker to identify serialized LangChain objects. When the deserializer encounters this key structure, it attempts to reconstruct the corresponding LangChain class instance. The vulnerability occurs because there is no distinction between legitimately serialized LangChain objects and user-supplied data that happens to contain the same key structure. This confusion allows an attacker to craft malicious input that, when processed through the serialization/deserialization pipeline, could trick the application into instantiating arbitrary classes.
The network-accessible nature of this vulnerability means it can be exploited remotely without authentication, making it particularly dangerous for web-facing LLM applications and AI agents that process untrusted input data.
Root Cause
The root cause is the lack of input validation and escaping in the serialization functions. Prior to the patch, when user-controlled dictionaries were serialized, the functions did not distinguish between internal LangChain serialization markers and user data that coincidentally contained the same structure. This failure to implement an allowlist approach meant that any dictionary with an 'lc' key would be interpreted as a serialized LangChain object, regardless of its origin.
Attack Vector
An attacker can exploit this vulnerability by providing input data containing a malicious dictionary with an 'lc' key structure that mimics LangChain's internal serialization format. When this data flows through the dumps() or dumpd() functions and is subsequently deserialized using loads(), the malicious payload is treated as a legitimate serialized object. This could result in arbitrary class instantiation, potentially leading to remote code execution depending on the classes available in the application's execution context.
The attack is network-based, requiring no user interaction and no prior authentication, making it exploitable in automated pipelines and API endpoints that process external data.
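To make the attack shape concrete, the snippet below sketches what such a crafted dictionary could look like. It follows LangChain's documented serialization format ('lc', 'type', 'id', and 'kwargs' keys); the target class path and arguments are illustrative placeholders, not a working exploit.
# Schematic of a crafted payload mimicking LangChain's serialization format.
# The 'id' path and kwargs below are placeholders, not a working exploit.
malicious_payload = {
    "lc": 1,                                # internal marker: "this is a serialized LC object"
    "type": "constructor",                  # tells the loader to instantiate a class
    "id": ["some", "module", "SomeClass"],  # dotted import path of the target class
    "kwargs": {"attacker": "controlled"},   # constructor arguments
}
# Prior to the patch, dumps()/dumpd() passed such user-supplied dicts through
# unescaped, so a later loads() call treated them as trusted LangChain objects.
The patched module's docstring, shown below, documents the fix: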
# Security patch from libs/core/langchain_core/load/dump.py
# Source: https://github.com/langchain-ai/langchain/commit/5ec0fa69de31bbe3d76e4cf9cd65a6accb8466c8
# Before (vulnerable):
# """Dump objects to json."""
# After (patched):
# """Serialize LangChain objects to JSON.
#
# Provides `dumps` (to JSON string) and `dumpd` (to dict) for serializing
# `Serializable` objects.
#
# ## Escaping
#
# During serialization, plain dicts (user data) that contain an `'lc'` key are escaped
# by wrapping them: `{"__lc_escaped__": {...original...}}`. This prevents injection
# attacks where malicious data could trick the deserializer into instantiating
# arbitrary classes. The escape marker is removed during deserialization.
#
# This is an allowlist approach: only dicts explicitly produced by
# `Serializable.to_json()` are treated as LC objects; everything else is escaped if it
# could be confused with the LC format.
# """
from langchain_core.load._validation import _serialize_value
from langchain_core.load.serializable import Serializable, to_json_not_implemented
Detection Methods for CVE-2025-68664
Indicators of Compromise
- Unusual JSON payloads containing 'lc' keys in API requests or logs where such data structures are unexpected
- Error messages or stack traces related to unexpected class instantiation during deserialization operations
- Anomalous behavior in LangChain-based applications such as unexpected object creation or execution flow changes
- Network requests containing serialized data with suspicious 'lc', 'type', or 'id' key combinations targeting LangChain endpoints
Detection Strategies
- Implement input validation to detect and flag JSON payloads containing 'lc' keys from untrusted sources before processing (a minimal pre-filter sketch follows this list)
- Monitor application logs for deserialization errors or unexpected class loading attempts in LangChain components
- Deploy runtime application self-protection (RASP) to detect object injection attempts during deserialization
- Use static code analysis tools to identify usage of vulnerable dumps() and dumpd() functions in applications using langchain_core versions prior to 0.3.81 or 1.2.5
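As a concrete starting point for the first strategy above, the following sketch pre-screens untrusted JSON for LangChain's 'lc' marker before it reaches application code. The function names and the reject-on-detection policy are assumptions for illustration, not part of langchain_core.
# Hypothetical pre-filter for untrusted JSON payloads (not part of langchain_core).
import json
from typing import Any

def contains_lc_marker(value: Any) -> bool:
    """Return True if any nested dict carries the 'lc' key that LangChain
    uses as its internal serialization marker."""
    if isinstance(value, dict):
        return "lc" in value or any(contains_lc_marker(v) for v in value.values())
    if isinstance(value, list):
        return any(contains_lc_marker(item) for item in value)
    return False

def validate_untrusted_payload(raw: str) -> Any:
    """Parse untrusted JSON and reject anything resembling LC serialization."""
    payload = json.loads(raw)
    if contains_lc_marker(payload):
        raise ValueError("payload contains an 'lc' key; possible CVE-2025-68664 probe")
    return payload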
Monitoring Recommendations
- Enable detailed logging for all serialization and deserialization operations in LangChain applications (see the wrapper sketch after this list)
- Set up alerts for API endpoints receiving JSON payloads with unexpected structural patterns matching LangChain's internal serialization format
- Monitor Python application runtime for unexpected module imports or class instantiations that could indicate exploitation attempts
- Implement network-level monitoring for suspicious patterns in requests targeting LLM application endpoints
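One way to implement the first two recommendations is a thin logging wrapper around loads(). The sketch below is illustrative: the wrapper name and log messages are assumptions, and only the loads() import reflects the actual langchain_core API.
# Illustrative logging wrapper around langchain_core's loads(); the wrapper
# name and log messages are assumptions, not part of the library.
import logging
from typing import Any

from langchain_core.load import loads

logger = logging.getLogger("langchain.deserialization")

def audited_loads(text: str, **kwargs: Any) -> Any:
    """Deserialize with logging so anomalous payloads surface in application logs."""
    if '"lc"' in text:
        logger.info("deserializing payload containing an 'lc' marker (%d bytes)", len(text))
    try:
        return loads(text, **kwargs)
    except Exception:
        # A failed reconstruction can indicate a crafted payload; keep the trace.
        logger.exception("deserialization failed; possible CVE-2025-68664 exploitation attempt")
        raise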
How to Mitigate CVE-2025-68664
Immediate Actions Required
- Upgrade langchain_core immediately: to 0.3.81 or later on the 0.x branch, or 1.2.5 or later on the 1.x branch
- Audit all code paths where user-controlled data may be passed to dumps(), dumpd(), or loads() functions
- Implement input sanitization to strip or escape 'lc' keys from untrusted data before serialization (a sketch follows this list)
- Review and restrict deserialization of data from external or untrusted sources until patches are applied
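The sanitization step from the list above could look like the following minimal sketch. It assumes a policy of silently dropping the marker key; the helper name is illustrative and not part of langchain_core.
# Hypothetical sanitization layer: strip 'lc' keys from untrusted data before
# it reaches dumps()/dumpd(). Dropping (vs. escaping) the key is a policy choice.
from typing import Any

def strip_lc_keys(value: Any) -> Any:
    """Recursively remove 'lc' keys so user data cannot be mistaken for
    LangChain's internal serialization format."""
    if isinstance(value, dict):
        return {k: strip_lc_keys(v) for k, v in value.items() if k != "lc"}
    if isinstance(value, list):
        return [strip_lc_keys(item) for item in value]
    return value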
Patch Information
LangChain has released security patches in versions 0.3.81 and 1.2.5 of langchain_core. The fix implements an allowlist approach where plain dictionaries containing 'lc' keys are now escaped by wrapping them with a {"__lc_escaped__": {...original...}} structure during serialization. This prevents injection attacks by ensuring that only dictionaries explicitly produced by Serializable.to_json() are treated as LangChain objects. The escape marker is automatically removed during deserialization.
For detailed information, see the GitHub Security Advisory GHSA-c67j-w6g6-q2cm, Pull Request #34455, and Pull Request #34458.
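The patched behavior can be illustrated with a short round trip. This sketch assumes langchain_core 0.3.81+/1.2.5+; the commented output shape follows the escaping described in the patched module's docstring, quoted earlier.
# Round-trip illustration of the patched escaping (langchain_core 0.3.81+/1.2.5+).
from langchain_core.load import dumpd, load

user_data = {"lc": 1, "type": "constructor", "id": ["os", "system"]}

serialized = dumpd(user_data)
# Patched versions wrap the suspicious dict instead of emitting it verbatim:
# {"__lc_escaped__": {"lc": 1, "type": "constructor", "id": ["os", "system"]}}

restored = load(serialized)
# The escape marker is removed on deserialization, so the data comes back as a
# plain dict instead of triggering class instantiation.
assert restored == user_data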
Workarounds
- Avoid deserializing data from untrusted sources until the patch can be applied
- Implement a custom input sanitization layer that strips or rejects dictionaries containing 'lc' keys before they reach LangChain serialization functions (such as the strip_lc_keys sketch above)
- Use network segmentation to limit exposure of LangChain-powered applications to untrusted networks
- Deploy web application firewalls (WAF) with custom rules to detect and block payloads containing suspicious 'lc' key patterns
# Upgrade langchain_core to the patched version (0.x branch); quote the
# specifier so the shell does not interpret '>' as redirection
pip install --upgrade "langchain-core>=0.3.81,<1"
# Or for version 1.x branch
pip install --upgrade "langchain-core>=1.2.5"
# Verify installed version
pip show langchain-core | grep Version