CVE-2025-49655 Overview
CVE-2025-49655 is a critical insecure deserialization vulnerability in the Keras deep learning framework that allows attackers to execute arbitrary code on target systems. The vulnerability exists in Keras versions 3.11.0 through 3.11.2, where a maliciously crafted Keras model file containing a TorchModuleWrapper class can bypass safe mode protections and execute arbitrary code when loaded by an unsuspecting user. This vulnerability can be triggered through both local and remote model files, making it particularly dangerous in environments where machine learning models are shared or downloaded from external sources.
Critical Impact
This vulnerability enables remote code execution through malicious Keras model files, bypassing safe mode protections and potentially compromising systems that load untrusted machine learning models.
Affected Products
- Keras framework versions 3.11.0 to 3.11.2
- Applications and services that load Keras model files from untrusted sources
- Machine learning pipelines that download and deserialize external model files
Discovery Timeline
- 2025-10-17 - CVE-2025-49655 published to NVD
- 2025-10-21 - Last updated in NVD database
Technical Details for CVE-2025-49655
Vulnerability Analysis
This vulnerability is classified as CWE-502 (Deserialization of Untrusted Data), a well-known class of security issues that occurs when applications deserialize data from untrusted sources without proper validation. In the context of Keras, the framework provides a "safe mode" feature intended to prevent code execution when loading model files. However, this implementation contains a critical flaw that allows the TorchModuleWrapper class to bypass these safety mechanisms.
When a Keras model file is loaded, the framework deserializes the stored objects to reconstruct the model architecture. The TorchModuleWrapper class, which provides interoperability between Keras and PyTorch, was not properly restricted under safe mode, creating an exploitation pathway. An attacker can craft a malicious .keras model file that embeds arbitrary Python code within a TorchModuleWrapper object, which gets executed during the deserialization process regardless of safe mode settings.
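For context, this is the loading pattern the vulnerability undermines. The sketch below shows the documented `safe_mode` usage in Keras 3; the import guard and the `load_untrusted_model` helper name are illustrative additions, not part of the Keras API.

```python
# Hedged sketch: how safe_mode is meant to be used when loading model files.
# Wrapped in a guard so the snippet degrades gracefully without Keras installed.
try:
    import keras
    HAVE_KERAS = True
except ImportError:
    HAVE_KERAS = False


def load_untrusted_model(path):
    """Load a model with safe_mode enabled (intended to block unsafe
    deserialization such as arbitrary lambdas)."""
    if not HAVE_KERAS:
        raise RuntimeError("keras is not installed")
    # safe_mode=True is the documented guard; CVE-2025-49655 shows it was
    # bypassable via TorchModuleWrapper in Keras 3.11.0-3.11.2.
    return keras.saving.load_model(path, safe_mode=True)
```

Even on a patched version, treating `safe_mode=True` as one layer of defense rather than a complete sandbox remains prudent.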
Root Cause
The root cause of this vulnerability lies in the incomplete implementation of safe mode restrictions in the Keras model loading functionality. The TorchModuleWrapper class was inadvertently excluded from the allowlist/blocklist mechanism that safe mode uses to prevent dangerous object deserialization. This oversight allows malicious payloads embedded within TorchModuleWrapper instances to execute during model loading, completely circumventing the security controls that users reasonably expect when enabling safe mode.
Attack Vector
The attack can be executed through network-based delivery of malicious model files. An attacker would craft a Keras model file containing a weaponized TorchModuleWrapper object with embedded malicious code. This file could be distributed through:
- Model sharing platforms and repositories
- Compromised machine learning model hubs
- Supply chain attacks targeting ML pipelines
- Phishing attacks targeting data scientists and ML engineers
- Man-in-the-middle attacks on model download operations
When a victim loads the malicious model file using Keras (even with safe mode enabled), the embedded code executes with the privileges of the user running the application. This can lead to complete system compromise, data exfiltration, or further lateral movement within an organization's infrastructure.
The attack requires no authentication, and the only user interaction needed is the routine act of loading a model file, making it highly exploitable in automated ML pipeline environments where models are frequently downloaded and processed from external sources.
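The bypass is dangerous for the same reason unguarded pickle-style deserialization is dangerous in general: the format can encode a call to an arbitrary callable that runs during loading. The snippet below is a generic, deliberately harmless illustration of that mechanism, not the actual Keras exploit; a real payload would substitute a destructive callable such as `os.system`.

```python
import pickle


class Payload:
    """Illustrative gadget: unpickling invokes an attacker-chosen callable."""

    def __reduce__(self):
        # A real exploit would return something like (os.system, ("<command>",));
        # print is used here as a harmless stand-in.
        return (print, ("code ran during deserialization",))


blob = pickle.dumps(Payload())
# Deserializing executes the embedded call; no Payload object is rebuilt.
result = pickle.loads(blob)
```

The victim never has to call the payload explicitly; deserialization itself is the trigger, which is why "just loading" a malicious model file is sufficient.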
Detection Methods for CVE-2025-49655
Indicators of Compromise
- Unexpected network connections or data exfiltration attempts following Keras model loading operations
- Unusual process spawning or command execution originating from Python/Keras processes
- Suspicious .keras or .h5 model files with unexpected TorchModuleWrapper configurations
- Anomalous file system activity after model deserialization in ML pipelines
- Unexpected system modifications or persistence mechanisms created by ML application processes
Detection Strategies
- Monitor Keras model loading operations for files containing TorchModuleWrapper classes from untrusted sources
- Implement file integrity monitoring on model storage directories and ML artifact repositories
- Deploy endpoint detection rules to identify suspicious process trees spawned from Python ML applications
- Enable application logging for all model loading operations with source tracking
- Use static analysis tools to scan incoming model files for potentially malicious embedded code
Monitoring Recommendations
- Implement network segmentation for ML pipeline infrastructure to limit blast radius of potential compromises
- Enable verbose logging for Keras model loading operations in production environments
- Monitor for unusual Python process behavior including unexpected subprocess creation or network activity
- Establish baseline behavior for ML applications and alert on deviations following model loading events
- Deploy SentinelOne agents on systems running Keras to detect post-exploitation activity
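For in-process visibility, CPython's audit-hook mechanism (PEP 578, Python 3.8+) can record process-creation and network events raised inside the interpreter that loads models. This is a minimal sketch; a real deployment would forward the collected events to a SIEM rather than keep them in a list, and note that audit hooks cannot be removed once installed.

```python
import sys

FLAGGED_EVENTS = ("subprocess.Popen", "os.system", "socket.connect")
observed_events = []  # a production hook would ship these to a SIEM instead


def ml_audit_hook(event, args):
    """Record process-creation and network events raised by this interpreter."""
    if event in FLAGGED_EVENTS:
        observed_events.append((event, args))


# Hooks installed via sys.addaudithook persist for the life of the process.
sys.addaudithook(ml_audit_hook)
```

A model-loading service that should never spawn subprocesses or open outbound sockets can treat any entry in this log as a strong indicator of compromise.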
How to Mitigate CVE-2025-49655
Immediate Actions Required
- Upgrade Keras to version 3.11.3 or later immediately on all affected systems
- Audit all Keras model files currently in use for unexpected or suspicious TorchModuleWrapper content
- Restrict model loading to trusted, verified sources until patching is complete
- Implement network-level controls to prevent downloading models from untrusted external sources
- Review and enhance input validation for any ML pipelines that accept external model files
Patch Information
The Keras development team has addressed this vulnerability in version 3.11.3. The fix implements proper restrictions on the TorchModuleWrapper class when safe mode is enabled, preventing arbitrary code execution during deserialization. Organizations should upgrade to Keras 3.11.3 or later as soon as possible.
For detailed information about the fix, refer to the Keras Pull Request #21575 and the HiddenLayer Security Advisory.
Workarounds
- Avoid loading Keras model files from untrusted or unverified sources until the patch is applied
- Implement strict model file validation and scanning before loading in production environments
- Use containerized or sandboxed environments for loading models from external sources to limit potential impact
- Apply network-level controls to restrict outbound connections from ML pipeline infrastructure
- Consider implementing model signing and verification mechanisms to ensure model integrity
```shell
# Upgrade Keras to the patched version (quote the spec so the shell
# does not treat ">=" as output redirection)
pip install --upgrade "keras>=3.11.3"

# Verify the installed version
python -c "import keras; print(keras.__version__)"

# For conda environments
conda install -c conda-forge "keras>=3.11.3"
```