CVE-2024-3660 Overview
CVE-2024-3660 is an arbitrary code injection vulnerability affecting TensorFlow's Keras framework in versions prior to 2.13. It allows attackers to execute arbitrary code with the same permissions as the application by distributing maliciously crafted machine learning models. Because the flaw enables code execution regardless of the application's intended behavior, it is particularly dangerous in environments where untrusted models may be loaded.
Critical Impact
Attackers can achieve arbitrary code execution by crafting malicious Keras models, potentially compromising entire machine learning pipelines and underlying infrastructure with full application privileges.
Affected Products
- Keras versions prior to 2.13
- TensorFlow installations using vulnerable Keras components
- Applications loading untrusted or third-party Keras models
Discovery Timeline
- April 16, 2024 - CVE-2024-3660 published to NVD
- September 23, 2025 - Last updated in NVD database
Technical Details for CVE-2024-3660
Vulnerability Analysis
This vulnerability falls under CWE-94 (Improper Control of Generation of Code - Code Injection). The flaw exists in how Keras handles model deserialization, allowing malicious code embedded within model files to execute during the loading process. When an application loads a compromised Keras model, the attacker's payload executes with the full privileges of the host application.
The network-accessible nature of this vulnerability means that applications serving or processing models from external sources—such as model repositories, user uploads, or remote APIs—are particularly at risk. No authentication is required, and no user interaction is necessary beyond the application loading the malicious model.
Root Cause
The root cause stems from insufficient validation and sandboxing during Keras model deserialization. Keras model files (typically in .h5 or SavedModel format) can contain Lambda layers or custom objects that execute arbitrary Python code when the model is loaded. Prior to version 2.13, Keras did not adequately restrict or sanitize these code execution pathways, allowing attackers to embed malicious payloads that execute automatically during keras.models.load_model() or similar operations.
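The save-side mechanics can be illustrated with a short, deliberately harmless sketch. This is not the exploit itself, only a minimal reproduction on a vulnerable Keras 2.x install; the "id" command is a benign placeholder standing in for arbitrary attacker code.
# Illustration only: a Lambda layer wrapping a Python lambda. On save,
# Keras marshals the lambda's bytecode directly into the model file.
import keras
from keras import layers

# "id" is a harmless placeholder for an arbitrary attacker command
evil = lambda x: __import__("os").system("id") or x

model = keras.Sequential([
    layers.Lambda(evil, input_shape=(4,)),
])
model.save("lambda_model.h5")  # the bytecode now travels inside the file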
Attack Vector
The attack vector leverages the model loading functionality in Keras applications. An attacker can craft a malicious model file containing embedded code within Lambda layers, custom layer definitions, or serialized Python objects. When a victim application loads this model—whether from a file share, model repository, API endpoint, or user upload—the malicious code executes immediately.
Attack scenarios include:
- Uploading malicious models to public model repositories
- Compromising ML pipelines that process third-party models
- Man-in-the-middle attacks substituting legitimate models with malicious ones
- Social engineering developers to test or evaluate poisoned models
The vulnerability mechanism exploits Keras's Lambda layer functionality and custom object deserialization. When a model containing malicious code is loaded using functions like keras.models.load_model(), the embedded payload executes during the deserialization process. This occurs because Keras relies on Python's serialization mechanisms without adequate sandboxing. For detailed technical analysis, see CERT/CC Vulnerability Note VU#253266.
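The load side is equally simple. Continuing the earlier sketch (same lambda_model.h5 file, same vulnerable install), loading reconstructs the marshaled bytecode without validation, and the payload fires as soon as the layer executes:
import numpy as np
import keras

# On Keras < 2.13 the marshaled bytecode is rebuilt with no sandboxing
loaded = keras.models.load_model("lambda_model.h5")

# The embedded command runs once the layer executes; some crafted
# payloads fire during deserialization itself, before any inference
loaded.predict(np.zeros((1, 4)))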
Detection Methods for CVE-2024-3660
Indicators of Compromise
- Unexpected process spawning or network connections originating from ML application processes
- Suspicious Lambda layers or custom objects within Keras model files containing encoded or obfuscated code
- Anomalous file system activity during model loading operations
- Unexpected system calls or privilege escalation attempts from Python/TensorFlow processes
Detection Strategies
- Implement static analysis scanning of model files before loading to detect suspicious Lambda layers or custom objects (see the sketch after this list)
- Monitor application behavior during model loading operations for unexpected code execution patterns
- Deploy runtime application self-protection (RASP) solutions to detect code injection attempts
- Audit model provenance and implement integrity verification for all loaded models
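As one example of the static-analysis approach from the first item above, the following sketch inspects a legacy HDF5 model's architecture for Lambda layers without routing it through Keras deserialization. It assumes the model_config attribute used by the HDF5 format; scan_model() is a hypothetical helper, not a Keras API.
import json
import h5py

SUSPICIOUS_CLASSES = {"Lambda"}  # extend with risky custom classes as needed

def scan_model(path):
    # Read the architecture JSON directly from the HDF5 attributes,
    # without triggering any Keras code paths
    with h5py.File(path, "r") as f:
        config = json.loads(f.attrs["model_config"])
    flagged = []

    def walk(layer_cfg):
        if layer_cfg.get("class_name") in SUSPICIOUS_CLASSES:
            flagged.append(layer_cfg.get("config", {}).get("name", "<unnamed>"))
        for sub in layer_cfg.get("config", {}).get("layers", []):
            walk(sub)

    walk(config)
    return flagged

if scan_model("untrusted.h5"):
    raise SystemExit("Refusing to load: model contains Lambda layers")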
Monitoring Recommendations
- Enable comprehensive logging for all model loading operations in production environments (a wrapper sketch follows this list)
- Implement file integrity monitoring on model storage locations
- Configure alerting for unusual process behavior from ML application containers or services
- Monitor network connections initiated by ML workloads for unexpected outbound communications
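For the logging recommendation above, one lightweight pattern is to route every load through an audited wrapper. audited_load_model() below is a hypothetical in-house helper, not part of Keras; the log path is illustrative.
import hashlib
import logging

import keras

logging.basicConfig(filename="model_loads.log", level=logging.INFO)

def audited_load_model(path, **kwargs):
    # Record the file's digest before it is ever deserialized, so the
    # audit trail survives even if loading compromises the process
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    logging.info("loading model %s sha256=%s", path, digest)
    return keras.models.load_model(path, **kwargs)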
How to Mitigate CVE-2024-3660
Immediate Actions Required
- Upgrade Keras to version 2.13 or later immediately across all environments
- Audit all currently deployed Keras models for suspicious Lambda layers or custom objects
- Implement model provenance verification to ensure only trusted models are loaded (a digest-pinning sketch follows this list)
- Restrict model loading sources to verified, trusted repositories only
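A simple form of the provenance check above is digest pinning. The sketch below is illustrative only; TRUSTED_DIGESTS, verify_model(), and the placeholder value are hypothetical names a pipeline would populate from a trusted source, not a Keras feature.
import hashlib

# Hypothetical allowlist of known-good sha256 digests
TRUSTED_DIGESTS = {
    "sentiment.h5": "REPLACE_WITH_PINNED_SHA256",
}

def verify_model(path, name):
    with open(path, "rb") as f:
        actual = hashlib.sha256(f.read()).hexdigest()
    if TRUSTED_DIGESTS.get(name) != actual:
        raise RuntimeError(f"Untrusted model {name}: digest {actual}")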
Patch Information
The vulnerability is addressed in Keras version 2.13 and later. Organizations should update their TensorFlow/Keras installations to the latest stable version. For environments where immediate upgrades are not feasible, implement the workarounds below and prioritize upgrade planning.
For additional guidance, refer to CERT/CC Vulnerability Note VU#253266.
Workarounds
- Avoid loading models from untrusted or unverified sources until patched
- Use the safe_mode=True parameter when loading models (available in Keras 2.13 and later) to block deserialization of Lambda layers (see the sketch after this list)
- Implement model sandboxing by loading untrusted models in isolated container environments with restricted privileges
- Perform manual code review of model files before deployment, specifically examining Lambda layers and custom objects
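For the safe_mode workaround above, a minimal sketch, assuming Keras 2.13+ with the native .keras format, where safe_mode rejects embedded Lambda deserialization:
import keras

try:
    # safe_mode=True (the default in recent versions) refuses to
    # deserialize Python lambdas embedded in the model file
    model = keras.models.load_model("untrusted.keras", safe_mode=True)
except ValueError as exc:
    print("Model rejected by safe_mode:", exc)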
# Configuration example
# Upgrade Keras to patched version
pip install --upgrade "keras>=2.13"
# Verify installed version
python -c "import keras; print(keras.__version__)"
# For TensorFlow integrated environments
pip install --upgrade "tensorflow>=2.13"
# Container isolation for model loading (Docker example)
docker run --rm --read-only --network none \
-v /path/to/model:/model:ro \
your-ml-image python validate_model.py /model/untrusted.h5

