CVE-2026-1462 Overview
A critical insecure deserialization vulnerability has been identified in the TFSMLayer class of the Keras deep learning library, version 3.13.0. It allows attackers to bypass the security guarantees of safe_mode=True when loading .keras model files, enabling arbitrary code execution with the victim's privileges during model inference.
The flaw stems from the unconditional loading of external TensorFlow SavedModels during deserialization, combined with the serialization of attacker-controlled file paths and insufficient validation in the from_config() method. This creates a dangerous attack surface where malicious model files can execute arbitrary code when loaded by unsuspecting users.
Critical Impact
Attackers can craft malicious .keras model files that execute arbitrary code during model inference, completely bypassing Keras safe_mode protections and compromising the victim's system.
Affected Products
- Keras version 3.13.0
- Applications using the TFSMLayer class for model deserialization
- Machine learning pipelines loading untrusted .keras model files
Discovery Timeline
- 2026-04-13 - CVE-2026-1462 published to NVD
- 2026-04-13 - Last updated in NVD database
Technical Details for CVE-2026-1462
Vulnerability Analysis
This vulnerability is classified as CWE-502 (Deserialization of Untrusted Data), a well-known class of security issues that allows attackers to inject malicious payloads through serialized data structures. In the context of Keras, the vulnerability manifests in the TFSMLayer class, which is responsible for loading TensorFlow SavedModel layers within Keras model architectures.
When a Keras model containing a TFSMLayer is deserialized, the from_config() method processes the layer configuration without properly validating the source of external SavedModels. Even when safe_mode=True is explicitly set to prevent loading of arbitrary code, the TFSMLayer unconditionally loads external SavedModel files from paths specified in the serialized configuration.
This design flaw allows an attacker to craft a malicious .keras model file that references an attacker-controlled TensorFlow SavedModel. When the victim loads this model, the malicious SavedModel is loaded and executed during model inference, granting the attacker code execution with the victim's privileges.
Root Cause
The root cause of this vulnerability lies in three interconnected issues within the TFSMLayer implementation:
Unconditional External Loading: The TFSMLayer class loads external TensorFlow SavedModels without checking whether safe_mode protections should apply to this operation.
Serialization of File Paths: Attacker-controlled file paths can be embedded in the layer configuration and are trusted during deserialization without validation.
Missing Validation in from_config(): The from_config() method lacks proper validation to ensure that referenced SavedModels originate from trusted sources or are sandboxed appropriately.
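The trust gap can be illustrated with a small sketch. The config shape below is an assumption based on this advisory, not verbatim Keras output, and the allowlist check is a hypothetical site-specific mitigation, not a Keras API:

```python
# Assumed shape of a serialized TFSMLayer entry; field names are illustrative.
layer_config = {
    "class_name": "TFSMLayer",
    "config": {
        "name": "tfsm_layer",
        "filepath": "/tmp/attacker_saved_model",  # attacker-controlled, trusted verbatim
        "call_endpoint": "serving_default",
    },
}

def filepath_is_untrusted(entry, trusted_prefixes=("/opt/models/",)):
    """Flag TFSMLayer entries whose filepath falls outside a trusted allowlist.

    `trusted_prefixes` is a hypothetical site policy, not part of Keras.
    """
    if entry.get("class_name") != "TFSMLayer":
        return False
    path = entry.get("config", {}).get("filepath", "")
    return not path.startswith(trusted_prefixes)

print(filepath_is_untrusted(layer_config))  # → True
```

Because the vulnerable from_config() performs no such check, any filepath an attacker embeds in the archive is honored at load time.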
Attack Vector
The attack vector for CVE-2026-1462 requires user interaction, typically in the form of loading a malicious .keras model file. The attack flow proceeds as follows:
1. An attacker creates a malicious TensorFlow SavedModel containing arbitrary code that executes during model loading or inference.
2. The attacker crafts a .keras model file with a TFSMLayer configuration pointing to the malicious SavedModel, either as an embedded component or via a network-accessible path.
3. The victim downloads or receives the malicious .keras model file, believing it to be a legitimate pre-trained model.
4. When the victim loads the model using Keras with safe_mode=True, the security check is bypassed, and the malicious SavedModel is loaded.
5. During model inference, the attacker's code executes with the victim's privileges, potentially leading to data exfiltration, system compromise, or lateral movement.
The vulnerability mechanism involves the TFSMLayer class bypassing safe_mode protections during deserialization. For technical implementation details, see the GitHub Keras Commit Update and the Huntr Security Bounty Listing.
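The bypass pattern can be reduced to a generic sketch. This is not the actual Keras source; it only mirrors the control flow described above, in which safe_mode gates some layer types but never reaches TFSMLayer's external load:

```python
def deserialize_layer(cfg, safe_mode=True):
    """Generic sketch of the flawed control flow, not actual Keras code."""
    if cfg["class_name"] == "Lambda" and safe_mode:
        # safe_mode correctly blocks layers that carry arbitrary code...
        raise ValueError("Lambda layers are disallowed when safe_mode=True")
    if cfg["class_name"] == "TFSMLayer":
        # ...but the external SavedModel load happens without any safe_mode check.
        return f"loaded external SavedModel from {cfg['config']['filepath']}"
    return f"built {cfg['class_name']}"

# Even with safe_mode=True, the external artifact is still pulled in:
print(deserialize_layer(
    {"class_name": "TFSMLayer", "config": {"filepath": "/tmp/evil_sm"}},
    safe_mode=True,
))  # → loaded external SavedModel from /tmp/evil_sm
```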
Detection Methods for CVE-2026-1462
Indicators of Compromise
- Unexpected network connections during model loading operations, particularly to external URLs or suspicious file paths
- Model files containing TFSMLayer configurations with external or unfamiliar SavedModel references
- Unusual process spawning or system calls during Keras model inference
- Presence of .keras model files with embedded or referenced TensorFlow SavedModels from untrusted sources
Detection Strategies
- Monitor and audit all model file loading operations in machine learning pipelines for unexpected external references
- Implement file integrity checking for .keras model files before loading, comparing against known-good hashes
- Deploy endpoint detection to identify suspicious process behavior during Python/TensorFlow execution contexts
- Use SentinelOne's behavioral AI to detect anomalous code execution patterns during model inference operations
Monitoring Recommendations
- Enable verbose logging for Keras model loading operations to capture file paths and external references
- Implement network monitoring to detect unexpected outbound connections during model deserialization
- Establish baseline behavior for ML inference workloads and alert on deviations
- Monitor for unauthorized file system access patterns during model loading
How to Mitigate CVE-2026-1462
Immediate Actions Required
- Upgrade Keras to a patched version that addresses the TFSMLayer safe_mode bypass vulnerability
- Audit all existing .keras model files in production and development environments for TFSMLayer usage
- Implement strict model provenance tracking and only load models from trusted, verified sources
- Consider isolating model loading operations in sandboxed environments with restricted privileges
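As a stop-gap while rolling out the upgrade, pipelines can gate on the installed version. This advisory names only 3.13.0 as affected, so that is the only version flagged below; confirm the full affected range against the official fix before relying on such a check:

```python
import importlib.metadata

KNOWN_VULNERABLE = {"3.13.0"}  # the only version named in this advisory

def version_is_vulnerable(version):
    """True if the given version string matches a known-bad release."""
    return version in KNOWN_VULNERABLE

def installed_keras_is_vulnerable():
    """Check the environment's installed keras package, if any."""
    try:
        return version_is_vulnerable(importlib.metadata.version("keras"))
    except importlib.metadata.PackageNotFoundError:
        return False
```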
Patch Information
A security fix has been committed to the Keras repository. Organizations should update to the latest patched version of Keras. The fix addresses the validation gap in the from_config() method and ensures that safe_mode protections are properly enforced for TFSMLayer components.
For detailed patch information, refer to the GitHub Keras Commit Update.
Workarounds
- Avoid loading .keras model files from untrusted or unverified sources until patched
- Manually inspect model configurations for TFSMLayer components before loading
- Run model loading operations in isolated containers or virtual environments with minimal privileges
- Implement network isolation for model loading processes to prevent external SavedModel retrieval
# Verify the installed Keras version (3.13.0 is affected)
pip show keras | grep Version
python -c "import keras; print(keras.__version__)"
# Run model loading in an isolated container: no network access, read-only model mount
docker run --rm --network=none -v /path/to/models:/models:ro keras-sandbox python load_model.py

