CVE-2026-1669 Overview
CVE-2026-1669 is an arbitrary file read vulnerability affecting the model loading mechanism in Keras, specifically within the HDF5 integration component. This vulnerability allows a remote attacker to read local files and disclose sensitive information by crafting a malicious .keras model file that exploits HDF5 external dataset references. The vulnerability impacts Keras versions 3.0.0 through 3.13.1 across all supported platforms.
Critical Impact
Attackers can leverage crafted model files to exfiltrate sensitive data from target systems, potentially exposing credentials, configuration files, and other confidential information through the HDF5 external reference mechanism.
Affected Products
- Keras 3.0.0 through 3.13.1
- All platforms supporting Keras 3.x with HDF5 integration
- Applications loading untrusted .keras model files
Discovery Timeline
- 2026-02-11 - CVE-2026-1669 published to NVD
- 2026-02-12 - Last updated in NVD database
Technical Details for CVE-2026-1669
Vulnerability Analysis
This vulnerability is classified under CWE-73 (External Control of File Name or Path), which occurs when software constructs a file path using externally supplied input without proper validation. In this case, the Keras model loading mechanism fails to adequately sanitize external dataset references embedded within HDF5-formatted model files.
When Keras loads a .keras model file containing HDF5 data, it processes external dataset references that can point to arbitrary file paths on the local system. An attacker can craft a malicious model file with external references pointing to sensitive files such as /etc/passwd, configuration files, SSH keys, or application secrets. When the victim loads this model, Keras follows the external references and reads the contents of the specified files, exposing them to the attacker.
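To make the mechanism concrete, the sketch below uses h5py to declare a dataset whose backing storage is an external raw file. This is a minimal illustration of the primitive described above, not Keras's own code: the file and dataset names are hypothetical, and a real attack would embed such an HDF5 payload inside a .keras archive.

# Illustrative only: an HDF5 dataset whose storage is declared to live
# in an arbitrary external file (/etc/passwd here)
import h5py

with h5py.File("malicious_weights.h5", "w") as f:
    # No data is written; the dataset's bytes are mapped to the
    # external file's contents at read time
    f.create_dataset(
        "layer0/kernel",
        shape=(4096,),
        dtype="u1",
        external=[("/etc/passwd", 0, 4096)],
    )

# Any consumer that reads the dataset pulls the external file's bytes;
# if the target file is shorter than the declared size, the read may
# fail or be padded depending on the HDF5 version
with h5py.File("malicious_weights.h5", "r") as f:
    leaked = bytes(f["layer0/kernel"][:])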
Because the attack vector is network-based and requires no privileges, this is particularly concerning for machine learning workflows that load models from external or untrusted sources, a common practice in collaborative ML development and on model sharing platforms.
Root Cause
The root cause lies in insufficient validation of HDF5 external dataset references during the model deserialization process. The HDF5 file format supports external links that can reference data stored in separate files. Keras's model loading implementation trusts these external references without verifying that they point to legitimate model data files rather than sensitive system files.
The vulnerability specifically manifests because:
- HDF5 external dataset references are processed without path validation
- No allowlist or blocklist restricts which files can be referenced (a sketch of such a check follows this list)
- The referenced file contents are read and potentially exposed during model loading operations
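The missing control can be illustrated with a simple allowlist check. This is a sketch of the kind of validation the loader lacks, not a description of any actual Keras code; the allowlist root and helper name are hypothetical.

# Sketch: only permit external storage paths that resolve inside an
# approved directory (requires Python 3.9+ for is_relative_to)
from pathlib import Path

ALLOWED_ROOT = Path("/opt/models").resolve()

def is_allowed_external_path(raw_path: str) -> bool:
    # Resolve symlinks and relative segments before comparing, so that
    # paths like /opt/models/../../etc/passwd are rejected
    resolved = Path(raw_path).resolve()
    return resolved.is_relative_to(ALLOWED_ROOT)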
Attack Vector
The attack exploits the network-accessible nature of model sharing. An attacker crafts a malicious .keras model file containing HDF5 external dataset references pointing to sensitive local files on the target system. This file can be distributed through:
- Public model repositories and sharing platforms
- Phishing emails with attached model files
- Compromised model hosting services
- Supply chain attacks on ML pipelines
When a victim loads the crafted model using Keras, the HDF5 library follows the external references and reads the specified files. The attacker can then capture this information through error messages or model outputs, or by embedding exfiltration mechanisms within the model structure. This attack requires user interaction (loading the malicious model) but no authentication or privileges on the target system.
Detection Methods for CVE-2026-1669
Indicators of Compromise
- Presence of .keras model files from untrusted or unknown sources in ML pipeline directories
- Unexpected file read operations originating from Python processes running Keras/TensorFlow
- HDF5 files containing external dataset references pointing to system paths like /etc/, /home/, or Windows system directories
- Anomalous network traffic following model loading operations that may indicate data exfiltration
Detection Strategies
- Monitor file system access patterns for Keras/TensorFlow processes attempting to read files outside expected model directories
- Implement file integrity monitoring on sensitive configuration files to detect unauthorized read attempts
- Analyze incoming .keras and HDF5 files for external dataset references before loading into production environments (a scanner sketch follows this list)
- Deploy application-level logging to capture model loading events and associated file operations
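One way to implement the pre-load analysis above is to unpack the .keras archive and walk each embedded HDF5 file for external links or external dataset storage. The sketch below assumes a .keras file is a zip archive containing HDF5 weights (e.g. model.weights.h5) and that h5py is available; the function name is illustrative.

# Defensive pre-load scan: flag external links and external dataset
# storage inside a .keras archive before the model is ever loaded
import io
import zipfile
import h5py

def find_external_refs(keras_path):
    findings = []

    def walk(group, prefix=""):
        for name in group:
            path = f"{prefix}/{name}"
            # Inspect the link itself without resolving it
            link = group.get(name, getlink=True)
            if isinstance(link, h5py.ExternalLink):
                findings.append((path, f"external link -> {link.filename}"))
                continue
            obj = group[name]
            if isinstance(obj, h5py.Dataset):
                # Dataset.external lists (filename, offset, size) tuples
                # for external raw storage, or None
                for fname, _offset, _size in (obj.external or []):
                    findings.append((path, f"external storage -> {fname}"))
            elif isinstance(obj, h5py.Group):
                walk(obj, path)

    with zipfile.ZipFile(keras_path) as zf:
        for member in zf.namelist():
            if member.endswith((".h5", ".hdf5")):
                with h5py.File(io.BytesIO(zf.read(member)), "r") as f:
                    walk(f)
    return findings

A non-empty result is grounds to quarantine the file rather than load it: refuse to call keras.saving.load_model() whenever find_external_refs() reports anything.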
Monitoring Recommendations
- Configure endpoint detection to alert on Python processes accessing sensitive system files during ML operations
- Implement network segmentation to limit outbound connectivity from ML processing environments
- Enable audit logging for file read operations in directories containing sensitive data
- Deploy SentinelOne Singularity to detect and prevent unauthorized file access patterns associated with this vulnerability class
How to Mitigate CVE-2026-1669
Immediate Actions Required
- Update Keras to a patched version beyond 3.13.1 when available (a version check sketch follows this list)
- Audit all existing .keras model files in your environment for external dataset references
- Implement strict controls on model file sources, accepting only models from trusted and verified origins
- Isolate ML model loading operations in sandboxed environments with restricted file system access
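As a quick check while a patched release is pending, you can compare the installed Keras version against the affected range from this advisory. A minimal sketch, assuming the packaging library (a common pip dependency) is installed:

# Check whether the installed Keras falls in the affected range
# (3.0.0 through 3.13.1 per this advisory)
from packaging.version import Version
import keras

installed = Version(keras.__version__)
if Version("3.0.0") <= installed <= Version("3.13.1"):
    print(f"Keras {installed} is in the affected range; plan an upgrade")
else:
    print(f"Keras {installed} is outside the affected range")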
Patch Information
Refer to the GitHub Security Advisories for official patch information and updated Keras releases addressing this vulnerability. Organizations should monitor the Keras project's security announcements for remediation guidance.
Workarounds
- Validate and sanitize all .keras model files before loading by inspecting HDF5 structures for external references
- Run model loading operations in containerized environments with minimal file system access and no access to sensitive files
- Implement a model vetting process that scans incoming models for potentially malicious external references before deployment
- Use file system access controls to prevent Keras processes from reading files outside designated model directories
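The container sketch below illustrates the sandboxing approach; the keras-sandbox image and load_model.py script are placeholders for your own hardened image and loader.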
# Example: Restrict file access for ML processes using container isolation
# Run Keras model loading in a restricted container with limited filesystem
# access and no outbound network connectivity
docker run --rm \
  --read-only \
  --tmpfs /tmp \
  --network none \
  -v /path/to/trusted/models:/models:ro \
  --security-opt no-new-privileges \
  keras-sandbox python load_model.py /models/model.keras