
CVE-2026-1462: Keras Package RCE Vulnerability

CVE-2026-1462 is a remote code execution flaw in the Keras package (version 3.13.0) that bypasses safe_mode protections during model deserialization. This article covers the technical details, affected versions, and mitigation steps.

Published: April 17, 2026

CVE-2026-1462 Overview

A critical insecure deserialization vulnerability has been identified in the TFSMLayer class of the Keras deep learning library, version 3.13.0. It allows attackers to bypass the security guarantees of safe_mode=True when loading .keras model files, enabling arbitrary code execution with the victim's privileges during model inference.

The flaw stems from the unconditional loading of external TensorFlow SavedModels during deserialization, combined with the serialization of attacker-controlled file paths and insufficient validation in the from_config() method. This creates a dangerous attack surface where malicious model files can execute arbitrary code when loaded by unsuspecting users.

Critical Impact

Attackers can craft malicious .keras model files that execute arbitrary code during model inference, completely bypassing Keras safe_mode protections and compromising the victim's system.

Affected Products

  • Keras version 3.13.0
  • Applications using the TFSMLayer class for model deserialization
  • Machine learning pipelines loading untrusted .keras model files

Discovery Timeline

  • 2026-04-13 - CVE-2026-1462 published to NVD
  • 2026-04-13 - Last updated in the NVD database

Technical Details for CVE-2026-1462

Vulnerability Analysis

This vulnerability is classified as CWE-502 (Deserialization of Untrusted Data), a well-known class of security issues that allows attackers to inject malicious payloads through serialized data structures. In the context of Keras, the vulnerability manifests in the TFSMLayer class, which is responsible for loading TensorFlow SavedModel layers within Keras model architectures.

When a Keras model containing a TFSMLayer is deserialized, the from_config() method processes the layer configuration without properly validating the source of external SavedModels. Even when safe_mode=True is explicitly set to prevent loading of arbitrary code, the TFSMLayer unconditionally loads external SavedModel files from paths specified in the serialized configuration.

This design flaw allows an attacker to craft a malicious .keras model file that references an attacker-controlled TensorFlow SavedModel. When the victim loads this model, the malicious SavedModel is loaded and executed during model inference, granting the attacker code execution with the victim's privileges.

Root Cause

The root cause of this vulnerability lies in three interconnected issues within the TFSMLayer implementation:

  1. Unconditional External Loading: The TFSMLayer class loads external TensorFlow SavedModels without checking whether safe_mode protections should apply to this operation.

  2. Serialization of File Paths: Attacker-controlled file paths can be embedded in the layer configuration and are trusted during deserialization without validation.

  3. Missing Validation in from_config(): The from_config() method lacks proper validation to ensure that referenced SavedModels originate from trusted sources or are sandboxed appropriately.
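As a sketch of what enforcement at the third point could look like (this is not the actual Keras patch; the allowlist and function name below are invented for illustration), a safe_mode-aware from_config() might refuse any SavedModel path outside a trusted root:

```python
import os

# Hypothetical allowlist of directories that deserialization may read from.
TRUSTED_MODEL_ROOTS = ("/opt/ml/approved_models",)

def validate_savedmodel_path(filepath, safe_mode=True):
    """Resolve the path and, in safe_mode, reject anything outside the allowlist."""
    resolved = os.path.realpath(filepath)
    if safe_mode and not any(
        resolved == root or resolved.startswith(root + os.sep)
        for root in TRUSTED_MODEL_ROOTS
    ):
        raise ValueError(
            "safe_mode=True: refusing external SavedModel at %r" % resolved
        )
    return resolved
```

The key design point is that the check happens at deserialization time, before any external file is opened, rather than relying on the caller to vet paths afterward.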

Attack Vector

The attack vector for CVE-2026-1462 requires user interaction, typically in the form of loading a malicious .keras model file. The attack flow proceeds as follows:

  1. An attacker creates a malicious TensorFlow SavedModel containing arbitrary code that executes during model loading or inference.

  2. The attacker crafts a .keras model file with a TFSMLayer configuration pointing to the malicious SavedModel, either as an embedded component or via a network-accessible path.

  3. The victim downloads or receives the malicious .keras model file, believing it to be a legitimate pre-trained model.

  4. When the victim loads the model using Keras with safe_mode=True, the security check is bypassed, and the malicious SavedModel is loaded.

  5. During model inference, the attacker's code executes with the victim's privileges, potentially leading to data exfiltration, system compromise, or lateral movement.

The vulnerability mechanism involves the TFSMLayer class bypassing safe_mode protections during deserialization. For technical implementation details, see the GitHub Keras Commit Update and the Huntr Security Bounty Listing.
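To illustrate step 2 of the flow concretely: a Keras 3 .keras file is a zip archive whose config.json describes the model graph, so a malicious one can be assembled with nothing but the Python standard library. The field names below mirror TFSMLayer's filepath and call_endpoint arguments, but the overall config layout is illustrative rather than a byte-accurate dump of a real archive, and "payload_savedmodel" is a made-up path.

```python
import json
import zipfile

# Minimal, illustrative model config embedding a TFSMLayer that points at an
# attacker-controlled SavedModel directory.
malicious_config = {
    "class_name": "Functional",
    "config": {
        "layers": [
            {
                "class_name": "TFSMLayer",
                "config": {
                    "filepath": "payload_savedmodel",   # attacker-chosen path
                    "call_endpoint": "serving_default", # endpoint run at inference
                },
            }
        ]
    },
}

# A .keras archive is a zip; write the config where a real save would put it.
with zipfile.ZipFile("malicious.keras", "w") as zf:
    zf.writestr("config.json", json.dumps(malicious_config))
```

No Keras installation is required on the attacker's side; only the victim needs the vulnerable library for the embedded filepath to be loaded.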

Detection Methods for CVE-2026-1462

Indicators of Compromise

  • Unexpected network connections during model loading operations, particularly to external URLs or suspicious file paths
  • Model files containing TFSMLayer configurations with external or unfamiliar SavedModel references
  • Unusual process spawning or system calls during Keras model inference
  • Presence of .keras model files with embedded or referenced TensorFlow SavedModels from untrusted sources
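The second indicator above can be checked mechanically: since a Keras 3 .keras file is a zip archive containing a config.json, the sketch below walks that config and reports any TFSMLayer entries and the external filepath values they reference. The traversal assumes the standard config.json layout; treat it as a triage aid, not a complete scanner.

```python
import json
import zipfile

def find_tfsm_layers(keras_path):
    """Return the SavedModel filepaths referenced by TFSMLayer entries
    in a .keras archive's config.json."""
    with zipfile.ZipFile(keras_path) as zf:
        config = json.loads(zf.read("config.json"))

    hits = []

    def walk(node):
        # Recursively visit nested dicts/lists in the model config.
        if isinstance(node, dict):
            if node.get("class_name") == "TFSMLayer":
                hits.append(node.get("config", {}).get("filepath"))
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for value in node:
                walk(value)

    walk(config)
    return hits
```

Any non-empty result warrants manual review of the referenced paths before the model is ever loaded.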

Detection Strategies

  • Monitor and audit all model file loading operations in machine learning pipelines for unexpected external references
  • Implement file integrity checking for .keras model files before loading, comparing against known-good hashes
  • Deploy endpoint detection to identify suspicious process behavior during Python/TensorFlow execution contexts
  • Use SentinelOne's behavioral AI to detect anomalous code execution patterns during model inference operations
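The file-integrity idea above can be sketched in a few lines: pin a SHA-256 digest per approved model file and refuse anything that does not match. The manifest format here is an assumption for illustration, not a SentinelOne or Keras feature.

```python
import hashlib

def sha256_of(path):
    """Stream a file and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path, manifest):
    """Allow loading only when the file's digest matches the pinned manifest."""
    expected = manifest.get(path)
    return expected is not None and sha256_of(path) == expected
```

A model that fails verification should be quarantined rather than loaded, since a digest mismatch may indicate tampering as well as simple corruption.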

Monitoring Recommendations

  • Enable verbose logging for Keras model loading operations to capture file paths and external references
  • Implement network monitoring to detect unexpected outbound connections during model deserialization
  • Establish baseline behavior for ML inference workloads and alert on deviations
  • Monitor for unauthorized file system access patterns during model loading

How to Mitigate CVE-2026-1462

Immediate Actions Required

  • Upgrade Keras to a patched version that addresses the TFSMLayer safe_mode bypass vulnerability
  • Audit all existing .keras model files in production and development environments for TFSMLayer usage
  • Implement strict model provenance tracking and only load models from trusted, verified sources
  • Consider isolating model loading operations in sandboxed environments with restricted privileges

Patch Information

A security fix has been committed to the Keras repository. Organizations should update to the latest patched version of Keras. The fix addresses the validation gap in the from_config() method and ensures that safe_mode protections are properly enforced for TFSMLayer components.

For detailed patch information, refer to the GitHub Keras Commit Update.

Workarounds

  • Avoid loading .keras model files from untrusted or unverified sources until patched
  • Manually inspect model configurations for TFSMLayer components before loading
  • Run model loading operations in isolated containers or virtual environments with minimal privileges
  • Implement network isolation for model loading processes to prevent external SavedModel retrieval
bash
# Verify the installed Keras version (3.13.0 is affected)
pip show keras | grep Version
python -c "import keras; print(keras.__version__)"

# Run model loading in an isolated, network-less container with a read-only model volume
docker run --rm --network=none -v /path/to/models:/models:ro keras-sandbox python load_model.py


Vulnerability Details

  • Type: RCE
  • Vendor/Tech: Keras
  • Severity: HIGH
  • CVSS Score: 8.8
  • CVSS Vector: CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
  • EPSS Probability: 0.06%
  • Known Exploited: No

Impact Assessment

  • Confidentiality: High
  • Integrity: High
  • Availability: High

CWE References

  • CWE-502 (Deserialization of Untrusted Data)

Technical References

  • GitHub Keras Commit Update
  • Huntr Security Bounty Listing

Related CVEs

  • CVE-2025-49655: Keras Framework RCE Vulnerability
  • CVE-2024-3660: Keras Framework RCE Vulnerability
  • CVE-2024-55459: Keras RCE Vulnerability
  • CVE-2025-1550: Keras Model RCE Vulnerability