SentinelOne CVE Vulnerability Database

CVE-2025-1550: Keras Model RCE Vulnerability

CVE-2025-1550 is a remote code execution flaw in Keras Model.load_model that bypasses safe_mode protections, allowing attackers to execute arbitrary code via malicious .keras archives. This article covers technical details, affected versions, impact, and mitigation strategies.

Updated: January 22, 2026

CVE-2025-1550 Overview

CVE-2025-1550 is an arbitrary code execution vulnerability in the Keras deep learning library's Model.load_model function. The vulnerability allows attackers to execute arbitrary code even when safe_mode=True is explicitly set, bypassing the intended security protections. By crafting a malicious .keras archive with an altered config.json file, an attacker can specify arbitrary Python modules and functions along with their arguments to be loaded and executed during the model loading process.

Critical Impact

Attackers can achieve arbitrary code execution on systems that load untrusted Keras model files, potentially leading to complete system compromise, data exfiltration, or lateral movement within machine learning infrastructure.

Affected Products

  • Keras versions prior to 3.9.0 (the release containing the fix)
  • Applications using keras.Model.load_model() with untrusted model files
  • Machine learning pipelines that process external .keras archives

Discovery Timeline

  • 2025-03-11 - CVE-2025-1550 published to NVD
  • 2025-07-31 - Last updated in NVD database

Technical Details for CVE-2025-1550

Vulnerability Analysis

This vulnerability is classified as CWE-94 (Improper Control of Generation of Code - Code Injection). The flaw exists in how Keras processes the config.json file within .keras archive files during model loading operations.

The Keras library provides a safe_mode parameter intended to prevent arbitrary code execution when loading model files from untrusted sources. However, the implementation fails to adequately sanitize the configuration data, allowing attackers to bypass this protection entirely. When a model is loaded, the configuration file is parsed and its contents are used to instantiate Python objects, including specifying which modules and functions to import and execute.

The vulnerability requires local access to place a malicious model file where it will be loaded, and some user interaction is typically needed to trigger the model loading operation. However, in automated ML pipelines that process uploaded model files, this could be exploited without direct user intervention.

Root Cause

The root cause stems from insufficient validation of the config.json contents within .keras archives. The model loading mechanism allows specification of arbitrary Python modules and functions in the configuration, which are then dynamically imported and executed. The safe_mode=True parameter was intended to restrict this behavior but fails to properly block all malicious configurations.

The deserialization process trusts the configuration data to specify legitimate Keras model components, but attackers can manipulate this to reference arbitrary Python code paths, effectively turning model loading into a code execution primitive.
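The general shape of this primitive can be sketched in a few lines. The snippet below is an illustrative simplification of config-driven deserialization, not Keras's actual implementation: a loader that resolves module and function names taken directly from untrusted configuration will happily import and call anything the attacker names.

```python
import importlib

def deserialize(config):
    # Simplified sketch of the vulnerable pattern: the config dict, which an
    # attacker controls, decides what gets imported and called.
    module = importlib.import_module(config["module"])
    fn = getattr(module, config["class_name"])
    return fn(*config.get("args", []))

# A benign config resolves an ordinary library function...
print(deserialize({"module": "math", "class_name": "sqrt", "args": [9]}))  # 3.0
# ...but nothing in this pattern stops a config from naming os.system,
# subprocess.run, or any other dangerous callable instead.
```

A safe loader would instead resolve names against a fixed allowlist of known model components rather than importing whatever the configuration specifies.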

Attack Vector

The attack requires local access (AV:L) with low attack complexity. An attacker must:

  1. Create a valid .keras archive structure
  2. Modify the config.json file to include references to arbitrary Python modules and functions
  3. Specify malicious arguments that will be passed to the imported functions
  4. Deliver the malicious archive to a target system where it will be loaded

When a victim or automated system calls Model.load_model() on the malicious archive, the specified Python code executes regardless of the safe_mode setting. This could occur in scenarios such as:

  • Data scientists loading models shared by collaborators
  • ML platforms processing user-uploaded model files
  • Automated training pipelines loading checkpoint files

The attack exploits the trust relationship between the Keras library and its model file format, bypassing the security controls that users expect safe_mode=True to provide.

Detection Methods for CVE-2025-1550

Indicators of Compromise

  • Unexpected Python process execution when loading .keras model files
  • Suspicious entries in config.json files within .keras archives referencing non-Keras Python modules
  • Network connections or file system modifications initiated during model loading operations
  • Unusual import statements in Python process memory during Keras operations

Detection Strategies

  • Monitor for .keras archive files containing config.json entries that reference modules outside the expected Keras namespace
  • Implement file integrity monitoring on model storage directories to detect unauthorized modifications
  • Use application-level logging to track all Model.load_model() calls and their source file paths
  • Deploy behavioral analysis to identify anomalous activity during ML model loading operations
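The first strategy above can be automated. Since a .keras archive is a ZIP file containing config.json, a scanner can walk the configuration and flag any `module` value outside an allowlist. This is a minimal sketch; the allowlist prefixes are an assumption to tune for your environment.

```python
import io
import json
import zipfile

ALLOWED_PREFIXES = ("keras", "tensorflow")  # assumed allowlist; adjust per environment

def find_suspicious_modules(keras_archive_bytes):
    """Return all 'module' values in config.json that fall outside the allowlist."""
    with zipfile.ZipFile(io.BytesIO(keras_archive_bytes)) as zf:
        config = json.loads(zf.read("config.json"))
    suspicious = []
    def walk(node):
        if isinstance(node, dict):
            mod = node.get("module")
            if isinstance(mod, str) and not mod.startswith(ALLOWED_PREFIXES):
                suspicious.append(mod)
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for value in node:
                walk(value)
    walk(config)
    return suspicious

# Build a toy archive with one benign and one suspicious module reference:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("config.json", json.dumps({
        "layers": [{"module": "keras.layers"}, {"module": "subprocess"}]
    }))
print(find_suspicious_modules(buf.getvalue()))  # ['subprocess']
```

A non-empty result should quarantine the file for manual review before any call to Model.load_model().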

Monitoring Recommendations

  • Enable verbose logging for Keras operations in production ML pipelines
  • Implement sandbox execution for loading untrusted model files
  • Monitor for unusual Python imports or subprocess executions correlated with model loading events
  • Establish baseline behavior for ML infrastructure and alert on deviations
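Tracking load_model() calls and their source paths can be done with a thin wrapper. The sketch below assumes you patch the loader at application startup; the logger name and the keras attribute path are illustrative.

```python
import functools
import logging

log = logging.getLogger("ml.model_loading")

def audited(load_fn):
    """Wrap a model-loading callable so every call is logged with its source path."""
    @functools.wraps(load_fn)
    def wrapper(filepath, *args, **kwargs):
        log.info("loading model from %s", filepath)
        return load_fn(filepath, *args, **kwargs)
    return wrapper

# Usage sketch (assumes the keras package is installed):
#   keras.models.load_model = audited(keras.models.load_model)

# Demonstration with a stand-in loader:
fake_load = audited(lambda path: f"model from {path}")
print(fake_load("checkpoints/model.keras"))  # model from checkpoints/model.keras
```

The resulting log stream gives the audit trail that the file-path monitoring recommendation above depends on.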

How to Mitigate CVE-2025-1550

Immediate Actions Required

  • Update Keras to the patched version that addresses this vulnerability
  • Audit all sources of .keras model files and establish trusted provenance
  • Avoid loading model files from untrusted or unverified sources
  • Implement input validation and sandboxing for ML model loading operations
  • Review any model files received from external parties for suspicious configuration entries

Patch Information

The Keras development team has addressed this vulnerability through Pull Request #20751. Organizations should update their Keras installations to the patched version as soon as possible.

For detailed technical analysis of the vulnerability, refer to the Tower of Hanoi CVE Writeup.

Workarounds

  • Only load .keras model files from trusted, verified sources until patching is complete
  • Manually inspect the config.json file within any .keras archive before loading, checking for references to unexpected Python modules
  • Run model loading operations in isolated container environments with minimal privileges
  • Implement network isolation for systems that process external model files to limit the impact of potential exploitation
# Inspect a .keras archive before loading:
# extract and examine config.json for suspicious module references.
unzip -p model.keras config.json | python -m json.tool

# Look for module references that are not from keras, tensorflow, or other expected ML libraries.
# Suspicious entries may include references to os, subprocess, sys, or other dangerous modules.

Disclaimer: This content was generated using AI. While we strive for accuracy, please verify critical information with official sources.

Vulnerability Details

  • Type: RCE
  • Vendor/Tech: Keras
  • Severity: HIGH
  • CVSS Score: 7.3
  • EPSS Probability: 4.78%
  • Known Exploited: No
  • CVSS Vector: CVSS:4.0/AV:L/AC:L/AT:P/PR:L/UI:A/VC:H/VI:H/VA:H/SC:H/SI:H/SA:H/E:X/CR:X/IR:X/AR:X/MAV:X/MAC:X/MAT:X/MPR:X/MUI:X/MVC:X/MVI:X/MVA:X/MSC:X/MSI:X/MSA:X/S:X/AU:X/R:X/V:X/RE:X/U:X

Impact Assessment

  • Confidentiality: Low
  • Integrity: High
  • Availability: High

CWE References

  • CWE-94

Technical References

  • Tower of Hanoi CVE Writeup

Vendor Resources

  • GitHub Pull Request

Related CVEs

  • CVE-2025-49655: Keras Framework RCE Vulnerability
  • CVE-2024-3660: Keras Framework RCE Vulnerability
  • CVE-2024-55459: Keras RCE Vulnerability
  • CVE-2026-1669: Keras File Read Vulnerability
©2026 SentinelOne, All Rights Reserved.