
CVE-2024-3660: Keras Framework RCE Vulnerability

CVE-2024-3660 is a remote code execution flaw in TensorFlow's Keras framework (versions before 2.13) that lets attackers run arbitrary code through maliciously crafted model files. This article covers the technical details, affected versions, and mitigation steps.

Updated: January 22, 2026

CVE-2024-3660 Overview

CVE-2024-3660 is an arbitrary code injection vulnerability affecting TensorFlow's Keras framework in versions prior to 2.13. By supplying a malicious machine learning model, an attacker can execute arbitrary code with the same permissions as the application that loads it. The code runs regardless of what the application intends to do with the model, which makes the flaw particularly dangerous in environments where untrusted models may be loaded.

Critical Impact

Attackers can achieve arbitrary code execution by crafting malicious Keras models, potentially compromising entire machine learning pipelines and underlying infrastructure with full application privileges.

Affected Products

  • Keras versions prior to 2.13
  • TensorFlow installations using vulnerable Keras components
  • Applications loading untrusted or third-party Keras models

Discovery Timeline

  • April 16, 2024 - CVE-2024-3660 published to NVD
  • September 23, 2025 - Last updated in NVD database

Technical Details for CVE-2024-3660

Vulnerability Analysis

This vulnerability falls under CWE-94 (Improper Control of Generation of Code - Code Injection). The flaw exists in how Keras handles model deserialization, allowing malicious code embedded within model files to execute during the loading process. When an application loads a compromised Keras model, the attacker's payload executes with the full privileges of the host application.

The network-accessible nature of this vulnerability means that applications serving or processing models from external sources—such as model repositories, user uploads, or remote APIs—are particularly at risk. No authentication is required, and no user interaction is necessary beyond the application loading the malicious model.

Root Cause

The root cause stems from insufficient validation and sandboxing during Keras model deserialization. Keras model files (typically in .h5 or SavedModel format) can contain Lambda layers or custom objects that execute arbitrary Python code when the model is loaded. Prior to version 2.13, Keras did not adequately restrict or sanitize these code execution pathways, allowing attackers to embed malicious payloads that execute automatically during keras.models.load_model() or similar operations.

Attack Vector

The attack vector leverages the model loading functionality in Keras applications. An attacker can craft a malicious model file containing embedded code within Lambda layers, custom layer definitions, or serialized Python objects. When a victim application loads this model—whether from a file share, model repository, API endpoint, or user upload—the malicious code executes immediately.

Attack scenarios include:

  • Uploading malicious models to public model repositories
  • Compromising ML pipelines that process third-party models
  • Man-in-the-middle attacks substituting legitimate models with malicious ones
  • Social engineering developers to test or evaluate poisoned models

The vulnerability mechanism exploits Keras's Lambda layer functionality and custom object deserialization. When a model containing malicious code is loaded using functions like keras.models.load_model(), the embedded payload executes during the deserialization process. This occurs because Keras relies on Python's serialization mechanisms without adequate sandboxing. For detailed technical analysis, see the CERT Vulnerability Advisory #253266.
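To make the deserialization risk concrete, the sketch below uses only the Python standard library to mimic what Keras 2's Lambda serialization does under the hood: the function's bytecode is marshaled into the saved config and rebuilt into a live function on load. The exact encoding and helper names in Keras differ, so treat this as an illustration of the mechanism rather than the library's actual code path:

```python
import base64
import marshal
import types

side_effects = []

# "Model author" side: a Lambda-style function whose bytecode gets
# serialized into the saved model. An attacker can put ANY Python
# here - spawning processes, exfiltrating data, and so on.
def layer_fn(x):
    side_effects.append("payload ran")  # stand-in for a real payload
    return x

blob = base64.b64encode(marshal.dumps(layer_fn.__code__))

# "Victim" side: loading the model rebuilds a live function from the
# attacker-controlled bytecode and runs it with the application's
# full privileges - no validation, no sandbox.
code = marshal.loads(base64.b64decode(blob))
rebuilt = types.FunctionType(code, globals(), "rebuilt")
rebuilt(42)

print(side_effects)  # ['payload ran']
```

Python's marshal format carries no integrity or provenance information, which is exactly why deserializing function bytecode from an untrusted file is equivalent to running untrusted code.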

Detection Methods for CVE-2024-3660

Indicators of Compromise

  • Unexpected process spawning or network connections originating from ML application processes
  • Suspicious Lambda layers or custom objects within Keras model files containing encoded or obfuscated code
  • Anomalous file system activity during model loading operations
  • Unexpected system calls or privilege escalation attempts from Python/TensorFlow processes

Detection Strategies

  • Implement static analysis scanning of model files before loading to detect suspicious Lambda layers or custom objects
  • Monitor application behavior during model loading operations for unexpected code execution patterns
  • Deploy runtime application self-protection (RASP) solutions to detect code injection attempts
  • Audit model provenance and implement integrity verification for all loaded models
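As a starting point for the static-analysis idea above, here is a minimal stdlib-only sketch that inspects the modern .keras archive format (a zip whose config.json describes the layer graph) for layer classes that can carry executable code. The function name and the SUSPICIOUS set are illustrative, not part of any vendor tooling, and legacy .h5 files would need an equivalent walk of the model config via h5py:

```python
import json
import zipfile

SUSPICIOUS = {"Lambda", "TFOpLambda"}  # layer classes that can embed code

def scan_keras_archive(path):
    """Return the suspicious layer class names found in a .keras file.

    The .keras format is a zip archive whose config.json holds the
    full layer graph; Lambda-style layers are the ones that can embed
    executable code, so their presence warrants manual review before
    the model is ever loaded.
    """
    with zipfile.ZipFile(path) as zf:
        config = json.loads(zf.read("config.json"))
    found = []

    def walk(node):
        if isinstance(node, dict):
            if node.get("class_name") in SUSPICIOUS:
                found.append(node["class_name"])
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(config)
    return found
```

A pre-load gate like this cannot prove a model safe, but it cheaply flags the layer types that make code execution possible so they can be reviewed or rejected.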

Monitoring Recommendations

  • Enable comprehensive logging for all model loading operations in production environments
  • Implement file integrity monitoring on model storage locations
  • Configure alerting for unusual process behavior from ML application containers or services
  • Monitor network connections initiated by ML workloads for unexpected outbound communications
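File integrity monitoring on model storage can be as simple as a hash manifest. The sketch below (verify_manifest is an illustrative helper, not a SentinelOne or Keras API) flags any model file whose SHA-256 no longer matches a trusted baseline, which catches the model-substitution scenarios described earlier:

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """Stream a file in chunks and return its hex SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(model_dir, manifest):
    """Return relative paths under model_dir whose digest is missing
    from, or different to, the trusted manifest {rel_path: sha256}."""
    root = Path(model_dir)
    drift = []
    for p in sorted(root.rglob("*")):
        if p.is_file():
            rel = p.relative_to(root).as_posix()
            if manifest.get(rel) != sha256_of(p):
                drift.append(rel)
    return drift
```

In practice the manifest would be generated at publish time, stored separately from the models, and checked on a schedule or before every load.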

How to Mitigate CVE-2024-3660

Immediate Actions Required

  • Upgrade Keras to version 2.13 or later immediately across all environments
  • Audit all currently deployed Keras models for suspicious Lambda layers or custom objects
  • Implement model provenance verification to ensure only trusted models are loaded
  • Restrict model loading sources to verified, trusted repositories only

Patch Information

The vulnerability is addressed in Keras version 2.13 and later. Organizations should update their TensorFlow/Keras installations to the latest stable version. For environments where immediate upgrades are not feasible, implement the workarounds below and prioritize upgrade planning.
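Before any model is loaded, an application can gate on the running Keras version. is_patched below is a hypothetical startup check (you would feed it keras.__version__); it uses plain string parsing so the guard itself stays dependency-free:

```python
def is_patched(version, floor=(2, 13)):
    """Return True if a Keras version string is at or above the fixed
    2.13 release. Keeps only the leading digits of each component so
    suffixes like '2.13.0rc1' still parse."""
    parts = []
    for token in version.split(".")[:len(floor)]:
        digits = ""
        for ch in token:
            if not ch.isdigit():
                break
            digits += ch
        parts.append(int(digits or 0))
    return tuple(parts) >= floor
```

A guard like this refuses to proceed on 2.12.x and earlier while accepting 2.13+ and the 3.x line, turning the upgrade requirement into an enforced precondition rather than a policy note.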

For additional guidance, refer to the CERT Vulnerability Advisory #253266.

Workarounds

  • Avoid loading models from untrusted or unverified sources until patched
  • Use safe_mode=True parameter when loading models (available in newer Keras versions) to disable Lambda layer execution
  • Implement model sandboxing by loading untrusted models in isolated container environments with restricted privileges
  • Perform manual code review of model files before deployment, specifically examining Lambda layers and custom objects
bash
# Upgrade Keras to the patched version (quote the requirement so the
# shell does not treat ">=" as a redirection)
pip install --upgrade "keras>=2.13"

# Verify the installed version
python -c "import keras; print(keras.__version__)"

# For TensorFlow-integrated environments
pip install --upgrade "tensorflow>=2.13"

# Load untrusted models in an isolated container (Docker example)
docker run --rm --read-only --network none \
  -v /path/to/model:/model:ro \
  your-ml-image python validate_model.py /model/untrusted.h5

Disclaimer: This content was generated using AI. While we strive for accuracy, please verify critical information with official sources.

Vulnerability Details

  • Type: RCE
  • Vendor/Tech: Keras
  • Severity: CRITICAL
  • CVSS Score: 9.8
  • CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
  • EPSS Probability: 0.26%
  • Known Exploited: No

Impact Assessment

  • Confidentiality: High
  • Integrity: High
  • Availability: High

CWE References

  • CWE-94

Technical References

  • CERT Vulnerability Advisory #253266

Related CVEs

  • CVE-2026-1462: Keras Package RCE Vulnerability
  • CVE-2025-49655: Keras Framework RCE Vulnerability
  • CVE-2024-55459: Keras RCE Vulnerability
  • CVE-2025-1550: Keras Model RCE Vulnerability
©2026 SentinelOne, All Rights Reserved.