
CVE-2025-24357: vLLM Library RCE Vulnerability

CVE-2025-24357 is a remote code execution flaw in the vLLM library caused by unsafe pickle deserialization when loading model weights from HuggingFace. This post covers technical details, affected versions, and mitigation steps.

Updated: January 22, 2026

CVE-2025-24357 Overview

CVE-2025-24357 is an insecure deserialization vulnerability in vLLM, a popular library for Large Language Model (LLM) inference and serving. The vulnerability exists in the hf_model_weights_iterator function within vllm/model_executor/weight_utils.py, which uses PyTorch's torch.load function with the weights_only parameter set to False by default. When loading model checkpoints downloaded from HuggingFace, this unsafe configuration allows malicious pickle data to execute arbitrary code during the unpickling process.

Critical Impact

Attackers can achieve remote code execution by crafting malicious model checkpoints that execute arbitrary code when loaded by vLLM, potentially compromising AI/ML infrastructure and sensitive training data.

Affected Products

  • vLLM versions prior to v0.7.0
  • vLLM model inference and serving deployments using HuggingFace model checkpoints
  • Systems utilizing torch.load through vLLM's weight loading utilities

Discovery Timeline

  • 2025-01-27 - CVE-2025-24357 published to NVD
  • 2025-06-27 - Last updated in NVD database

Technical Details for CVE-2025-24357

Vulnerability Analysis

This vulnerability is classified as CWE-502 (Deserialization of Untrusted Data). The core issue stems from Python's pickle serialization format, which is inherently unsafe when handling untrusted data. PyTorch model checkpoints are stored as pickle files, and when torch.load is called without weights_only=True, it allows arbitrary Python objects to be deserialized and instantiated, including objects with malicious __reduce__ methods that execute code during unpickling.

The attack surface is particularly concerning in vLLM's context because model checkpoints are typically downloaded from external sources like HuggingFace Hub. If an attacker can publish a malicious model or compromise an existing one, any vLLM deployment loading that checkpoint would execute the attacker's code with the privileges of the vLLM process.
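The mechanism described above can be shown with a minimal, stdlib-only sketch. The class and payload names below are invented for illustration, and a harmless stand-in is used where a real attacker would reference something like os.system; the point is that the code runs during deserialization, with no cooperation from the loading code.

```python
import pickle

executed = []

def attacker_payload(marker):
    # Stand-in for arbitrary code an attacker could run.
    executed.append(marker)
    return marker

class MaliciousCheckpoint:
    def __reduce__(self):
        # Tell the unpickler to call attacker_payload("pwned")
        # when this object is reconstructed.
        return (attacker_payload, ("pwned",))

blob = pickle.dumps(MaliciousCheckpoint())

# "Loading the checkpoint" -- the payload fires inside pickle.loads,
# before the caller ever sees the returned object.
pickle.loads(blob)
print(executed)  # the payload already ran during deserialization
```

A pickle-based .pt or .bin checkpoint loaded with torch.load(..., weights_only=False) is subject to exactly this behavior, which is why the fix restricts deserialization rather than trying to sanitize the file.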

Root Cause

The root cause is the use of torch.load() with the default weights_only=False parameter when loading model weights from potentially untrusted sources. According to the PyTorch Documentation for torch.load, pickle is inherently unsafe and can execute arbitrary code during unpickling. The fix requires explicitly setting weights_only=True to restrict deserialization to only tensor data, primitive types, and safe containers.

Attack Vector

The attack is network-based, requiring user interaction to load a malicious model checkpoint. An attacker would need to:

  1. Create or compromise a model repository on HuggingFace or similar platform
  2. Embed malicious pickle payloads in the model checkpoint files (.pt, .bin files)
  3. Wait for victims to download and load the malicious checkpoint using vLLM
  4. Upon loading, the malicious payload executes during torch.load() unpickling

The following patches from the official security fix demonstrate the mitigation:

Patch in vllm/assets/image.py:

python
         """
         image_path = get_vllm_public_assets(filename=f"{self.name}.pt",
                                             s3_prefix=VLM_IMAGES_DIR)
-        return torch.load(image_path, map_location="cpu")
+        return torch.load(image_path, map_location="cpu", weights_only=True)

Source: GitHub Commit Update

Patch in vllm/lora/models.py:

python
                 new_embeddings_tensor_path)
         elif os.path.isfile(new_embeddings_bin_file_path):
             embeddings = torch.load(new_embeddings_bin_file_path,
-                                    map_location=device)
+                                    map_location=device,
+                                    weights_only=True)

         return cls.from_lora_tensors(
             lora_model_id=get_lora_id()

Source: GitHub Commit Update

Detection Methods for CVE-2025-24357

Indicators of Compromise

  • Unexpected process spawning from Python/vLLM processes during model loading operations
  • Network connections initiated during model checkpoint deserialization
  • Unusual file system activity or modifications during vLLM model initialization
  • Suspicious pickle files in model cache directories containing non-standard Python objects

Detection Strategies

  • Audit vLLM deployments for versions prior to v0.7.0 using dependency scanning tools
  • Monitor for torch.load() calls without weights_only=True parameter in application code
  • Implement file integrity monitoring on model checkpoint directories
  • Use runtime application security monitoring to detect pickle deserialization attacks

Monitoring Recommendations

  • Enable verbose logging for model loading operations in vLLM deployments
  • Monitor network egress from ML inference servers for unexpected connections
  • Implement behavioral analysis for processes executing under vLLM service accounts
  • Configure alerts for suspicious Python process activity during model initialization phases

How to Mitigate CVE-2025-24357

Immediate Actions Required

  • Upgrade vLLM to version v0.7.0 or later immediately
  • Audit all custom code for torch.load() calls and ensure weights_only=True is set
  • Review and verify the integrity of all model checkpoints in use
  • Restrict model downloads to trusted and verified sources only
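For the checkpoint-integrity step, one common approach is to pin a SHA-256 digest for each checkpoint when it is first vetted and compare against it before every load. A minimal stdlib sketch (file names and the pinning workflow are illustrative):

```python
import hashlib
import os
import tempfile

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 (checkpoints can be many GB)."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_checkpoint(path: str, pinned_digest: str) -> bool:
    """Compare a file's digest against a value recorded from a trusted copy."""
    return sha256_of(path) == pinned_digest

# Demo with a stand-in "checkpoint" file; real pinned digests would be
# recorded out-of-band when the model is first reviewed.
with tempfile.NamedTemporaryFile(delete=False, suffix=".bin") as f:
    f.write(b"fake model weights")
    demo_path = f.name

pinned = hashlib.sha256(b"fake model weights").hexdigest()
ok = verify_checkpoint(demo_path, pinned)          # True: file matches pin
tampered_ok = verify_checkpoint(demo_path, "0" * 64)  # False: digest mismatch
os.unlink(demo_path)
```

Note that hashing only detects tampering after pinning; it does not help if the upstream checkpoint was malicious to begin with.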

Patch Information

The vulnerability is fixed in vLLM version v0.7.0. The patch adds weights_only=True to all torch.load() calls throughout the codebase. For detailed patch information, refer to:

  • GitHub Pull Request #12366
  • GitHub Security Advisory GHSA-rh4j-5rhw-hr54
  • GitHub Commit Update

Workarounds

  • If immediate upgrade is not possible, manually patch all torch.load() calls to include weights_only=True
  • Implement network segmentation to isolate ML inference infrastructure from sensitive systems
  • Use model scanning tools to detect potentially malicious pickle payloads before loading
  • Consider using safetensors format instead of pickle-based checkpoints where supported
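Conceptually, the manual-patch workaround works because weights_only=True restricts which globals the unpickler may resolve. The same idea can be sketched with a stdlib restricted unpickler (this is not vLLM's or PyTorch's actual implementation, just an illustration of the principle):

```python
import io
import os
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    # Deny every global lookup. Plain data (primitives, lists, dicts)
    # never needs find_class, so benign payloads still load.
    def find_class(self, module, name):
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Plain data deserializes fine...
plain = restricted_loads(pickle.dumps({"layer.weight": [0.1, 0.2]}))

# ...but a payload referencing a callable is rejected before it runs.
class Evil:
    def __reduce__(self):
        return (os.system, ("true",))

blocked = False
try:
    restricted_loads(pickle.dumps(Evil()))
except pickle.UnpicklingError:
    blocked = True
print(plain, blocked)
```

PyTorch's weights_only mode additionally allowlists tensor types and a few safe containers, which is why it can still load real checkpoints.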
bash
# Upgrade vLLM to the patched version (quote the spec so the shell
# does not treat ">" as redirection)
pip install --upgrade "vllm>=0.7.0"

# Verify the installed version
pip show vllm | grep Version

Disclaimer: This content was generated using AI. While we strive for accuracy, please verify critical information with official sources.

Vulnerability Details

  • Type: RCE
  • Vendor/Tech: vLLM
  • Severity: HIGH
  • CVSS Score: 8.8
  • EPSS Probability: 0.27%
  • Known Exploited: No
  • CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H

Impact Assessment

  • Confidentiality: High
  • Integrity: High
  • Availability: High

CWE References

  • CWE-502

Technical References

  • PyTorch Documentation for torch.load

Vendor Resources

  • GitHub Commit Update
  • GitHub Pull Request #12366
  • GitHub Security Advisory GHSA-rh4j-5rhw-hr54

Related CVEs

  • CVE-2026-22778: vLLM ASLR Bypass and RCE Vulnerability
  • CVE-2026-22807: vLLM RCE Vulnerability
  • CVE-2025-62164: vLLM RCE Vulnerability
  • CVE-2025-66448: vLLM RCE Vulnerability