
CVE-2026-22807: vLLM RCE Vulnerability

CVE-2026-22807 is a remote code execution vulnerability in vLLM that allows attackers to execute arbitrary code during model loading. This article covers technical details, affected versions, impact, and mitigation.

Published: January 23, 2026

CVE-2026-22807 Overview

CVE-2026-22807 is a code injection vulnerability affecting vLLM, a popular inference and serving engine for large language models (LLMs). The vulnerability exists in versions 0.10.1 through 0.13.x, where vLLM loads Hugging Face auto_map dynamic modules during model resolution without properly gating on the trust_remote_code configuration parameter. This allows attacker-controlled Python code embedded in a model repository or local path to execute automatically at server startup.

Critical Impact

An attacker who can influence the model repository or path (either via a local directory or a remote Hugging Face repository) can achieve arbitrary code execution on the vLLM host during model load. This occurs before any request handling begins and does not require API access.

Affected Products

  • vLLM versions 0.10.1 through 0.13.x
  • Systems loading models from untrusted Hugging Face repositories
  • Deployments using local model directories from untrusted sources

Discovery Timeline

  • 2026-01-21 - CVE-2026-22807 published to NVD
  • 2026-01-21 - Last updated in NVD database

Technical Details for CVE-2026-22807

Vulnerability Analysis

This vulnerability is classified as CWE-94 (Improper Control of Generation of Code - Code Injection). The flaw resides in vLLM's model loading mechanism, specifically in how it handles Hugging Face's dynamic module system. When vLLM resolves a model, it processes the auto_map configuration from the model's config.json file, which can specify custom Python modules to load.
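
For orientation, an auto_map section in a model's config.json maps transformers auto classes to code files shipped alongside the weights. The fragment below is illustrative only (hypothetical module and class names), rendered as a Python dict:

python
# Illustrative auto_map fragment from a model's config.json, shown as a
# Python dict. Each value is a "<module file stem>.<class name>" reference
# that transformers resolves by importing code from the model repository.
auto_map_example = {
    "auto_map": {
        "AutoConfig": "configuration_custom.CustomConfig",      # hypothetical
        "AutoModelForCausalLM": "modeling_custom.CustomModel",  # hypothetical
    }
}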

The vulnerable code path fails to check whether trust_remote_code is enabled before loading these dynamic modules. This means that even if a user has not explicitly trusted remote code execution, vLLM will still execute arbitrary Python code specified in the model configuration during the initialization phase.

The attack is particularly dangerous because it occurs at server startup—before any authentication, request handling, or security controls are in place. An attacker only needs to convince a victim to load a malicious model, either by:

  1. Hosting a malicious model on Hugging Face
  2. Placing a malicious model in a local directory that will be loaded

Root Cause

The root cause is insufficient access control during dynamic module loading. The trust_remote_code parameter, which is designed to gate the execution of untrusted Python code from model repositories, was not being passed through the call chain to the dynamic module resolution functions. This allowed the auto_map feature to load and execute arbitrary Python modules regardless of the user's security configuration.
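
Conceptually, the remedy is the standard transformers gating pattern: resolve whether remote code is trusted before performing any dynamic import. The sketch below illustrates that pattern under stated assumptions; the two transformers helpers are real APIs, but the wrapper function and its arguments are hypothetical, not vLLM's actual code:

python
# Minimal sketch of gating dynamic module loading on trust_remote_code.
# Only the two transformers helpers are real APIs; the rest is illustrative.
from transformers.dynamic_module_utils import (
    get_class_from_dynamic_module,
    resolve_trust_remote_code,
)

def load_auto_map_class(class_ref: str, model_path: str, trust_remote_code: bool):
    # Raises (or interactively prompts) when repository code would run
    # without the user having opted in via trust_remote_code.
    trust = resolve_trust_remote_code(
        trust_remote_code,
        model_path,
        has_local_code=False,  # assumption: the class exists only as repo code
        has_remote_code=True,
    )
    if not trust:  # defensive; resolve_trust_remote_code already raises here
        raise ValueError(f"{model_path} requires trust_remote_code=True")
    # Safe to import the repository module only after the gate above.
    return get_class_from_dynamic_module(class_ref, model_path)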

Attack Vector

The attack vector is network-based, requiring user interaction to load a malicious model. An attacker can craft a malicious Hugging Face model repository containing:

  1. A config.json file with an auto_map entry pointing to a custom Python module
  2. A Python module file containing malicious code that executes during import

When the victim loads this model using vLLM, the malicious Python code executes with the privileges of the vLLM process, potentially allowing full system compromise.
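
To make those two pieces concrete, a hostile repository pairs an auto_map entry with a module whose top-level statements run at import time. The sketch below is purely illustrative: file, module, and class names are hypothetical, and the payload is a harmless placeholder:

python
# Purely illustrative; all names are hypothetical, payload is benign.
#
# 1) config.json in the hostile repository would contain, e.g.:
#      {"auto_map": {"AutoModelForCausalLM": "evil_module.EvilModel"}}
#
# 2) evil_module.py -- module-level code executes on import, i.e. during
#    vLLM startup, before any authentication or request handling:
import getpass
import socket

# Stand-in for an attacker payload; a real attack would run arbitrary code.
print(f"executed as {getpass.getuser()} on {socket.gethostname()}")

class EvilModel:  # the class the auto_map entry points at
    pass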

The following patch demonstrates the fix applied in vLLM 0.14.0:

diff
                         module,
                         model_config.model,
                         revision=model_config.revision,
+                        trust_remote_code=model_config.trust_remote_code,
                         warn_on_fail=False,
                     )
 

Source: GitHub Commit

The fix ensures that the trust_remote_code parameter is properly propagated to the dynamic module loading functions:

diff
 # SPDX-FileCopyrightText: Copyright contributors to the vLLM project
 import os
 
-from transformers.dynamic_module_utils import get_class_from_dynamic_module
+from transformers.dynamic_module_utils import (
+    get_class_from_dynamic_module,
+    resolve_trust_remote_code,
+)
 
 import vllm.envs as envs
 from vllm.logger import init_logger

Source: GitHub Commit

Detection Methods for CVE-2026-22807

Indicators of Compromise

  • Unexpected Python process spawning during vLLM model loading
  • Unusual network connections originating from vLLM server processes during startup
  • Modified or suspicious config.json files in model directories containing auto_map entries
  • Presence of unfamiliar Python modules in model repositories or local model directories

Detection Strategies

  • Monitor vLLM startup logs for dynamic module loading from untrusted sources
  • Implement file integrity monitoring on model directories to detect unauthorized modifications (a config-audit sketch follows this list)
  • Use application-level logging to track which models are loaded and their sources
  • Deploy network monitoring to identify unexpected outbound connections during model initialization
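
As a concrete starting point for the config audit mentioned above, a short script can walk a model store and report every config.json that declares auto_map. A hedged sketch; the root directory is a placeholder:

python
# Hedged sketch: flag model configs that declare auto_map entries.
# MODEL_ROOT is a placeholder; point it at your deployment's model store.
import json
from pathlib import Path

MODEL_ROOT = Path("/path/to/models")

for config in MODEL_ROOT.rglob("config.json"):
    try:
        data = json.loads(config.read_text())
    except (OSError, json.JSONDecodeError):
        continue  # skip unreadable or malformed files rather than crash
    if "auto_map" in data:
        # Flag for manual review before the model is ever loaded.
        print(f"{config}: auto_map -> {data['auto_map']}")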

Monitoring Recommendations

  • Enable verbose logging for vLLM model loading operations
  • Implement alerts for model loads from non-whitelisted Hugging Face repositories (a minimal allowlist sketch follows this list)
  • Monitor process execution chains to detect suspicious child processes spawned by vLLM
  • Audit trust_remote_code configuration settings across all vLLM deployments
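
One hedged way to implement the repository allowlist mentioned above is a small gate applied before any model identifier reaches vLLM; the allowlist entry is a placeholder to adapt to your organization:

python
# Hedged sketch: reject model identifiers that are not on an approved list
# before they reach vLLM. The entry below is a placeholder example.
ALLOWED_REPOS = {
    "meta-llama/Llama-3.1-8B-Instruct",  # replace with your vetted repos
}

def check_model_source(model_id: str) -> None:
    if model_id not in ALLOWED_REPOS:
        raise PermissionError(f"model {model_id!r} is not on the allowlist")

check_model_source("meta-llama/Llama-3.1-8B-Instruct")  # passes silently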

How to Mitigate CVE-2026-22807

Immediate Actions Required

  • Upgrade vLLM to version 0.14.0 or later immediately
  • Audit all currently loaded models for suspicious auto_map configurations
  • Only load models from trusted and verified sources until patching is complete
  • Review and restrict trust_remote_code settings in existing deployments

Patch Information

The vulnerability has been fixed in vLLM version 0.14.0. The patch ensures that the trust_remote_code parameter is properly respected when loading dynamic modules from model configurations. Organizations should upgrade to the patched version immediately.

For detailed patch information, see:

  • GitHub Pull Request #32194
  • vLLM Release v0.14.0
  • Security Advisory GHSA-2pc9-4j83-qjmr

Workarounds

  • Only load models from trusted, verified sources (official Hugging Face repositories or internally vetted models)
  • Implement network isolation for vLLM servers to limit the impact of potential code execution
  • Run vLLM processes with minimal privileges using containerization or sandboxing
  • Manually inspect config.json files for auto_map entries before loading any new models

The commands below verify the installed version, upgrade to the patched release, and audit model directories for auto_map entries:

bash
# Verify vLLM version before deployment
pip show vllm | grep Version

# Upgrade to patched version (quoted so the shell does not treat ">" as redirection)
pip install --upgrade "vllm>=0.14.0"

# Audit model configurations for auto_map entries
find /path/to/models -name "config.json" -exec grep -l "auto_map" {} \;

Disclaimer: This content was generated using AI. While we strive for accuracy, please verify critical information with official sources.

Vulnerability Details

  • Type: RCE
  • Vendor/Tech: vLLM
  • Severity: HIGH
  • CVSS Score: 8.8
  • CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
  • EPSS Probability: 0.05%
  • Known Exploited: No

Impact Assessment

  • Confidentiality: High
  • Integrity: High
  • Availability: High

CWE References

  • CWE-94

Technical References

  • GitHub Commit Update
  • GitHub Pull Request
  • GitHub Release v0.14.0
  • GitHub Security Advisory GHSA-2pc9-4j83-qjmr

Related CVEs

  • CVE-2026-22778: vLLM ASLR Bypass and RCE Vulnerability
  • CVE-2025-62164: vLLM RCE Vulnerability
  • CVE-2025-66448: vLLM RCE Vulnerability
  • CVE-2025-30165: vLLM Engine RCE Vulnerability