CVE-2026-22807 Overview
CVE-2026-22807 is a code injection vulnerability affecting vLLM, a popular inference and serving engine for large language models (LLMs). The vulnerability exists in versions 0.10.1 through 0.13.x, where vLLM loads Hugging Face auto_map dynamic modules during model resolution without properly gating on the trust_remote_code configuration parameter. This allows attacker-controlled Python code embedded in a model repository or local path to execute automatically at server startup.
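To make the exposure concrete, the sketch below shows an ordinary offline load using vLLM's Python API on an affected release; the repository name is a placeholder. On vulnerable versions, any auto_map modules shipped with that repository are imported during this call even though trust_remote_code is never enabled.
# Illustrative sketch of a standard model load on an affected vLLM release.
# "some-org/some-model" is a placeholder, not a real repository.
from vllm import LLM

# trust_remote_code defaults to False, yet on vulnerable versions any
# auto_map modules bundled with the model are still imported here.
llm = LLM(model="some-org/some-model")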
Critical Impact
An attacker who can influence the model repository or path (either via a local directory or a remote Hugging Face repository) can achieve arbitrary code execution on the vLLM host during model load. This occurs before any request handling begins and does not require API access.
Affected Products
- vLLM versions 0.10.1 through 0.13.x
- Systems loading models from untrusted Hugging Face repositories
- Deployments using local model directories from untrusted sources
Discovery Timeline
- 2026-01-21 - CVE-2026-22807 published to NVD
- 2026-01-21 - Last updated in NVD database
Technical Details for CVE-2026-22807
Vulnerability Analysis
This vulnerability is classified as CWE-94 (Improper Control of Generation of Code - Code Injection). The flaw resides in vLLM's model loading mechanism, specifically in how it handles Hugging Face's dynamic module system. When vLLM resolves a model, it processes the auto_map configuration from the model's config.json file, which can specify custom Python modules to load.
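For reference, auto_map maps Transformers auto classes to Python files shipped alongside the model; the module and class names in this illustrative excerpt are hypothetical.
{
  "model_type": "custom-model",
  "auto_map": {
    "AutoConfig": "configuration_custom.CustomConfig",
    "AutoModelForCausalLM": "modeling_custom.CustomModelForCausalLM"
  }
}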
The vulnerable code path fails to check whether trust_remote_code is enabled before loading these dynamic modules. This means that even if a user has not explicitly trusted remote code execution, vLLM will still execute arbitrary Python code specified in the model configuration during the initialization phase.
The attack is particularly dangerous because it occurs at server startup—before any authentication, request handling, or security controls are in place. An attacker only needs to convince a victim to load a malicious model, either by:
- Hosting a malicious model on Hugging Face
- Placing a malicious model in a local directory that will be loaded
Root Cause
The root cause is insufficient access control during dynamic module loading. The trust_remote_code parameter, which is designed to gate the execution of untrusted Python code from model repositories, was not being passed through the call chain to the dynamic module resolution functions. This allowed the auto_map feature to load and execute arbitrary Python modules regardless of the user's security configuration.
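Conceptually, the fix restores a gate like the one sketched below before any repository-supplied module is imported. This is not vLLM's actual code; load_custom_class is a hypothetical helper, and only the imported Transformers function is taken from the patch.
# Conceptual sketch of the missing gate; load_custom_class is hypothetical.
from transformers.dynamic_module_utils import get_class_from_dynamic_module


def load_custom_class(class_reference: str, model_path: str,
                      trust_remote_code: bool):
    # Refuse to import repository-provided Python unless the operator has
    # explicitly opted in; vulnerable versions skipped this check.
    if not trust_remote_code:
        raise ValueError(
            f"Loading custom code for {model_path} requires "
            "trust_remote_code=True"
        )
    return get_class_from_dynamic_module(class_reference, model_path)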
Attack Vector
The attack vector is network-based, requiring user interaction to load a malicious model. An attacker can craft a malicious Hugging Face model repository containing:
- A config.json file with an auto_map entry pointing to a custom Python module
- A Python module file containing malicious code that executes during import
When the victim loads this model using vLLM, the malicious Python code executes with the privileges of the vLLM process, potentially allowing full system compromise.
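The danger comes from the fact that Python runs module-level statements at import time. The harmless illustration below stands in for the payload an attacker would ship; the filename matches the hypothetical auto_map excerpt shown earlier.
# modeling_custom.py -- illustrative only; a real attack would place
# arbitrary code here instead of a print statement.
import getpass

# Module-level statements execute as soon as the module is imported,
# i.e. during model resolution at vLLM startup, before any request is served.
print(f"custom model code imported as user {getpass.getuser()}")


class CustomModelForCausalLM:
    # A real repository would define a plausible model class here so the
    # load appears legitimate.
    pass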
The following abridged excerpt from the patch (the added line is marked with +) demonstrates the fix applied in vLLM 0.14.0:
module,
model_config.model,
revision=model_config.revision,
+ trust_remote_code=model_config.trust_remote_code,
warn_on_fail=False,
)
Source: GitHub Commit
As part of the fix, the resolve_trust_remote_code helper is imported from Transformers so that the trust_remote_code setting can be resolved and propagated to the dynamic module loading functions:
# SPDX-FileCopyrightText: Copyright contributors to the vLLM project
import os
-from transformers.dynamic_module_utils import get_class_from_dynamic_module
+from transformers.dynamic_module_utils import (
+ get_class_from_dynamic_module,
+ resolve_trust_remote_code,
+)
import vllm.envs as envs
from vllm.logger import init_logger
Source: GitHub Commit
Detection Methods for CVE-2026-22807
Indicators of Compromise
- Unexpected Python process spawning during vLLM model loading
- Unusual network connections originating from vLLM server processes during startup
- Modified or suspicious config.json files in model directories containing auto_map entries (see the audit sketch after this list)
- Presence of unfamiliar Python modules in model repositories or local model directories
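As a starting point for the config.json indicator above, the sketch below walks a model directory and reports any auto_map entries it finds; MODEL_ROOT is a placeholder path.
# Hypothetical audit sketch: report auto_map entries under a model directory.
import json
from pathlib import Path

MODEL_ROOT = Path("/path/to/models")  # placeholder path

for config_path in MODEL_ROOT.rglob("config.json"):
    try:
        config = json.loads(config_path.read_text())
    except (OSError, json.JSONDecodeError):
        continue
    auto_map = config.get("auto_map")
    if auto_map:
        print(f"{config_path}: auto_map -> {auto_map}")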
Detection Strategies
- Monitor vLLM startup logs for dynamic module loading from untrusted sources
- Implement file integrity monitoring on model directories to detect unauthorized modifications
- Use application-level logging to track which models are loaded and their sources
- Deploy network monitoring to identify unexpected outbound connections during model initialization
Monitoring Recommendations
- Enable verbose logging for vLLM model loading operations
- Implement alerts for model loads from non-whitelisted Hugging Face repositories
- Monitor process execution chains to detect suspicious child processes spawned by vLLM
- Audit trust_remote_code configuration settings across all vLLM deployments
How to Mitigate CVE-2026-22807
Immediate Actions Required
- Upgrade vLLM to version 0.14.0 or later immediately
- Audit all currently loaded models for suspicious auto_map configurations
- Only load models from trusted and verified sources until patching is complete
- Review and restrict trust_remote_code settings in existing deployments
Patch Information
The vulnerability has been fixed in vLLM version 0.14.0. The patch ensures that the trust_remote_code parameter is properly respected when loading dynamic modules from model configurations. Organizations should upgrade to the patched version immediately.
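After upgrading, repositories that legitimately rely on custom code must be opted into explicitly. The snippet below is a sketch using vLLM's offline Python API; the model name is a placeholder for an internally vetted repository.
# Sketch of an explicit opt-in on a patched vLLM release (>= 0.14.0).
# "trusted-org/custom-arch-model" is a placeholder for a vetted repository.
from vllm import LLM

# Custom auto_map code now runs only when the operator explicitly opts in.
llm = LLM(model="trusted-org/custom-arch-model", trust_remote_code=True)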
For detailed patch information, see the GitHub commit referenced above.
Workarounds
- Only load models from trusted, verified sources (official Hugging Face repositories or internally vetted models)
- Implement network isolation for vLLM servers to limit the impact of potential code execution
- Run vLLM processes with minimal privileges using containerization or sandboxing
- Manually inspect config.json files for auto_map entries before loading any new models
# Remediation commands
# Verify vLLM version before deployment
pip show vllm | grep Version
# Upgrade to the patched version (quote the requirement so the shell does not treat ">" as a redirect)
pip install --upgrade "vllm>=0.14.0"
# Audit model configurations for auto_map entries
find /path/to/models -name "config.json" -exec grep -l "auto_map" {} \;