CVE-2026-34756 Overview
CVE-2026-34756 is a Denial of Service vulnerability affecting vLLM, a popular inference and serving engine for large language models (LLMs). The vulnerability exists in the OpenAI-compatible API server in versions 0.1.0 up to, but not including, 0.19.0. Because the n parameter in the ChatCompletionRequest and CompletionRequest Pydantic models has no upper-bound validation, an unauthenticated attacker can send a single HTTP request with an astronomically large n value. Such a request blocks the Python asyncio event loop and triggers an immediate Out-Of-Memory (OOM) crash by allocating millions of request-object copies on the heap before the request ever reaches the scheduling queue.
Critical Impact
A single malicious HTTP request can completely crash vLLM API servers, causing service disruption for all users relying on the LLM inference service.
Affected Products
- vLLM versions 0.1.0 through 0.18.x
- vLLM OpenAI-compatible API server endpoints
- Systems exposing vLLM API endpoints without additional rate limiting or input validation
Discovery Timeline
- 2026-04-06 - CVE-2026-34756 published to NVD
- 2026-04-07 - Last updated in NVD database
Technical Details for CVE-2026-34756
Vulnerability Analysis
This vulnerability is classified under CWE-770 (Allocation of Resources Without Limits or Throttling). The vLLM OpenAI-compatible API server accepts completion requests that include an n parameter specifying how many completions to generate. The vulnerable code paths in both ChatCompletionRequest and CompletionRequest Pydantic models fail to enforce any maximum boundary on this parameter value.
When an attacker submits a request with an extremely large n value (potentially millions or billions), the server attempts to pre-allocate memory structures for each requested completion. This occurs synchronously within the Python asyncio event loop, meaning the entire server becomes unresponsive while attempting the allocation. The memory exhaustion typically triggers an Out-Of-Memory condition that crashes the entire vLLM process.
The attack is particularly dangerous because it can be executed by unauthenticated users, requires only a single HTTP request, and the malicious payload reaches memory allocation routines before any scheduling or rate-limiting mechanisms can intervene.
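The event-loop aspect can be demonstrated in isolation. The sketch below is not vLLM code; it uses a plain list allocation (scaled down to a few million objects) as a stand-in for the per-completion request copies, and shows that a concurrent "heartbeat" coroutine is starved while the synchronous allocation runs on the loop:

```python
import asyncio
import time

# Demonstration of the failure mode: synchronous, allocation-heavy work
# executed directly on the event loop stalls every other coroutine.
# The list allocation stands in for vLLM copying request objects per
# requested completion (scaled down from "millions or billions").
async def heartbeat(results: list) -> None:
    for _ in range(3):
        results.append(time.monotonic())
        await asyncio.sleep(0.05)

async def handle_malicious_request() -> None:
    # No await inside: nothing else can run until this finishes.
    _ = [object() for _ in range(5_000_000)]

async def main() -> list:
    results: list = []
    hb = asyncio.create_task(heartbeat(results))
    await asyncio.sleep(0)            # let the heartbeat start
    await handle_malicious_request()  # blocks the loop while allocating
    await hb
    return results

timestamps = asyncio.run(main())
gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
print("max gap between heartbeats: %.3fs" % max(gaps))
```

With the 0.05 s heartbeat interval, the maximum gap printed is dominated by the allocation time, illustrating how one request delays all concurrent ones.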
Root Cause
The root cause is missing input validation on the n parameter within the Pydantic request models. The ChatCompletionRequest and CompletionRequest classes accept the n field without defining a maximum constraint using Pydantic's Field validator with a le (less than or equal) parameter. This allows arbitrarily large integer values to be processed, leading to resource exhaustion during object instantiation.
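As an illustrative sketch (not the actual vLLM source), the difference between an unbounded and a bounded n field in a Pydantic model looks roughly like this; the model names and the cap of 128 are assumptions for demonstration, not the values used upstream:

```python
from pydantic import BaseModel, Field, ValidationError

# Hypothetical model mirroring the vulnerable pattern: no upper bound on n.
class UnboundedCompletionRequest(BaseModel):
    prompt: str
    n: int = 1  # accepts arbitrarily large values

# Hardened variant: Field(le=...) rejects oversized values at parse time,
# before any per-completion allocation can happen.
class BoundedCompletionRequest(BaseModel):
    prompt: str
    n: int = Field(default=1, ge=1, le=128)  # cap is illustrative

# An absurdly large n passes the unbounded model...
UnboundedCompletionRequest(prompt="hi", n=10**9)

# ...but is rejected by the bounded one during validation.
try:
    BoundedCompletionRequest(prompt="hi", n=10**9)
except ValidationError:
    print("rejected oversized n")
```

Because validation happens when the request model is instantiated, the bounded variant fails fast with an HTTP 422-style validation error rather than ever attempting the allocation.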
Attack Vector
The attack is network-based and requires no authentication or special privileges. An attacker sends a crafted HTTP POST request to the vLLM OpenAI-compatible API endpoint (typically /v1/completions or /v1/chat/completions) with a JSON body containing an extremely large n value.
The malicious request triggers immediate memory allocation before reaching request scheduling or queueing mechanisms. Because the allocation happens within the asyncio event loop, the entire server becomes blocked, affecting all concurrent users. The server either crashes from OOM conditions or becomes completely unresponsive until manually restarted.
No verified code examples are available for this vulnerability. The attack involves sending a standard OpenAI-compatible API request with a maliciously large n parameter value. Technical details can be found in the GitHub Security Advisory.
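For illustration only, a hedged sketch of what such a request body would look like; the endpoint path follows the OpenAI-compatible convention and the model name is a placeholder:

```python
import json

# Hypothetical payload: a standard completion request whose "n" field
# carries an absurdly large value. POSTing this body to an unpatched
# server (e.g. /v1/completions) would trigger the oversized allocation.
payload = {
    "model": "example-model",  # placeholder model name
    "prompt": "hello",
    "n": 10**9,                # the malicious, unbounded parameter
}
body = json.dumps(payload)
print(body)
```

Note that everything except the n value is a perfectly ordinary completion request, which is why generic payload filtering does not catch it.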
Detection Methods for CVE-2026-34756
Indicators of Compromise
- Unexpected vLLM process crashes with Out-Of-Memory errors
- API server suddenly becoming unresponsive to all requests
- Log entries showing completion requests with abnormally large n parameter values
- Memory usage spikes followed by process termination
Detection Strategies
- Monitor incoming API requests for n parameter values exceeding reasonable thresholds (e.g., > 100)
- Implement web application firewall (WAF) rules to inspect and block JSON payloads with excessive n values
- Set up process monitoring to detect sudden vLLM crashes or restarts
- Review access logs for patterns of requests with unusual parameter combinations
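The first detection strategy above can be sketched as a small log filter; the threshold of 100 and the assumption that request bodies are logged as JSON strings are illustrative:

```python
import json

N_THRESHOLD = 100  # illustrative cap; tune to your workload

def flag_suspicious(json_body: str, threshold: int = N_THRESHOLD) -> bool:
    """Return True if a logged request body carries an oversized 'n'."""
    try:
        data = json.loads(json_body)
    except (json.JSONDecodeError, TypeError):
        return False
    n = data.get("n") if isinstance(data, dict) else None
    return isinstance(n, int) and not isinstance(n, bool) and n > threshold

# Scan captured request bodies (stand-ins for real access-log entries).
logged_bodies = [
    '{"model": "m", "prompt": "hi", "n": 2}',
    '{"model": "m", "prompt": "hi", "n": 1000000000}',
    'not json at all',
]
for line in logged_bodies:
    if flag_suspicious(line):
        print("suspicious request:", line)
```

The same predicate can feed a SIEM rule or a cron job over rotated logs; non-JSON lines are deliberately ignored rather than flagged.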
Monitoring Recommendations
- Configure memory usage alerts for vLLM processes to detect allocation spikes before crashes
- Implement request logging that captures the full JSON payload for forensic analysis
- Set up automated service health checks with alerting for API endpoint availability
- Monitor asyncio event loop responsiveness metrics if available
How to Mitigate CVE-2026-34756
Immediate Actions Required
- Upgrade vLLM to version 0.19.0 or later immediately
- Implement API gateway or WAF rules to limit the n parameter to a reasonable maximum value
- Enable rate limiting on API endpoints to reduce impact of potential DoS attempts
- Consider requiring authentication for API access if not already implemented
Patch Information
The vulnerability is fixed in vLLM version 0.19.0. The fix adds proper upper bound validation on the n parameter in both ChatCompletionRequest and CompletionRequest Pydantic models. Organizations should upgrade to this version or later to remediate the vulnerability.
For detailed technical information about the fix, refer to the GitHub commit and pull request.
Workarounds
- Deploy a reverse proxy or API gateway that validates and limits the n parameter before requests reach vLLM
- Implement request body inspection at the load balancer level to reject requests with excessive n values
- Run vLLM instances with memory limits (e.g., using cgroups or container memory limits) to prevent system-wide impact
- Isolate vLLM deployments in containers with automatic restart policies to minimize downtime from crashes
# Example nginx configuration to limit the n parameter
# Add to the location block handling vLLM API requests
# Requires lua-nginx-module (e.g. OpenResty) with the lua-cjson library
location /v1/ {
    # Inspect the JSON body and reject oversized n values
    access_by_lua_block {
        ngx.req.read_body()
        -- Note: returns nil if the body was spooled to a temp file;
        -- raise client_body_buffer_size if large bodies must be checked
        local body = ngx.req.get_body_data()
        if body then
            local cjson = require "cjson.safe"
            local data = cjson.decode(body)
            if type(data) == "table" and type(data.n) == "number"
                    and data.n > 100 then
                ngx.exit(ngx.HTTP_BAD_REQUEST)
            end
        end
    }
    proxy_pass http://vllm_backend;
}
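For deployments not fronted by nginx, the same guard can live in any Python reverse proxy or middleware. A minimal, framework-agnostic sketch follows; the function name and the cap of 100 (mirroring the nginx example) are assumptions:

```python
import json
from typing import Optional, Tuple

MAX_N = 100  # illustrative cap, matching the nginx example above

def check_completion_body(raw_body: bytes) -> Optional[Tuple[int, str]]:
    """Return (status, message) if the request must be rejected,
    or None if it may be forwarded to the vLLM backend."""
    try:
        data = json.loads(raw_body)
    except (json.JSONDecodeError, UnicodeDecodeError):
        return 400, "malformed JSON body"
    if isinstance(data, dict):
        n = data.get("n", 1)
        if not isinstance(n, int) or isinstance(n, bool) or n < 1:
            return 400, "'n' must be a positive integer"
        if n > MAX_N:
            return 400, "'n' exceeds the allowed maximum of %d" % MAX_N
    return None

print(check_completion_body(b'{"prompt": "hi", "n": 1000000000}'))
print(check_completion_body(b'{"prompt": "hi", "n": 2}'))
```

A middleware would call this before proxying and return the error tuple as an HTTP 400 response; requests that omit n entirely pass through unchanged.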

