CVE-2026-7141 Overview
A vulnerability was found in vLLM up to and including version 0.19.0, affecting the needs_kv_cache_zeroing property (which relied solely on the has_mamba_layers check) in the file vllm/v1/kv_cache_interface.py of the KV Block Handler component. This uninitialized-resource vulnerability (CWE-908) allows recycled KV cache blocks to retain stale key/value data from prior requests; partial-block tail slots can then leak NaN/Inf values into masked softmax operations.
Critical Impact
Recycled KV cache blocks containing stale data from prior requests may cause data leakage or undefined behavior in FullAttention models when NaN/Inf values propagate through softmax computations.
Affected Products
- vLLM versions up to and including 0.19.0
- Systems using FullAttention models with KV cache recycling
- AI/ML inference deployments utilizing vLLM's v1 architecture
Discovery Timeline
- 2026-04-27 - CVE-2026-7141 published to NVD
- 2026-04-29 - Last updated in NVD database
Technical Details for CVE-2026-7141
Vulnerability Analysis
This vulnerability stems from an uninitialized resource condition in vLLM's KV cache management system. The needs_kv_cache_zeroing property in kv_cache_interface.py was originally designed to only zero recycled KV blocks when Mamba layers were present. However, this logic failed to account for FullAttention models, which also require zeroing of recycled blocks to prevent stale data from contaminating subsequent inference requests.
When recycled KV cache blocks are reused without proper initialization, they can retain key/value tensors from previous requests. In FullAttention models specifically, partial-block tail slots containing residual NaN or Inf values can propagate through the masked softmax computation, potentially causing numerical instability, incorrect inference results, or information disclosure between requests.
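The numerical hazard is straightforward to reproduce: a single NaN in an unmasked slot poisons every probability the softmax emits, because the NaN survives both the exponentiation and the normalizing sum. A minimal pure-Python sketch (toy scores, not vLLM's actual attention kernel):

```python
import math

def masked_softmax(scores, mask):
    # Additive masking: masked positions get -inf, then a standard
    # max-shifted softmax over the remaining slots.
    masked = [s if m else float("-inf") for s, m in zip(scores, mask)]
    mx = max(masked)
    exps = [math.exp(s - mx) for s in masked]
    total = sum(exps)
    return [e / total for e in exps]

# Clean case: the masked tail slot contributes exactly zero probability.
clean = masked_softmax([1.0, 2.0, 3.0, 0.0], [True, True, True, False])

# Stale case: an unmasked slot holds NaN left over in a recycled block;
# the NaN contaminates the normalizer, so every output becomes NaN.
tainted = masked_softmax([1.0, 2.0, float("nan"), 0.0],
                         [True, True, True, False])
```

Note that masking only protects slots the mask actually covers; a stale value in an unmasked partial-block tail slot, as described above, reaches the computation directly.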
The attack has high complexity and is difficult to exploit in practice. It requires network access and the ability to trigger specific model inference patterns that cause the stale cache data to be used in a meaningful way.
Root Cause
The root cause lies in the incomplete conditional logic for determining when KV cache blocks require zeroing. The original implementation checked only for Mamba layer presence (self.has_mamba_layers) when deciding whether to zero recycled blocks. This check was insufficient because FullAttention models with FullAttentionSpec cache groups also require proper initialization of recycled KV blocks to prevent stale data leakage. The fix expands the needs_kv_cache_zeroing property to include both Mamba layers and any FullAttention specification groups.
Attack Vector
The vulnerability is exploitable remotely via network access. An attacker would need to manipulate the inference request patterns to cause KV cache block recycling scenarios where stale data from a previous request could influence subsequent inference operations. The attack complexity is considered high as it requires:
- Knowledge of the vLLM deployment configuration and model architecture
- Ability to send inference requests that trigger specific cache recycling patterns
- Timing and sequencing of requests to exploit the stale cache condition
 @property
 def needs_kv_cache_zeroing(self) -> bool:
-    return self.has_mamba_layers
+    # Recycled blocks may hold stale K/V from prior requests; partial-block
+    # tail slots can leak NaN/Inf into masked softmax (see #39146).
+    return self.has_mamba_layers or any(
+        type(g.kv_cache_spec) is FullAttentionSpec for g in self.kv_cache_groups
+    )
Source: GitHub Commit Update
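For illustration, the patched condition can be modeled with a minimal, self-contained stand-in for the config object. The class names below mirror identifiers from the diff (FullAttentionSpec, kv_cache_groups), but the surrounding structure is a simplification for this sketch, not vLLM's real API:

```python
from dataclasses import dataclass

@dataclass
class FullAttentionSpec:      # stand-in for the vLLM spec class of the same name
    block_size: int

@dataclass
class MambaSpec:              # illustrative non-FullAttention spec
    block_size: int

@dataclass
class KVCacheGroup:
    kv_cache_spec: object

class KVCacheConfig:
    """Simplified model of the config object the patched property lives on."""
    def __init__(self, groups, has_mamba_layers=False):
        self.kv_cache_groups = groups
        self.has_mamba_layers = has_mamba_layers

    @property
    def needs_kv_cache_zeroing(self) -> bool:
        # Patched logic: zero recycled blocks for Mamba layers OR any
        # FullAttention cache group (pre-patch, only the first term existed).
        return self.has_mamba_layers or any(
            type(g.kv_cache_spec) is FullAttentionSpec
            for g in self.kv_cache_groups
        )

cfg = KVCacheConfig([KVCacheGroup(FullAttentionSpec(block_size=16))])
print(cfg.needs_kv_cache_zeroing)   # True after the patch; False before it
```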
Detection Methods for CVE-2026-7141
Indicators of Compromise
- Unexpected NaN or Inf values appearing in model inference outputs
- Inconsistent or anomalous inference results when processing sequential requests
- Memory analysis revealing stale tensor data in recycled KV cache blocks
- Unusual patterns in inference request timing that could indicate exploitation attempts
Detection Strategies
- Monitor vLLM inference logs for numerical instability warnings or NaN/Inf errors in softmax operations
- Implement tensor validation checks to detect propagation of uninitialized or stale values in KV cache
- Review vLLM deployment configurations to identify FullAttention models that may be vulnerable
- Audit request patterns for suspicious sequences that could trigger cache recycling exploitation
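A tensor-validation check of the kind suggested above can be sketched as a simple scan for non-finite values. This is illustrative pure Python operating on nested lists, not a hook into vLLM's internals:

```python
import math

def find_nonfinite(tensor):
    """Walk a nested list of floats and collect (index-path, value) pairs
    for every NaN or Inf encountered."""
    bad = []
    def walk(x, path):
        if isinstance(x, list):
            for i, v in enumerate(x):
                walk(v, path + (i,))
        elif not math.isfinite(x):
            bad.append((path, x))
    walk(tensor, ())
    return bad

# Example: logits with one Inf and one NaN, as stale cache data might produce.
logits = [[0.1, float("inf"), -0.3], [0.2, 0.4, float("nan")]]
issues = find_nonfinite(logits)
for path, value in issues:
    print(f"ALERT: non-finite value {value!r} at index {path} in logits")
```

In a real deployment this check would run against model outputs (or intermediate activations, where accessible) and feed the alerting described in the next section.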
Monitoring Recommendations
- Enable detailed logging for KV cache allocation and deallocation events
- Implement alerts for inference outputs containing NaN or Inf values
- Monitor memory utilization patterns for anomalies in KV block recycling behavior
- Deploy application-level monitoring to track inference request patterns and timing
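Allocation and deallocation logging of the kind recommended above could be layered on with a small monitor. The KVBlockMonitor class and its on_alloc/on_free hooks are hypothetical illustrations, not part of vLLM:

```python
import logging
from collections import Counter

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("kv_cache_monitor")

class KVBlockMonitor:
    """Track free events per block id and flag un-zeroed recycling.
    Purely illustrative: real integration points would depend on the
    deployment's instrumentation."""
    def __init__(self):
        self.recycles = Counter()

    def on_alloc(self, block_id, zeroed):
        if self.recycles[block_id] and not zeroed:
            # A previously used block handed out again without zeroing --
            # exactly the condition this CVE describes.
            log.warning("block %d recycled without zeroing", block_id)
        log.info("alloc block=%d zeroed=%s", block_id, zeroed)

    def on_free(self, block_id):
        self.recycles[block_id] += 1
        log.info("free block=%d", block_id)

mon = KVBlockMonitor()
mon.on_alloc(7, zeroed=True)
mon.on_free(7)
mon.on_alloc(7, zeroed=False)   # emits a warning: recycled without zeroing
```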
How to Mitigate CVE-2026-7141
Immediate Actions Required
- Update vLLM to a patched version containing commit 1ad67864c0c20f167929e64c875f5c28e1aad9fd
- Review all deployed models to identify those using FullAttention specifications
- Implement input validation and output sanitization for inference requests
- Consider temporarily disabling KV cache recycling in critical deployments until patched
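A programmatic version check can complement the pip commands shown later in this advisory. The naive parse() helper here is an assumption for illustration; production code should use packaging.version instead:

```python
from importlib.metadata import PackageNotFoundError, version

# Per this advisory, versions up to and including 0.19.0 are affected.
AFFECTED_MAX = (0, 19, 0)

def parse(v):
    # Naive numeric parse of the first three dotted components; drops
    # non-numeric suffixes like "rc1". Use packaging.version for real checks.
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

def is_affected(installed):
    return parse(installed) <= AFFECTED_MAX

try:
    print("vulnerable" if is_affected(version("vllm")) else "patched")
except PackageNotFoundError:
    print("vllm not installed")
```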
Patch Information
The vulnerability has been addressed in commit 1ad67864c0c20f167929e64c875f5c28e1aad9fd. The patch modifies the needs_kv_cache_zeroing property in vllm/v1/kv_cache_interface.py to include FullAttention models in the zeroing requirement. Organizations should apply this patch or upgrade to a vLLM version that includes this fix. For detailed information, refer to the GitHub Pull Request and the GitHub Issue Discussion.
Workarounds
- Disable KV cache block recycling if operationally feasible (may impact performance)
- Implement custom middleware to manually zero KV cache blocks before reuse
- Isolate inference requests to prevent cross-request data contamination
- Apply network-level controls to limit inference request patterns from untrusted sources
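The manual-zeroing workaround above can be sketched as a wrapper around a block pool. ZeroingBlockPool and its acquire/release methods are hypothetical, not vLLM's actual allocator API; the point is only that a recycled block is cleared in full, tail slots included, before reuse:

```python
class ZeroingBlockPool:
    """Toy block pool that defensively zeroes every recycled block."""
    def __init__(self, block_size):
        self.block_size = block_size
        self.free_blocks = []

    def release(self, block):
        # Returned blocks still contain the prior request's K/V data.
        self.free_blocks.append(block)

    def acquire(self):
        if self.free_blocks:
            block = self.free_blocks.pop()
            # Defensive zeroing: clear stale K/V, including partial-block
            # tail slots that masking would not cover.
            for i in range(len(block)):
                block[i] = 0.0
            return block
        return [0.0] * self.block_size

pool = ZeroingBlockPool(block_size=4)
b = pool.acquire()
b[:] = [1.5, float("nan"), 2.0, 3.0]   # simulate a prior request's K/V
pool.release(b)
recycled = pool.acquire()              # same block object, now fully zeroed
```

Zeroing on every acquire trades a small memset cost for the guarantee that no request can observe another request's residual data.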
# Remediation example
# Check the installed vLLM version (versions up to and including 0.19.0 are affected)
pip show vllm | grep Version
# Apply the patch by updating to a patched version
pip install --upgrade vllm
# Alternatively, apply the specific commit if building from source (run inside a vllm checkout)
git fetch origin
git cherry-pick 1ad67864c0c20f167929e64c875f5c28e1aad9fd