CVE-2026-25960 Overview
CVE-2026-25960 is a Server-Side Request Forgery (SSRF) protection bypass vulnerability affecting vLLM, an inference and serving engine for large language models (LLMs). This vulnerability allows attackers to circumvent the SSRF protection fix introduced in version 0.15.1 for CVE-2026-24779 through inconsistent URL parsing behavior between the validation layer and the actual HTTP client in the load_from_url_async method.
The core issue stems from a mismatch in URL parsing libraries: the SSRF fix uses urllib3.util.parse_url() for hostname validation, while the actual HTTP requests are made using aiohttp, which internally relies on the yarl library for URL parsing. This parsing inconsistency creates an exploitable gap that attackers can leverage to bypass security controls.
Critical Impact
Attackers can bypass SSRF protections to access internal network resources, potentially exposing sensitive data from internal services, cloud metadata endpoints, or other restricted network locations accessible from the vLLM server.
Affected Products
- vLLM version 0.17.0
- vLLM versions that include the CVE-2026-24779 SSRF fix (introduced in 0.15.1) but predate the patch for this vulnerability
Discovery Timeline
- 2026-03-09 - CVE-2026-25960 published to NVD
- 2026-03-11 - Last updated in NVD database
Technical Details for CVE-2026-25960
Vulnerability Analysis
This SSRF bypass vulnerability (CWE-918) exploits a fundamental architectural weakness in how URL validation is performed versus how URLs are actually processed. The vulnerability exists because two different URL parsing libraries interpret the same URL string differently, creating a security gap.
When a user supplies a URL to the load_from_url_async method, the application uses urllib3.util.parse_url() to extract and validate the hostname against a blocklist of internal addresses. However, the actual HTTP request is performed using aiohttp, which uses the yarl library for URL parsing. Due to differences in how these libraries handle certain URL edge cases (such as backslash-@ parsing), an attacker can craft a URL that passes validation but resolves to a different, potentially restricted host when processed by aiohttp.
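The backslash-@ differential can be illustrated with a minimal, hedged sketch. Here the standard-library `urllib.parse` stands in for urllib3's RFC 3986-style parsing, and a manual backslash-to-slash rewrite mimics the WHATWG-style behavior that browsers apply to "special" schemes; the actual urllib3 and yarl implementations differ in details, so this is an analogy rather than a reproduction of vLLM's code:

```python
# Illustrative sketch of a URL parser differential (NOT vLLM's actual code).
# urllib.parse stands in for urllib3's RFC 3986-style parsing; the backslash
# normalization mimics WHATWG-style clients, where "\" is treated like "/".
from urllib.parse import urlsplit

crafted = r"http://169.254.169.254\@allowed.example/latest/meta-data"

# RFC-style parser: everything before "@" is userinfo, so the hostname
# appears to be the allowlisted host -- validation passes.
print(urlsplit(crafted).hostname)                        # allowed.example

# A WHATWG-style client rewrites "\" to "/", which terminates the authority
# early -- the request actually targets the metadata endpoint.
print(urlsplit(crafted.replace("\\", "/")).hostname)     # 169.254.169.254
```

The same input string yields two different hostnames depending on which parser consumes it, which is exactly the gap a parser-differential SSRF exploits.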
This type of parser differential vulnerability is particularly dangerous because the security check appears to function correctly when tested with the same parser, making it difficult to detect during standard security audits.
Root Cause
The root cause is the use of inconsistent URL parsing implementations between the security validation layer (urllib3.util.parse_url()) and the HTTP client layer (aiohttp with yarl). Different URL parsers may interpret ambiguous or malformed URL components differently, allowing specially crafted URLs to appear safe during validation but resolve to internal hosts during actual request execution.
Attack Vector
The attack vector is network-based and requires low-privileged access. An authenticated attacker can submit maliciously crafted URLs to the vLLM service that exploit the parsing differential between urllib3 and yarl. The crafted URL bypasses the hostname validation check but is interpreted differently by aiohttp, allowing the attacker to force the server to make requests to internal network resources.
This could enable:
- Access to cloud instance metadata services (e.g., http://169.254.169.254/)
- Reconnaissance of internal network services
- Data exfiltration from internal APIs
- Port scanning of internal infrastructure
# Security patch applied in vllm/multimodal/media/connector.py
# The fix ensures the validated URL specification is used for the actual request
# instead of the raw user-supplied URL
 connection = self.connection
 data = connection.get_bytes(
-    url,
+    url_spec.url,
     timeout=fetch_timeout,
     allow_redirects=envs.VLLM_MEDIA_URL_ALLOW_REDIRECTS,
 )
Source: GitHub Commit Change
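The general pattern behind this fix, validate once and hand the client the validated representation rather than the raw string, can be sketched with the standard library. `BLOCKED_HOSTS` and `normalize_and_validate` are illustrative names, not vLLM's actual API:

```python
# Hedged sketch of the "validate once, reuse the validated URL" pattern.
# BLOCKED_HOSTS and normalize_and_validate are hypothetical names.
from urllib.parse import urlsplit, urlunsplit

BLOCKED_HOSTS = {"localhost", "169.254.169.254"}

def normalize_and_validate(raw_url: str) -> str:
    parts = urlsplit(raw_url)
    host = parts.hostname
    if host is None or host in BLOCKED_HOSTS:
        raise ValueError(f"blocked or missing host: {host!r}")
    # Rebuild the authority from the validated hostname, dropping any
    # userinfo, so a second parser cannot reinterpret sequences like "\@".
    netloc = host if parts.port is None else f"{host}:{parts.port}"
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))
```

Passing the rebuilt URL (rather than the raw input) to the HTTP client guarantees the request targets the same host the blocklist check saw, which is the property the `url_spec.url` change restores.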
Detection Methods for CVE-2026-25960
Indicators of Compromise
- Unusual outbound HTTP requests from vLLM servers to internal IP ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16)
- Requests to cloud metadata endpoints such as 169.254.169.254 or cloud-specific metadata URLs
- URL parameters containing backslash characters followed by @ symbols in API requests
- Unexpected connections to internal services that should not be accessed by the vLLM application
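The indicators above can be turned into a simple log-scanning heuristic. The sketch below flags URLs containing a backslash immediately before an `@` and URLs whose host is a literal private or link-local IP; the patterns are assumptions to tune per deployment, not an official detection rule:

```python
# Minimal heuristic for scanning logged request URLs for the IoCs above.
# Patterns are illustrative assumptions; tune them for your log format.
import ipaddress
import re
from urllib.parse import urlsplit

BACKSLASH_AT = re.compile(r"\\+@")  # backslash(es) immediately before "@"

def suspicious(url: str) -> bool:
    if BACKSLASH_AT.search(url):
        return True                  # possible parser-differential probe
    host = urlsplit(url).hostname
    try:
        ip = ipaddress.ip_address(host or "")
    except ValueError:
        return False                 # a hostname, not a literal IP
    return ip.is_private or ip.is_link_local

print(suspicious(r"http://10.0.0.5\@cdn.example/img.png"))    # True
print(suspicious("http://169.254.169.254/latest/meta-data"))  # True
print(suspicious("https://images.example.com/cat.png"))       # False
```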
Detection Strategies
- Implement network monitoring to detect outbound connections from vLLM instances to internal network ranges
- Enable detailed logging of all URLs processed by the load_from_url_async method and analyze for parsing anomalies
- Deploy web application firewall (WAF) rules to detect URLs with unusual character sequences that may indicate parser exploitation attempts
- Monitor for connections to known cloud metadata endpoints from application servers
Monitoring Recommendations
- Configure alerting for any outbound connections from vLLM servers to RFC 1918 private address ranges
- Implement egress filtering and logging to track all external HTTP requests made by the application
- Use SentinelOne Singularity Platform to monitor for anomalous network behavior patterns indicative of SSRF exploitation
- Review application logs for failed validation attempts that may indicate reconnaissance activity
How to Mitigate CVE-2026-25960
Immediate Actions Required
- Update vLLM to the patched version that includes the fix from GitHub Pull Request #34743
- Review the GitHub Security Advisory GHSA-qh4c-xf7m-gxfc for detailed remediation guidance
- Implement network-level controls to restrict outbound connections from vLLM servers to only necessary external hosts
- Audit logs for any evidence of exploitation attempts targeting the vulnerable endpoint
Patch Information
The vulnerability is addressed in the commit 6f3b2047abd4a748e3db4a68543f8221358002c0. The fix modifies vllm/multimodal/media/connector.py to use the validated URL specification (url_spec.url) instead of the raw user-supplied URL when making HTTP requests. This ensures that the same URL representation used during validation is also used for the actual network request, eliminating the parser differential vulnerability.
For detailed patch information, refer to the GitHub Commit Change and GitHub Security Advisory GHSA-v359-jj2v-j536.
Workarounds
- Implement strict egress firewall rules to block outbound connections to internal IP ranges and cloud metadata endpoints from vLLM servers
- Deploy a reverse proxy with URL normalization capabilities in front of the vLLM service to sanitize incoming URLs before they reach the application
- Disable or restrict access to the URL loading functionality if not required for your deployment
- Use network segmentation to isolate vLLM instances from sensitive internal services
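As an application-level complement to the workarounds above, a deployment can resolve each target hostname and refuse any answer inside internal ranges before fetching. This is a hedged sketch using the standard library, not vLLM functionality, and it supplements rather than replaces upgrading to the patched release:

```python
# Hedged sketch of an application-level egress guard: resolve the host and
# fail closed if ANY answer is private, loopback, or link-local.
import ipaddress
import socket

def is_safe_host(host: str, port: int = 80) -> bool:
    try:
        infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    except socket.gaierror:
        return False                  # unresolvable -> reject
    for *_, sockaddr in infos:
        ip = ipaddress.ip_address(sockaddr[0])
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return False              # internal answer -> fail closed
    return bool(infos)
```

Note that a resolve-then-connect check is still exposed to DNS-rebinding races; pinning the vetted IP for the actual connection is the more robust variant.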
# Example iptables rules to restrict outbound connections from vLLM
# Note: these drop ALL local outbound traffic to the listed ranges; if the
# host needs legitimate internal access, scope the rules to the vLLM
# service account (e.g. with -m owner --uid-owner vllm)
# Block connections to private IP ranges
iptables -A OUTPUT -d 10.0.0.0/8 -j DROP
iptables -A OUTPUT -d 172.16.0.0/12 -j DROP
iptables -A OUTPUT -d 192.168.0.0/16 -j DROP
# Block cloud metadata endpoints
iptables -A OUTPUT -d 169.254.169.254/32 -j DROP