CVE-2025-71225 Overview
A race condition vulnerability has been identified in the Linux kernel's MD (Multiple Devices) RAID1 subsystem. The vulnerability exists in the raid1_reshape() function, where improper synchronization between array freeze operations and I/O error handling can lead to out-of-bounds memory access and memory pool corruption. When raid_disks is updated via sysfs, freeze_array() may unblock prematurely while queued r1bios allocated with the old raid_disks value are still pending release.
Critical Impact
Exploitation of this vulnerability could lead to out-of-bounds memory access in put_all_bios() and memory pool corruption, potentially causing system instability, denial of service, or kernel memory corruption on systems utilizing MD RAID1 arrays.
Affected Products
- Linux Kernel (MD RAID1 subsystem)
- Systems utilizing software RAID1 arrays managed via sysfs
- Linux distributions with affected kernel versions
Discovery Timeline
- 2026-02-18 - CVE-2025-71225 published to NVD
- 2026-02-18 - Last updated in NVD database
Technical Details for CVE-2025-71225
Vulnerability Analysis
The vulnerability stems from a race condition in the Linux kernel's MD RAID1 implementation during array reshape operations. When raid1_reshape() is invoked, it calls freeze_array() before modifying the r1bio memory pool (conf->r1bio_pool) and conf->raid_disks, and calls unfreeze_array() after the update completes. However, the synchronization is insufficient: freeze_array() only waits until the sum of nr_sync_pending and, across all buckets, the difference between nr_pending and nr_queued reaches zero.
The problem manifests when I/O errors occur during this window. When an I/O error happens, nr_queued is incremented and the corresponding r1bio structure is queued to either retry_list or bio_end_io_list. This allows freeze_array() to return before these queued r1bios are properly released, creating a dangerous race condition.
Root Cause
The root cause is improper synchronization in the array freeze mechanism. The freeze_array() function does not account for r1bios that have been queued due to I/O errors but not yet released. This oversight means that when conf->raid_disks and the memory pool are updated, there may still be outstanding r1bios allocated with the old raid_disks value. When free_r1bio() is subsequently called on these stale structures, it accesses memory out of bounds in put_all_bios() and returns incorrectly sized r1bios to the new memory pool.
Attack Vector
The vulnerability can be triggered through the sysfs interface by updating raid_disks on an active RAID1 array while I/O errors occur concurrently. Updating raid_disks via ioctl SET_ARRAY_INFO already suspends the array, but the sysfs path did not implement this protection, leaving a window for exploitation.
The attack scenario involves:
- An attacker or normal system operation triggers a raid_disks update via sysfs on a RAID1 array
- Concurrent I/O operations encounter errors, causing r1bios to be queued
- freeze_array() returns prematurely while queued r1bios are pending
- The memory pool and raid_disks configuration are updated
- When the stale r1bios are freed, out-of-bounds memory access occurs
This vulnerability requires local access and the ability to manipulate RAID array configurations, limiting the attack surface to privileged users or processes with sysfs access to MD device configurations.
Detection Methods for CVE-2025-71225
Indicators of Compromise
- Kernel panic or oops messages referencing free_r1bio(), put_all_bios(), or MD RAID functions
- Slab corruption warnings or memory pool errors in kernel logs related to r1bio_pool
- Unexpected system crashes or hangs when modifying RAID1 array configurations
- Memory corruption indicators in dmesg output during RAID reshape operations
Detection Strategies
- Monitor kernel logs for memory corruption messages involving MD RAID subsystem functions
- Implement audit rules to track sysfs writes to /sys/block/md*/md/raid_disks
- Deploy kernel tracing to detect concurrent I/O errors during array reshape operations
- Use memory debugging tools like KASAN to detect out-of-bounds access in kernel memory
Monitoring Recommendations
- Enable kernel memory debugging features in development and test environments
- Monitor system logs for MD RAID-related errors during configuration changes
- Implement alerting for unexpected RAID array state changes or degradation
- Track kernel module and subsystem errors that may indicate memory corruption
How to Mitigate CVE-2025-71225
Immediate Actions Required
- Apply the kernel patches provided in the referenced git commits
- Use ioctl SET_ARRAY_INFO instead of sysfs for updating raid_disks as it already implements proper array suspension
- Avoid modifying raid_disks on active arrays during high I/O load periods
- Schedule RAID configuration changes during maintenance windows with reduced I/O activity
Patch Information
The Linux kernel maintainers have released patches that suspend the array when updating raid_disks via sysfs, aligning the behavior with the ioctl SET_ARRAY_INFO path. The fix ensures that normal I/O operations that might increase nr_queued during I/O errors are properly blocked before the memory pool and configuration are modified.
Relevant patches are available at:
Workarounds
- Use ioctl interface via mdadm for RAID configuration changes instead of direct sysfs manipulation
- Implement administrative controls to prevent concurrent I/O and configuration changes on MD arrays
- Consider temporarily disabling sysfs-based RAID management on production systems until patched
- Quiesce I/O to the array before making any configuration changes via sysfs