CVE-2026-24747 Overview
A critical vulnerability has been identified in PyTorch's weights_only unpickler: an attacker can craft a malicious checkpoint file (.pth) that corrupts memory during deserialization. When a victim loads such a file using torch.load(..., weights_only=True), the memory corruption can be leveraged to execute arbitrary code on the target system.
This vulnerability is particularly concerning in machine learning workflows where model checkpoints are frequently shared between researchers, downloaded from public repositories, or loaded from untrusted sources. The weights_only=True parameter was specifically designed as a security measure to prevent arbitrary code execution during model loading, making this bypass especially impactful.
Critical Impact
Attackers can achieve arbitrary code execution by convincing users to load malicious PyTorch checkpoint files, bypassing the security protections of the weights_only unpickler.
Affected Products
- PyTorch versions prior to 2.10.0
- Applications using torch.load() with weights_only=True parameter
- Machine learning pipelines that load untrusted model checkpoints
Discovery Timeline
- 2026-01-27 - CVE-2026-24747 published to NVD
- 2026-01-29 - Last updated in NVD database
Technical Details for CVE-2026-24747
Vulnerability Analysis
This vulnerability falls under CWE-94 (Improper Control of Generation of Code - Code Injection) and affects PyTorch's model serialization and deserialization functionality. The core issue resides in the weights_only unpickler implementation, which was intended to provide a safer alternative to standard pickle deserialization by restricting the types of objects that can be loaded.
The weights_only mode was introduced as a security feature to mitigate the well-known risks of Python's pickle module, which can execute arbitrary code during deserialization. However, a flaw in the implementation allows attackers to craft checkpoint files that bypass these restrictions, leading to memory corruption that can be leveraged for arbitrary code execution.
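The pickle risk described above can be illustrated with a short, harmless sketch: a pickled object's `__reduce__` method names an arbitrary callable that the unpickler invokes at load time. Here the attacker-controlled callable is replaced with the benign builtin `list`; a real payload would name something like `os.system` instead.

```python
# Benign demonstration of why raw pickle is unsafe: __reduce__ lets a
# pickled object specify any callable to run during unpickling.
import pickle

class Payload:
    def __reduce__(self):
        # A real attack would return e.g. (os.system, ("malicious cmd",));
        # here we substitute the harmless builtin list for demonstration.
        return (list, ("pwn",))

obj = pickle.loads(pickle.dumps(Payload()))
print(obj)  # the unpickler called list("pwn"); no Payload instance exists
```

This is precisely the behavior weights_only mode was meant to block by restricting which callables the unpickler may resolve.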
The vulnerability requires user interaction—specifically, the victim must load a malicious .pth file. However, this is a common operation in machine learning workflows, particularly when downloading pre-trained models from public sources like Hugging Face, GitHub repositories, or model zoos.
Root Cause
The root cause of this vulnerability lies in improper validation and handling within PyTorch's restricted unpickler implementation. The weights_only unpickler fails to properly sanitize certain pickle opcodes or object types during deserialization, allowing specially constructed payloads to corrupt memory structures. This memory corruption can then be weaponized to achieve code execution by manipulating program control flow or overwriting critical data structures.
Attack Vector
The attack vector is network-based, requiring an attacker to distribute a malicious checkpoint file that a victim subsequently downloads and loads. Attack scenarios include:
- Supply Chain Attacks: Compromising model repositories or injecting malicious models into public collections
- Social Engineering: Sharing malicious checkpoint files through forums, emails, or collaboration platforms
- Man-in-the-Middle: Intercepting model downloads and replacing legitimate checkpoints with malicious versions
The exploitation flow involves creating a specially crafted .pth file containing pickle opcodes that trigger the memory corruption vulnerability when processed by the weights_only unpickler. When a user loads this file using torch.load(path, weights_only=True), the malicious payload executes during the deserialization process.
For detailed technical information about the vulnerability mechanism and exploitation techniques, refer to the GitHub Security Advisory GHSA-63cw-57p8-fm3p and the related GitHub Issue #163105.
Detection Methods for CVE-2026-24747
Indicators of Compromise
- Unexpected process spawning or network connections originating from Python/PyTorch processes during model loading operations
- Anomalous system calls or file system access patterns when loading .pth checkpoint files
- Presence of suspicious or unfamiliar .pth files in model directories, especially those with recent modification timestamps
- Memory corruption signatures or segmentation faults during torch.load() operations
Detection Strategies
- Implement file integrity monitoring on directories containing model checkpoints to detect unauthorized modifications
- Monitor PyTorch model loading operations for unusual execution patterns using endpoint detection and response (EDR) solutions
- Deploy network monitoring to identify downloads of checkpoint files from untrusted or suspicious sources
- Use application-level logging to track all torch.load() calls and their source file paths
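The application-level logging suggested above can be sketched as a thin wrapper around the loader. The wrapper pattern is generic and uses only the standard library; applying it to the real torch.load (shown in the comment) assumes PyTorch is installed in your environment.

```python
# Sketch of application-level audit logging for checkpoint-loading calls.
import functools
import logging
import os

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-load-audit")

def audited(load_fn):
    """Wrap a loader so every call logs the resolved source path."""
    @functools.wraps(load_fn)
    def wrapper(path, *args, **kwargs):
        log.info("loading checkpoint: %s (cwd=%s)",
                 os.path.abspath(path), os.getcwd())
        return load_fn(path, *args, **kwargs)
    return wrapper

# In a real pipeline you might apply this as: torch.load = audited(torch.load)
# Stand-in loader so the sketch runs without PyTorch installed:
def fake_load(path, weights_only=True):
    return {"path": path, "weights_only": weights_only}

load = audited(fake_load)
checkpoint = load("model_checkpoint.pth")
```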
Monitoring Recommendations
- Enable verbose logging in machine learning pipelines to capture model loading events with full file paths and source origins
- Implement runtime monitoring for PyTorch applications that alerts on unexpected child process creation or network activity
- Establish baseline behavior for model loading operations and alert on deviations
- Monitor for attempts to load checkpoint files from temporary directories or unusual locations
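A simple location check covering the last point above can be sketched with the standard library alone. The allowlisted directories here are illustrative placeholders; substitute your organization's approved model paths.

```python
# Sketch: flag checkpoint loads from temp or otherwise unapproved directories.
import tempfile
from pathlib import Path

# Hypothetical policy: only these directories may hold loadable checkpoints.
ALLOWED_DIRS = [Path("/opt/models"), Path.home() / "models"]

def is_suspicious_location(path):
    """Return True if `path` resolves outside the approved model directories."""
    resolved = Path(path).resolve()
    tmp = Path(tempfile.gettempdir()).resolve()
    if tmp in resolved.parents:
        return True  # loaded from a temporary directory
    return not any(d.resolve() in resolved.parents for d in ALLOWED_DIRS)
```

Such a check could feed the alerting pipeline described above rather than hard-blocking loads, depending on your operational tolerance for false positives.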
How to Mitigate CVE-2026-24747
Immediate Actions Required
- Upgrade PyTorch to version 2.10.0 or later immediately across all environments
- Audit all model checkpoints currently in use and verify their integrity against known-good hashes
- Implement strict controls on model checkpoint sources, allowing only downloads from verified and trusted repositories
- Review and restrict file system permissions for directories containing model files
Patch Information
PyTorch version 2.10.0 addresses this vulnerability with a fix to the weights_only unpickler implementation. The specific commit addressing the issue is 954dc5183ee9205cbe79876ad05dd2d9ae752139, included in the PyTorch v2.10.0 release.
Organizations should prioritize upgrading to the patched version, especially in environments that process model checkpoints from external sources or untrusted parties.
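A startup guard that refuses to proceed on an unpatched build can help enforce the upgrade. This is a minimal sketch: the version parsing handles only plain numeric dotted releases (ignoring local build suffixes like +cu121), and the torch import is shown in a comment so the sketch runs without PyTorch installed.

```python
# Sketch of a runtime guard against vulnerable PyTorch versions (< 2.10.0).
def version_tuple(v):
    # "2.10.0+cu121" -> (2, 10, 0); build/local suffixes are ignored
    return tuple(int(part) for part in v.split("+")[0].split(".")[:3])

def is_patched(torch_version, fixed="2.10.0"):
    """Return True if the installed version includes the CVE-2026-24747 fix."""
    return version_tuple(torch_version) >= version_tuple(fixed)

# In an application you might check at startup:
#   import torch
#   assert is_patched(torch.__version__), "upgrade PyTorch to >= 2.10.0"
```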
Workarounds
- Avoid loading checkpoint files from untrusted or unverified sources until the patch can be applied
- Implement cryptographic verification (hash checking) for all checkpoint files before loading
- Use isolated environments (containers, sandboxes) when loading checkpoint files from external sources to limit potential impact
- Consider implementing additional deserialization restrictions or custom unpickler overrides as a defense-in-depth measure
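The hash-checking workaround above can be sketched as a gate in front of the loader: compute the file's SHA-256 and refuse to deserialize on mismatch. The expected hash must come from a trusted channel (release notes, a signed manifest); the torch.load call is shown in a comment so this sketch runs without PyTorch installed.

```python
# Sketch: verify a checkpoint's SHA-256 against a known-good value
# before handing it to the deserializer.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in chunks to handle multi-gigabyte checkpoints."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def load_verified(path, expected_sha256):
    if sha256_of(path) != expected_sha256:
        raise ValueError(f"checkpoint hash mismatch for {path}; refusing to load")
    # Only now deserialize (assumes PyTorch is installed):
    #   import torch
    #   return torch.load(path, weights_only=True)
    return path  # placeholder so this sketch runs without torch
```

Note that hash verification only assures provenance; it does not make a malicious file safe, so the hashes themselves must be obtained out of band from a source you trust.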
# Upgrade PyTorch to patched version
pip install --upgrade "torch>=2.10.0"  # quotes prevent the shell from treating >= as redirection
# Verify PyTorch version after upgrade
python -c "import torch; print(f'PyTorch version: {torch.__version__}')"
# Verify checkpoint file integrity before loading (example)
sha256sum model_checkpoint.pth
# Compare output against known-good hash from trusted source
Disclaimer: This content was generated using AI. While we strive for accuracy, please verify critical information with official sources.

