CVE-2026-1839 Overview
A vulnerability in the HuggingFace Transformers library's Trainer class allows arbitrary code execution through insecure deserialization. The _load_rng_state() method in src/transformers/trainer.py (line 3059) calls torch.load() without the weights_only=True parameter. The issue affects all library versions that support torch>=2.2 when run with PyTorch versions below 2.6, where the safe_globals() context manager provides no protection. An attacker can exploit the flaw by supplying a malicious checkpoint file, such as rng_state.pth, which executes arbitrary code when loaded.
Critical Impact
Attackers can achieve arbitrary code execution by providing a malicious PyTorch checkpoint file, potentially leading to complete system compromise in machine learning training environments.
Affected Products
- HuggingFace Transformers library (versions prior to v5.0.0rc3)
- Systems using torch>=2.2 with PyTorch versions below 2.6
- Applications utilizing the Trainer class for model training
Discovery Timeline
- 2026-04-07 - CVE-2026-1839 published to NVD
- 2026-04-07 - Last updated in NVD database
Technical Details for CVE-2026-1839
Vulnerability Analysis
This vulnerability stems from insecure deserialization (CWE-502) in the HuggingFace Transformers library's checkpoint loading mechanism. The _load_rng_state() method uses PyTorch's torch.load() function to deserialize random number generator state files without enabling safe loading restrictions.
PyTorch's torch.load() function by default uses Python's pickle module for deserialization, which is inherently unsafe when processing untrusted data. The pickle protocol allows arbitrary Python code execution during the deserialization process, making any unsanitized torch.load() call a potential code execution vector.
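The deserialization risk is not specific to PyTorch: any object can direct pickle to invoke an arbitrary callable during loading. The sketch below demonstrates the general CWE-502 mechanism with a deliberately harmless callable (`str.upper`); a real exploit embedded in a checkpoint would substitute something like `os.system`. The `Malicious` class is purely illustrative and not taken from any actual exploit.

```python
import pickle

# Illustrative only: __reduce__ tells pickle to call an arbitrary callable
# during deserialization (CWE-502). A real payload would use os.system or
# similar; here the callable is harmless.
class Malicious:
    def __reduce__(self):
        # (callable, args) -- invoked by pickle.loads, not by the victim's code
        return (str.upper, ("attacker-controlled payload ran",))

payload = pickle.dumps(Malicious())
result = pickle.loads(payload)  # the callable executes here, during loading
print(result)
```

This is exactly why `weights_only=True` matters: it restricts deserialization to tensor data and a safelist of types instead of the full pickle protocol.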
While the code wraps the call in a safe_globals() context manager, this protection mechanism is ineffective for PyTorch versions below 2.6. The vulnerability requires local access and user interaction to trigger, as the attacker must convince a victim to load a malicious checkpoint file.
Root Cause
The root cause is the unsafe usage of torch.load() without the weights_only=True parameter in the _load_rng_state() method. When loading checkpoint files such as rng_state.pth, the function deserializes the entire pickle payload, which can contain malicious code objects. The safe_globals() context manager was intended to provide protection but fails to do so on PyTorch versions below 2.6, leaving a significant attack surface for environments using older PyTorch releases.
Attack Vector
The attack requires local access and involves social engineering or supply chain compromise. An attacker can craft a malicious rng_state.pth file containing arbitrary Python code embedded in the pickle payload. When a victim loads this checkpoint—either by resuming training from a compromised checkpoint directory or by downloading a malicious pre-trained model—the code executes with the privileges of the training process.
The following patch demonstrates the security fix applied in version v5.0.0rc3:
```diff
         return
     with safe_globals():
-        checkpoint_rng_state = torch.load(rng_file)
+        check_torch_load_is_safe()
+        checkpoint_rng_state = torch.load(rng_file, weights_only=True)
     random.setstate(checkpoint_rng_state["python"])
     np.random.set_state(checkpoint_rng_state["numpy"])
     torch.random.set_rng_state(checkpoint_rng_state["cpu"])
```
Source: GitHub Commit Update
Detection Methods for CVE-2026-1839
Indicators of Compromise
- Unexpected rng_state.pth or similar checkpoint files appearing in training directories
- Unusual process spawning or network connections originating from Python training processes
- Modified or tampered checkpoint files with unexpected file sizes or creation timestamps
Detection Strategies
- Monitor for calls to torch.load() without the weights_only=True parameter in application logs
- Implement file integrity monitoring on checkpoint directories to detect unauthorized modifications
- Use static code analysis tools to identify insecure torch.load() usage patterns in Python codebases
- Deploy endpoint detection to alert on suspicious child process creation from ML training workloads
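The static-analysis strategy above can be sketched with Python's standard ast module. This is a minimal, illustrative checker, not a full linter: it only flags direct `torch.load(...)` calls that lack a literal `weights_only=True` keyword, and would miss aliased imports or indirect calls.

```python
import ast

def find_unsafe_torch_load(source: str) -> list:
    """Return line numbers of torch.load() calls lacking weights_only=True.

    Simplified check: matches calls of the form torch.load(...) only.
    """
    findings = []
    for node in ast.walk(ast.parse(source)):
        if not isinstance(node, ast.Call):
            continue
        fn = node.func
        is_torch_load = (
            isinstance(fn, ast.Attribute)
            and fn.attr == "load"
            and isinstance(fn.value, ast.Name)
            and fn.value.id == "torch"
        )
        if not is_torch_load:
            continue
        safe = any(
            kw.arg == "weights_only"
            and isinstance(kw.value, ast.Constant)
            and kw.value.value is True
            for kw in node.keywords
        )
        if not safe:
            findings.append(node.lineno)
    return findings

snippet = (
    "state = torch.load(rng_file)\n"
    "model = torch.load(f, weights_only=True)\n"
)
print(find_unsafe_torch_load(snippet))  # flags line 1 only
```

Because the scanner only parses source text, it can be run in CI without installing PyTorch.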
Monitoring Recommendations
- Enable verbose logging for the HuggingFace Transformers library during training operations
- Implement checksum verification for all checkpoint files before loading
- Monitor ML training environments for unexpected file system modifications in checkpoint directories
- Review training job outputs for anomalous behaviors such as unauthorized network connections
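The checksum-verification recommendation can be sketched with the standard hashlib module. This assumes a workflow where a trusted digest is recorded at checkpoint-save time and compared before any resume; the file name and digest store are illustrative, not part of Transformers itself.

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large checkpoints don't fill memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_checkpoint(path: Path, expected_digest: str) -> bool:
    """Compare against a digest recorded when the checkpoint was created."""
    return sha256_of(path) == expected_digest

# Example: record a digest at save time, verify it before resuming training.
ckpt = Path(tempfile.mkdtemp()) / "rng_state.pth"
ckpt.write_bytes(b"dummy checkpoint contents")
trusted = sha256_of(ckpt)
print(verify_checkpoint(ckpt, trusted))  # True while the file is untampered
```

Storing the trusted digests outside the checkpoint directory (or signing them) prevents an attacker who can replace rng_state.pth from also replacing its recorded hash.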
How to Mitigate CVE-2026-1839
Immediate Actions Required
- Upgrade HuggingFace Transformers to version v5.0.0rc3 or later
- Upgrade PyTorch to version 2.6 or later to enable effective safe_globals() protection
- Audit existing checkpoint files for potential tampering before resuming any training jobs
- Implement strict source validation for all pre-trained models and checkpoints
Patch Information
The vulnerability is resolved in HuggingFace Transformers version v5.0.0rc3. The fix adds the weights_only=True parameter to the torch.load() call and introduces additional safety checks via the check_torch_load_is_safe() function. Organizations should update to this version or later immediately.
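The actual check_torch_load_is_safe() implementation is not reproduced here; the core idea, refusing to deserialize untrusted checkpoints on PyTorch builds older than 2.6, can be sketched as a simple version gate. The function and parsing below are hypothetical illustrations, not the library's code.

```python
# Hypothetical version gate -- not the library's actual implementation.
def parse_version(v: str):
    """Reduce a version string like '2.5.1+cu121' to (major, minor)."""
    major, minor = v.split("+")[0].split(".")[:2]
    return int(major), int(minor)

def check_torch_load_is_safe_sketch(torch_version: str) -> None:
    """Raise unless the running PyTorch enforces safe weights-only loading."""
    if parse_version(torch_version) < (2, 6):
        raise RuntimeError(
            "torch.load() is not safe on PyTorch < 2.6; upgrade before "
            "loading untrusted checkpoints"
        )

check_torch_load_is_safe_sketch("2.6.0")  # passes silently on a patched stack
```

Failing closed like this ensures a misconfigured environment aborts before touching an attacker-controlled pickle, rather than after.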
For detailed patch information, refer to the GitHub Commit Update. Additional vulnerability details are available at the Huntr Bug Bounty Listing.
Workarounds
- Manually patch the _load_rng_state() method to add weights_only=True to the torch.load() call
- Avoid loading checkpoint files from untrusted sources until the library is updated
- Isolate ML training environments in sandboxed containers with restricted permissions
- Disable checkpoint resumption functionality if not required for operations
```shell
# Upgrade HuggingFace Transformers to the patched version
# (quote the specifier so the shell does not treat ">=" as a redirect)
pip install --upgrade "transformers>=5.0.0rc3"

# Upgrade PyTorch to version 2.6 or later for additional protections
pip install --upgrade "torch>=2.6"

# Verify installed versions
pip show transformers torch | grep -E "^(Name|Version)"
```