CVE-2024-11392 Overview
CVE-2024-11392 is an insecure deserialization vulnerability in Hugging Face Transformers that allows remote attackers to execute arbitrary code on affected installations. The vulnerability exists within the MobileViTV2 model's handling of configuration files, where improper validation of user-supplied data enables deserialization of untrusted data. User interaction is required to exploit this vulnerability—the target must visit a malicious page or open a malicious file. This vulnerability was tracked as ZDI-CAN-24322.
Critical Impact
Successful exploitation allows remote code execution in the context of the current user, potentially leading to complete system compromise, data theft, or lateral movement within an organization's infrastructure.
Affected Products
- Hugging Face Transformers (all versions prior to the fixed release)
Discovery Timeline
- 2024-11-22 - CVE-2024-11392 published to NVD
- 2025-02-10 - Last updated in NVD database
Technical Details for CVE-2024-11392
Vulnerability Analysis
This vulnerability belongs to the CWE-502 (Deserialization of Untrusted Data) category and affects the MobileViTV2 component of Hugging Face Transformers. The flaw stems from the library's handling of model configuration files, which fails to properly validate user-supplied data before deserialization. When a user loads a maliciously crafted model or configuration file, the attacker-controlled serialized data is processed without adequate security checks, enabling arbitrary code execution.
The attack requires user interaction—either visiting a malicious web page or opening a malicious file—making it a targeted attack vector. Once exploited, the attacker gains code execution privileges equivalent to the current user, which in many machine learning environments may have access to sensitive training data, model weights, and cloud credentials.
Root Cause
The root cause of this vulnerability is the lack of proper validation and sanitization of user-supplied data within configuration file handlers. The MobileViTV2 model implementation deserializes configuration data without verifying its integrity or origin, allowing attackers to inject malicious serialized objects. This is a common pattern in Python-based machine learning frameworks where pickle or similar serialization formats are used without security hardening.
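The danger of this pattern can be seen in miniature with Python's pickle module itself. The sketch below is illustrative only and is not taken from the Transformers codebase: pickle invokes an object's __reduce__ hook during loading, so a crafted payload runs attacker-chosen code the moment the bytes are deserialized, before any validation can occur. A harmless builtin stands in for the attacker's command.

```python
import pickle

class MaliciousConfig:
    """Illustration of a pickle payload; not real Transformers code."""

    def __reduce__(self):
        # pickle calls __reduce__ when serializing and replays the
        # returned callable on load. A real exploit would return
        # something like (os.system, ("...",)); a harmless eval of
        # arithmetic is used here to show the mechanism safely.
        return (eval, ("1 + 1",))

payload = pickle.dumps(MaliciousConfig())

# The callable executes *during* deserialization, before the caller
# gets a chance to inspect or validate the resulting object.
result = pickle.loads(payload)
print(result)  # 2
```

This is why "validating the object after loading" cannot fix the issue: by the time the object exists, the attacker's code has already run.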
Attack Vector
The attack vector is network-based and requires user interaction. An attacker can craft a malicious model repository, configuration file, or web page that, when accessed by a victim using Hugging Face Transformers, triggers the deserialization vulnerability. The attack flow typically involves:
- Attacker creates a malicious model or configuration file containing a serialized payload
- The victim is socially engineered into loading the malicious resource (via a fake model repository, phishing email, or compromised website)
- Upon loading, the Transformers library deserializes the attacker's payload
- Arbitrary code executes in the context of the victim's user account
The vulnerability is particularly dangerous in environments where users frequently download and experiment with community-contributed models from platforms like the Hugging Face Hub.
Detection Methods for CVE-2024-11392
Indicators of Compromise
- Unexpected network connections originating from Python processes running Transformers workloads
- Anomalous process spawning from Python interpreters loading ML models
- Unauthorized file system access or modifications following model loading operations
- Suspicious model configuration files containing obfuscated or encoded data
Detection Strategies
- Monitor for unusual deserialization activities in Python environments, particularly involving pickle, torch.load(), or similar serialization libraries
- Implement file integrity monitoring on model directories and configuration files
- Deploy endpoint detection and response (EDR) solutions to detect post-exploitation behaviors such as command execution or lateral movement
- Audit model loading operations and flag configurations from untrusted sources
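The auditing step above can be partially automated. The following is a hypothetical sketch (file extensions and the cache path are assumptions, not an official Transformers tool) that scans a model cache for artifacts in formats capable of embedding pickled objects, as opposed to the pickle-free safetensors format:

```python
from pathlib import Path

# Extensions that may contain pickled data and thus warrant review.
# This list is an assumption for illustration, not exhaustive.
RISKY_SUFFIXES = {".bin", ".pkl", ".pt", ".pth"}

def find_risky_artifacts(cache_dir: str) -> list[Path]:
    """Return files under cache_dir whose format can embed pickle payloads."""
    root = Path(cache_dir).expanduser()
    if not root.exists():
        return []
    return sorted(
        p for p in root.rglob("*")
        if p.is_file() and p.suffix in RISKY_SUFFIXES
    )

if __name__ == "__main__":
    for path in find_risky_artifacts("~/.cache/huggingface"):
        print(f"review before trusting: {path}")
```

A scan like this does not prove a file is malicious; it only narrows the set of artifacts that deserve manual provenance checks.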
Monitoring Recommendations
- Enable verbose logging for Hugging Face Transformers operations to capture model loading events
- Monitor outbound network traffic from machine learning workloads for unexpected connections
- Implement behavioral analysis for Python processes to detect anomalous execution patterns
- Set up alerts for new or modified files in model cache directories (~/.cache/huggingface/)
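One simple way to implement the cache-directory alerting above is an integrity baseline: hash every file in the cache, then diff against a later snapshot. This is a minimal sketch under the assumption that the cache is small enough to hash in full; production monitoring would use file-integrity tooling instead.

```python
import hashlib
from pathlib import Path

def snapshot(cache_dir: str) -> dict[str, str]:
    """Map each file (relative path) under cache_dir to its SHA-256 digest."""
    root = Path(cache_dir).expanduser()
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*")
        if p.is_file()
    }

def diff_snapshots(before: dict[str, str], after: dict[str, str]):
    """Return (modified, newly added) file sets between two snapshots."""
    changed = {k for k in before if k in after and before[k] != after[k]}
    added = set(after) - set(before)
    return changed, added
```

Running snapshot() after each approved deployment and diffing on a schedule surfaces any model or configuration file that changed outside a known-good update.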
How to Mitigate CVE-2024-11392
Immediate Actions Required
- Only load models from trusted, verified sources on the Hugging Face Hub
- Audit and remove any untrusted or unverified models from your environment
- Implement network segmentation for machine learning workloads to limit blast radius
- Review access controls for systems running Transformers workloads
- Consider running model loading operations in sandboxed or containerized environments
Patch Information
Refer to the Zero Day Initiative Advisory ZDI-24-1513 for the latest patch information and vendor response. Users should update to the latest version of Hugging Face Transformers once a fix is available and monitor the official Hugging Face security channels for updates.
Workarounds
- Avoid loading models or configuration files from untrusted or unverified sources
- Implement strict allowlisting for permitted model repositories and sources
- Run Transformers workloads in isolated environments with limited privileges
- Use containerization with restricted capabilities to limit the impact of potential exploitation
- Disable automatic model downloading in production environments and manually verify models before deployment
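At the application level, several of the workarounds above map onto keyword arguments that from_pretrained already accepts (revision, use_safetensors, trust_remote_code). The sketch below shows one way to centralize those defaults; the repository ID and commit hash are placeholders, and the helper itself is an illustration, not an official API:

```python
# Safer loading defaults for Transformers, collected in one place.
# revision pins an audited commit, use_safetensors refuses
# pickle-based weight files, and trust_remote_code prevents execution
# of code shipped inside the model repository.
SAFE_LOAD_KWARGS = {
    "revision": "<pinned-commit-hash>",  # placeholder: pin a reviewed revision
    "use_safetensors": True,
    "trust_remote_code": False,
}

def load_verified_model(model_id: str):
    """Hypothetical helper: load a model only with the hardened defaults."""
    from transformers import AutoModel  # deferred; assumes transformers is installed
    return AutoModel.from_pretrained(model_id, **SAFE_LOAD_KWARGS)
```

Pinning a revision means that even if a repository on the Hub is later tampered with, deployments keep loading the exact files that were reviewed.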
# Configuration example: running Transformers in a restricted container
docker run --rm -it \
  --read-only \
  --security-opt=no-new-privileges:true \
  --cap-drop=ALL \
  --network=none \
  -v /path/to/verified/models:/models:ro \
  your-ml-container python your_script.py