CVE-2024-11393 Overview
CVE-2024-11393 is a critical insecure deserialization vulnerability affecting Hugging Face Transformers, specifically in the MaskFormer model file parsing functionality. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. The flaw stems from the lack of proper validation of user-supplied data during model file parsing, which can result in deserialization of untrusted data. User interaction is required to exploit this vulnerability, as the target must visit a malicious page or open a malicious model file.
Critical Impact
Remote code execution via insecure deserialization in MaskFormer model files allows attackers to execute arbitrary code in the context of the current user, potentially leading to complete system compromise.
Affected Products
- Hugging Face Transformers (all versions prior to the patched release)
- MaskFormer model implementations within the Transformers library
- Applications and pipelines utilizing Hugging Face Transformers for model loading
Discovery Timeline
- 2024-11-22 - CVE-2024-11393 published to NVD
- 2025-02-10 - Last updated in NVD database
Technical Details for CVE-2024-11393
Vulnerability Analysis
This vulnerability is classified as CWE-502 (Deserialization of Untrusted Data). The flaw exists within the parsing mechanism for MaskFormer model files in the Hugging Face Transformers library. When processing model files, the library fails to properly validate user-supplied data before deserialization, creating an opportunity for attackers to inject malicious serialized objects that execute arbitrary code upon deserialization.
The vulnerability requires user interaction for exploitation. An attacker must craft a malicious model file and convince a user to load it, either by visiting a malicious page hosting the model or by directly opening a malicious file. Once the malicious model is loaded, the deserialization process triggers arbitrary code execution in the context of the current user.
Root Cause
The root cause of CVE-2024-11393 is the absence of proper input validation and sanitization when deserializing model file data. The MaskFormer model parser trusts user-supplied data without verifying its integrity or safety, allowing attackers to embed malicious payloads within model files that execute during the deserialization process.
This is a common pattern in machine learning frameworks where model files often contain serialized Python objects (such as through pickle) that can execute arbitrary code when loaded. The lack of sandboxing or validation of these serialized objects creates a significant attack surface.
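The danger described above can be demonstrated with a few lines of standard-library Python. This is a generic, harmless illustration of how pickle deserialization executes attacker-chosen callables, not the MaskFormer code path itself: an object's `__reduce__` method names any importable callable, and `pickle.loads` invokes it before the caller sees any data.

```python
import pickle

# Benign stand-in for an attacker's payload: pickle allows a serialized
# object to name any importable callable that runs during deserialization.
class MaliciousPayload:
    def __reduce__(self):
        # A real payload would return (os.system, ("arbitrary command",));
        # print is used here to keep the demo harmless.
        return (print, ("arbitrary code ran during pickle.loads()",))

blob = pickle.dumps(MaliciousPayload())
pickle.loads(blob)  # the callable executes before any type check can run
```

No vulnerability in pickle itself is required: executing code on load is documented pickle behavior, which is why loading untrusted model files is equivalent to running untrusted code.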
Attack Vector
The attack vector is network-based and requires user interaction. An attacker can exploit this vulnerability through several methods:
- Malicious Model Hosting: Publishing a malicious model file to a public repository or website where users might download and load it
- Social Engineering: Convincing users to load a malicious model file through phishing or other social engineering techniques
- Supply Chain Attack: Compromising a legitimate model repository to inject malicious model files
When a victim loads the malicious model file using the Hugging Face Transformers library, the insecure deserialization process executes the attacker's payload, granting arbitrary code execution with the privileges of the current user.
The vulnerability was tracked by the Zero Day Initiative as ZDI-CAN-25191 and published as ZDI-24-1514.
Detection Methods for CVE-2024-11393
Indicators of Compromise
- Unexpected network connections originating from Python processes running Transformers
- Anomalous file system activity during model loading operations
- Unusual child processes spawned by applications using Hugging Face Transformers
- Suspicious model files with unexpected embedded objects or payloads
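Suspicious embedded objects can often be triaged without executing the file. The sketch below is a hypothetical helper (not part of Transformers or any vendor tooling) that walks a pickle opcode stream with `pickletools.genops` and reports which modules the stream would import; payloads commonly reference `os`, `subprocess`, or `builtins`.

```python
import io
import pickle
import pickletools

# Modules that rarely appear in legitimate model checkpoints.
SUSPICIOUS_MODULES = {"os", "posix", "nt", "subprocess", "builtins"}

def flagged_imports(data: bytes) -> list[str]:
    """Statically list suspicious module imports in a pickle stream."""
    hits, strings = [], []
    for opcode, arg, _pos in pickletools.genops(io.BytesIO(data)):
        if opcode.name == "GLOBAL":  # protocol <= 3: arg is "module name"
            hits.append(arg.split()[0] if isinstance(arg, str) else str(arg))
        elif opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
        elif opcode.name == "STACK_GLOBAL":  # protocol >= 4: module pushed first
            if len(strings) >= 2:
                hits.append(strings[-2])
    return [m for m in hits if m in SUSPICIOUS_MODULES]

# Demo: a pickle referencing os.system is flagged without ever loading it.
evil = pickle.dumps(__import__("os").system)
print(flagged_imports(evil))  # reports the os/posix reference statically
```

Because the stream is only disassembled, never deserialized, this check is safe to run against quarantined files.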
Detection Strategies
- Monitor for process execution chains where Python processes spawn unexpected child processes during model loading
- Implement file integrity monitoring for model files stored locally
- Use application-level logging to track model file sources and loading operations
- Deploy endpoint detection rules for known deserialization exploit patterns
Monitoring Recommendations
- Enable verbose logging for Hugging Face Transformers model loading operations
- Monitor network traffic for downloads of model files from untrusted sources
- Implement alerting for unexpected code execution patterns during ML pipeline operations
- Review audit logs for model file access from unusual locations or at unusual times
How to Mitigate CVE-2024-11393
Immediate Actions Required
- Only load model files from trusted and verified sources such as the official Hugging Face Hub
- Implement strict access controls on model file directories and repositories
- Review and audit all model files currently in use within your environment
- Consider running model loading operations in isolated sandbox environments
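One way to harden in-process model loading, complementary to the sandboxing suggested above, is an allow-list unpickler. The following is an illustrative sketch (not the library's own mitigation): it subclasses `pickle.Unpickler` and refuses to resolve any global not explicitly approved, so a crafted file cannot reach `os.system`, `subprocess.Popen`, or `eval`. The `ALLOWED` set here is a minimal placeholder; real checkpoints would need their tensor-rebuild helpers added.

```python
import io
import pickle

# Placeholder allow-list; extend with the classes your checkpoints need.
ALLOWED = {
    ("collections", "OrderedDict"),
}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) in ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(
            f"blocked import of {module}.{name} during deserialization"
        )

def safe_loads(data: bytes):
    """Deserialize data while blocking non-allow-listed globals."""
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Plain data round-trips; a pickle referencing os.system is rejected.
ok = safe_loads(pickle.dumps({"weights": [1, 2, 3]}))
try:
    safe_loads(pickle.dumps(__import__("os").system))
except pickle.UnpicklingError as exc:
    print("rejected:", exc)
```

An allow-list is the only robust direction here; deny-lists of "known bad" globals are routinely bypassed.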
Patch Information
Organizations should monitor the Hugging Face Transformers project for security updates addressing this vulnerability. The Zero Day Initiative advisory ZDI-24-1514 provides additional details on the vulnerability disclosure. Users should update to the latest version of Hugging Face Transformers once a patch becomes available.
Workarounds
- Avoid loading model files from untrusted or unverified sources
- Implement network segmentation to isolate ML workloads from sensitive systems
- Use containerization or virtual environments to limit the impact of potential code execution
- Deploy SentinelOne endpoint protection to detect and prevent exploitation attempts
```shell
# Configuration example - restrict model loading to trusted sources only

# Use only the local cache and prevent automatic downloads
export HF_HUB_OFFLINE=1

# Alternatively, pin downloads to a trusted endpoint only
export HF_ENDPOINT="https://huggingface.co"

# Inspect which models are present in the local cache before loading them
python -c "from huggingface_hub import scan_cache_dir; print(scan_cache_dir())"
```
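Checksum verification of downloaded model files can be scripted with the standard library alone. This is a minimal sketch; it assumes the expected SHA-256 digest is obtained out-of-band from a trusted source (for example, the publisher's file listing), since a digest shipped alongside a tampered file proves nothing.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, streaming in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path: str, expected_hex: str) -> bool:
    """Raise ValueError if the file's digest differs from the expected one."""
    actual = sha256_of(path)
    if actual != expected_hex:
        raise ValueError(f"checksum mismatch for {path}: got {actual}")
    return True
```

Run verification before the model file is ever handed to a deserializer, and quarantine anything that fails.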