CVE-2024-24590 Overview
CVE-2024-24590 is an insecure deserialization vulnerability affecting Allegro AI's ClearML platform client SDK. The flaw allows attackers to craft malicious artifacts that, when downloaded and processed by users, can execute arbitrary code on their systems. This vulnerability is particularly concerning in MLOps environments where artifact sharing between team members is a common practice.
Critical Impact
Malicious actors can leverage this vulnerability to execute arbitrary code on end-user systems through crafted artifacts, potentially compromising machine learning pipelines and sensitive training data.
Affected Products
- ClearML Client SDK versions 0.17.0 through 1.14.2 (inclusive)
- Allegro AI ClearML platform deployments using vulnerable SDK versions
Discovery Timeline
- 2024-02-06 - CVE-2024-24590 published to NVD
- 2024-11-21 - Last updated in NVD database
Technical Details for CVE-2024-24590
Vulnerability Analysis
This vulnerability stems from CWE-502 (Deserialization of Untrusted Data) within the ClearML client SDK. The ClearML platform is widely used in MLOps workflows for experiment tracking, model management, and artifact storage. The SDK's artifact handling mechanism fails to properly validate serialized data before deserialization, creating an opportunity for code execution.
In Python-based ML environments, pickle-based serialization is commonly used for artifact storage. When the ClearML SDK retrieves and deserializes artifacts without proper validation, an attacker who can upload a malicious artifact to the platform can achieve arbitrary code execution on any client that interacts with that artifact.
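The danger of unpickling untrusted data can be shown in a few lines. The sketch below is a hypothetical stand-in for a malicious artifact, not actual ClearML code: Python's pickle protocol invokes the callable returned by an object's `__reduce__` method during loading, so merely deserializing the bytes runs attacker-chosen code. A harmless `eval` expression stands in for a real payload such as `os.system`.

```python
import pickle

# Hypothetical stand-in for a malicious artifact (not real ClearML code):
# pickle executes the callable returned by __reduce__ during loading.
class MaliciousArtifact:
    def __reduce__(self):
        # A real attacker would return something like (os.system, ("<payload>",)).
        # A benign eval expression stands in for the payload here.
        return (eval, ("2 + 2",))

payload = pickle.dumps(MaliciousArtifact())

# Merely loading the artifact executes the attacker's callable; the
# "deserialized object" is whatever that callable returns.
result = pickle.loads(payload)
print(result)  # 4 -- attacker-chosen code ran during deserialization
```

Note that the victim never calls a method on the artifact; the code runs as a side effect of deserialization itself, which is what makes this class of bug so effective against automated pipelines.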
Root Cause
The root cause is improper handling of untrusted serialized data in the artifact retrieval and loading process. The SDK versions 0.17.0 through 1.14.2 do not implement sufficient validation or sandboxing when deserializing artifacts, allowing maliciously crafted payloads to execute code during the deserialization process. This is a common anti-pattern in Python applications that use pickle or similar serialization libraries without proper security controls.
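One defensive pattern against this anti-pattern (a general technique from the Python documentation, not ClearML's actual fix) is to subclass `pickle.Unpickler` and allow only an explicit set of safe globals, so that gadgets referencing `eval`, `os.system`, and similar callables are rejected before they can run:

```python
import io
import pickle

# Allow-list of (module, name) pairs permitted during unpickling.
# The list here is illustrative; a real deployment would enumerate
# exactly the types its artifacts legitimately contain.
SAFE_GLOBALS = {("builtins", "list"), ("builtins", "dict"), ("builtins", "set")}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) in SAFE_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(
            f"blocked global during unpickling: {module}.{name}"
        )

def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Plain containers deserialize normally...
print(restricted_loads(pickle.dumps([1, 2, 3])))  # [1, 2, 3]

# ...but a __reduce__ gadget referencing eval is rejected.
class Gadget:
    def __reduce__(self):
        return (eval, ("2 + 2",))

try:
    restricted_loads(pickle.dumps(Gadget()))
except pickle.UnpicklingError as exc:
    print("rejected:", exc)
```

Restricting `find_class` narrows the attack surface considerably, though the Python documentation itself cautions that pickle can never be made fully safe against untrusted input; avoiding deserialization of untrusted artifacts entirely remains the stronger control.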
Attack Vector
The attack requires network access and relies on user interaction. An attacker can exploit this vulnerability by:
- Uploading a maliciously crafted artifact to a ClearML server that the target has access to
- Naming or positioning the artifact in a way that encourages victim interaction
- Waiting for a victim to download or interact with the artifact using a vulnerable SDK version
- Achieving code execution when the SDK deserializes the malicious payload
This attack is particularly effective in collaborative ML environments where researchers commonly share artifacts, datasets, and model checkpoints. The artifact could masquerade as a legitimate model or dataset while containing a payload that executes upon deserialization.
Detection Methods for CVE-2024-24590
Indicators of Compromise
- Unexpected outbound network connections from systems running ClearML SDK
- Unusual processes spawned by Python interpreters running ClearML workflows
- Anomalous file system activity following artifact downloads from ClearML servers
- Suspicious pickle files or serialized objects in ClearML artifact directories
Detection Strategies
- Monitor for unusual process execution chains originating from Python processes using the ClearML SDK
- Implement file integrity monitoring on systems that interact with ClearML artifacts
- Deploy endpoint detection rules that alert on suspicious deserialization patterns in ML pipelines
- Review ClearML server logs for artifact upload activity from unauthorized or suspicious sources
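One way to flag suspicious serialized objects before they are ever loaded is to inspect pickle streams statically. The scanner below is an assumed illustration, not a vendor tool: it walks a stream's opcodes with `pickletools.genops` and reports any global imported from a risky module, without executing the payload.

```python
import pickle
import pickletools

# Modules whose globals rarely belong in a legitimate ML artifact.
SUSPICIOUS_MODULES = {"os", "subprocess", "builtins", "posix", "nt", "socket"}

def scan_pickle(data: bytes) -> list:
    findings, recent_strings = [], []
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            recent_strings.append(arg)  # candidate module/name operands
        elif opcode.name == "GLOBAL":   # protocols 0-1: arg is "module name"
            module, name = arg.split(" ", 1)
            if module in SUSPICIOUS_MODULES:
                findings.append(f"{module}.{name}")
        elif opcode.name == "STACK_GLOBAL" and len(recent_strings) >= 2:
            # Protocol 2+: module and name were pushed as the two
            # preceding string operands.
            module, name = recent_strings[-2], recent_strings[-1]
            if module in SUSPICIOUS_MODULES:
                findings.append(f"{module}.{name}")
    return findings

# A benign artifact produces no findings; a __reduce__ gadget is flagged.
class EvalGadget:
    def __reduce__(self):
        return (eval, ("2 + 2",))

print(scan_pickle(pickle.dumps({"weights": [0.1, 0.2]})))  # []
print(scan_pickle(pickle.dumps(EvalGadget())))             # ['builtins.eval']
```

Static opcode scanning is a heuristic: attackers can obfuscate payloads, so it should supplement, not replace, the access controls and upgrades described elsewhere in this advisory.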
Monitoring Recommendations
- Enable comprehensive logging for all ClearML SDK operations, particularly artifact downloads
- Implement network segmentation to limit blast radius if ML development systems are compromised
- Establish baseline behavior for ML workflows to detect anomalous execution patterns
- Monitor for indicators of supply chain compromise in shared ML artifact repositories
How to Mitigate CVE-2024-24590
Immediate Actions Required
- Upgrade the ClearML client SDK to a fixed release (newer than 1.14.2) that contains the security fix
- Audit all artifacts currently stored in ClearML servers for signs of tampering
- Restrict artifact upload permissions to trusted users and service accounts
- Implement network segmentation between ML development environments and production systems
Patch Information
Organizations should upgrade to a patched version of the ClearML client SDK that addresses the insecure deserialization vulnerability. Review the HiddenLayer research publication for detailed technical analysis and remediation guidance. Contact Allegro AI for specific patch version information and upgrade procedures.
Workarounds
- Implement strict access controls on who can upload artifacts to shared ClearML servers
- Review and validate the source of all artifacts before downloading or using them in workflows
- Consider running ClearML SDK operations in isolated environments such as containers or VMs
- Establish artifact signing and verification processes to ensure integrity of shared ML artifacts
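The last workaround can be sketched with Python's standard library. The workflow below is an assumed example, not a built-in ClearML feature: producers sign artifact bytes with a shared secret before upload, and consumers verify the signature before any deserialization takes place.

```python
import hashlib
import hmac

# Placeholder secret for illustration; in practice this would come from
# a secrets manager, never from source code.
SIGNING_KEY = b"replace-with-a-managed-secret"

def sign_artifact(data: bytes) -> str:
    """Produce an HMAC-SHA256 signature for the raw artifact bytes."""
    return hmac.new(SIGNING_KEY, data, hashlib.sha256).hexdigest()

def verify_artifact(data: bytes, signature: str) -> bool:
    """Constant-time check that the artifact has not been tampered with."""
    return hmac.compare_digest(sign_artifact(data), signature)

artifact = b"serialized model bytes"
sig = sign_artifact(artifact)
print(verify_artifact(artifact, sig))              # True
print(verify_artifact(artifact + b"tamper", sig))  # False
```

Verification must happen before the bytes are passed to any deserializer; a signature check after unpickling is too late, since the payload has already executed.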
The upgrade itself is a single pip command:
# Upgrade the ClearML SDK to a patched release (newer than 1.14.2)
pip install --upgrade clearml
# Verify the installed version
pip show clearml | grep Version