CVE-2025-49838 Overview
CVE-2025-49838 is an unsafe deserialization vulnerability discovered in GPT-SoVITS-WebUI, a popular voice conversion and text-to-speech web interface. The vulnerability exists in the AudioPreDeEcho class within vr.py, where user-controlled input is passed directly to torch.load() without proper validation. This allows an attacker to craft a malicious model file that, when loaded, can execute arbitrary code on the target system.
The vulnerability flow begins when the model_choose variable accepts user input representing a path to a model file. This input is passed to the uvr function, which creates a new instance of the AudioPreDeEcho class with the model_path attribute containing the user-supplied path (with .pth extension automatically appended). The AudioPreDeEcho class then uses torch.load to deserialize the model at the specified path, enabling unsafe deserialization attacks.
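The following is a simplified sketch of that flow; the class and function names mirror the description above, but the bodies are illustrative and do not reproduce the actual GPT-SoVITS source:
# Simplified sketch of the vulnerable flow (illustrative, not the actual GPT-SoVITS code)
import torch
class AudioPreDeEcho:
    def __init__(self, model_path, device="cpu"):
        self.model_path = model_path
        # torch.load() without weights_only falls back to full pickle
        # deserialization, so objects embedded in the file are reconstructed,
        # and pickle reconstruction can invoke attacker-chosen callables
        self.model_data = torch.load(self.model_path, map_location=device)
def uvr(model_choose, *args):
    # model_choose comes from the web UI; only a ".pth" suffix is appended, no validation
    model_path = model_choose + ".pth"
    return AudioPreDeEcho(model_path)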
Critical Impact
Attackers can achieve remote code execution by supplying a malicious PyTorch model file, potentially leading to complete system compromise of servers running GPT-SoVITS-WebUI.
Affected Products
- rvc-boss gpt-sovits-webui version 20250228v3 and prior
Discovery Timeline
- 2025-07-15 - CVE-2025-49838 published to NVD
- 2025-07-30 - Last updated in NVD database
Technical Details for CVE-2025-49838
Vulnerability Analysis
This vulnerability represents a classic case of insecure deserialization in Python machine learning applications. PyTorch's torch.load() function uses Python's pickle module under the hood, which is inherently unsafe when loading untrusted data. When torch.load() deserializes a model file, it can execute arbitrary Python code embedded within the pickle payload.
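A minimal illustration of that pickle behavior follows (benign payload, hypothetical Demo class): any object whose __reduce__ method returns a callable has that callable invoked during unpickling, which is exactly what an unrestricted torch.load() triggers on a crafted file.
# Minimal demonstration of code execution during pickle deserialization (benign payload)
import pickle
class Demo:
    def __reduce__(self):
        # On unpickling, pickle calls print(...) to "reconstruct" this object;
        # an attacker would substitute a more dangerous callable such as os.system
        return (print, ("code ran during deserialization",))
blob = pickle.dumps(Demo())
pickle.loads(blob)  # prints "code ran during deserialization"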
The attack surface is exposed through the web interface where users can specify model paths. An attacker could host a malicious .pth file on a network-accessible location or upload it to the server, then reference this path through the web interface. When the application attempts to load this "model," the malicious payload executes with the privileges of the application process.
At the time of publication, no patched versions are available, making this vulnerability particularly dangerous for production deployments.
Root Cause
The root cause is the direct use of torch.load() on user-controlled file paths without proper input validation or safe loading mechanisms. The model_choose parameter flows from user input through multiple functions (uvr → AudioPreDeEcho constructor) without sanitization, eventually reaching the dangerous torch.load() call. PyTorch provides safer alternatives such as torch.load(..., weights_only=True) for loading model weights without executing arbitrary code, but these safeguards are not implemented in the vulnerable code.
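A hedged sketch of that safer pattern follows (the helper name is hypothetical; the weights_only argument is available in PyTorch 1.13 and later):
# Safer loading: restrict deserialization to tensors and primitive containers
import torch
def load_weights_safely(model_path, device="cpu"):
    try:
        # weights_only=True uses a restricted unpickler, so a pickle payload
        # raises an error instead of executing arbitrary code
        return torch.load(model_path, map_location=device, weights_only=True)
    except Exception as exc:
        raise RuntimeError(f"Refusing to load untrusted model {model_path!r}: {exc}") from exc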
Attack Vector
The attack is network-based and requires no authentication or user interaction. An attacker can exploit this vulnerability by:
- Crafting a malicious PyTorch model file containing a pickle payload with embedded Python code
- Making this file accessible to the target server (via file upload, network share, or path traversal)
- Submitting the path to this malicious file through the GPT-SoVITS-WebUI interface
- When the application loads the file with torch.load(), the pickle payload is deserialized and its embedded code executes with the privileges of the application process
The vulnerability exists in the vr.py file at line 216 where torch.load is called on the user-supplied model_path. Additional vulnerable entry points exist in webui.py at multiple locations as documented in the GitHub Security Advisory.
Detection Methods for CVE-2025-49838
Indicators of Compromise
- Unexpected .pth files appearing in model directories or temporary locations
- Unusual process spawning from the Python/GPT-SoVITS process
- Network connections initiated by the application to unexpected external hosts
- Suspicious file system access patterns originating from the web application
- Anomalous CPU or memory usage during model loading operations
Detection Strategies
- Monitor for torch.load() calls on untrusted or user-supplied file paths in application logs (see the audit-wrapper sketch after this list)
- Implement file integrity monitoring on model directories to detect unauthorized modifications
- Deploy network-based intrusion detection rules to identify pickle exploitation payloads
- Use endpoint detection to identify unusual child processes spawned by the GPT-SoVITS application
- Audit access logs for unusual model path submissions through the web interface
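One way to implement the torch.load() monitoring strategy above is an audit wrapper installed at process start. The sketch below is hypothetical (the wrapper and the trusted directory are assumptions, not part of GPT-SoVITS): it logs every torch.load() call and flags paths outside an allowlisted directory.
# Illustrative torch.load audit wrapper (hypothetical; not part of GPT-SoVITS)
import logging
import os
import torch
TRUSTED_DIR = os.path.realpath("/opt/gpt-sovits/trusted_models")  # example allowlisted directory
logger = logging.getLogger("model_load_audit")
_original_torch_load = torch.load
def audited_torch_load(f, *args, **kwargs):
    path = os.path.realpath(f) if isinstance(f, (str, os.PathLike)) else None
    if path is not None and not path.startswith(TRUSTED_DIR + os.sep):
        # Alert on loads outside the allowlisted directory
        logger.warning("torch.load outside trusted dir: %s", path)
    else:
        logger.info("torch.load: %s", path or "<file object>")
    return _original_torch_load(f, *args, **kwargs)
torch.load = audited_torch_load  # install the wrapper early in application startup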
Monitoring Recommendations
- Enable verbose logging for model loading operations in GPT-SoVITS-WebUI
- Implement real-time alerting for any torch.load operations on files outside whitelisted directories
- Monitor system call activity for the application process to detect post-exploitation behavior
- Track file uploads and model path submissions for anomalous patterns
How to Mitigate CVE-2025-49838
Immediate Actions Required
- Restrict network access to GPT-SoVITS-WebUI instances to trusted networks only
- Implement strict input validation for model path parameters, allowing only whitelisted directories
- Consider temporarily disabling the UVR5/AudioPreDeEcho functionality until a patch is available
- Deploy web application firewall rules to filter suspicious model path inputs
- Run the application in a containerized or sandboxed environment with minimal privileges
Patch Information
At the time of publication, no official patch is available from the vendor. Organizations should monitor the GPT-SoVITS GitHub repository for security updates. The GitHub Security Advisory (GHSL-2025-049 through GHSL-2025-053) provides additional technical details about the vulnerability.
Workarounds
- Modify the source code to use torch.load(model_path, weights_only=True), which prevents arbitrary code execution during deserialization
- Implement path validation to restrict model loading to a specific trusted directory
- Add file signature verification before loading any model files (a combined path-validation and hash-check sketch follows this list)
- Deploy network segmentation to isolate GPT-SoVITS-WebUI instances from sensitive systems
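A sketch of the path-validation and signature-verification workarounds above; the helper, the trusted directory, and the hash allowlist are deployment-specific assumptions, not part of GPT-SoVITS:
# Illustrative model path validation plus SHA-256 allowlist check (hypothetical helper)
import hashlib
import os
TRUSTED_MODEL_DIR = os.path.realpath("/opt/gpt-sovits/trusted_models")  # example trusted directory
KNOWN_MODEL_HASHES = {"example_model.pth": "replace-with-known-good-sha256"}  # example allowlist
def resolve_trusted_model(name):
    # Reject anything that escapes the trusted directory (e.g. "../" sequences)
    candidate = os.path.realpath(os.path.join(TRUSTED_MODEL_DIR, name + ".pth"))
    if not candidate.startswith(TRUSTED_MODEL_DIR + os.sep):
        raise ValueError(f"Model path escapes trusted directory: {name!r}")
    # Verify the file matches a known-good hash before it is ever deserialized
    with open(candidate, "rb") as fh:
        digest = hashlib.sha256(fh.read()).hexdigest()
    if KNOWN_MODEL_HASHES.get(os.path.basename(candidate)) != digest:
        raise ValueError(f"Model {candidate!r} is not on the allowlist")
    return candidate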
# Example: Restrict model directory access and implement path validation
# Add to your deployment configuration
# Create dedicated model directory with restricted permissions
mkdir -p /opt/gpt-sovits/trusted_models
chmod 750 /opt/gpt-sovits/trusted_models
# Example: have a deployment wrapper read the trusted model directory from an environment variable (illustrative; not a built-in GPT-SoVITS setting)
export GPT_SOVITS_MODEL_DIR=/opt/gpt-sovits/trusted_models
# Run with reduced privileges in container
docker run --read-only --user nobody:nogroup \
  -v /opt/gpt-sovits/trusted_models:/models:ro \
  gpt-sovits-webui