CVE-2025-49839 Overview
CVE-2025-49839 is an unsafe deserialization vulnerability affecting GPT-SoVITS-WebUI, a popular voice conversion and text-to-speech web interface. The vulnerability exists in the bsroformer.py file where user-supplied input is passed to torch.load() without proper validation, enabling attackers to execute arbitrary code by supplying a malicious model file.
In affected versions (20250228v3 and prior), the model_choose variable accepts user input representing a path to a model file. This input flows through the uvr function where a new instance of the Roformer_Loader class is created with the user-controlled path. The application appends a .ckpt extension to the path and subsequently uses torch.load() to deserialize the model file. Since torch.load() uses Python's pickle module internally, a malicious actor can craft a specially designed model file that executes arbitrary code upon deserialization.
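To make the data flow concrete, the following is a simplified, hypothetical reconstruction of the vulnerable pattern (class and variable names are illustrative; the real logic lives in bsroformer.py). Plain pickle.load() stands in for torch.load(), which uses pickle internally:

```python
import pickle

class RoformerLoaderSketch:
    """Illustrative sketch of the vulnerable flow, not the project's actual code."""

    def __init__(self, model_choose: str):
        # model_choose is user-controlled; the application appends
        # the .ckpt extension unconditionally, without validating the path.
        self.model_path = model_choose + ".ckpt"

    def load(self):
        # Stand-in for the vulnerable torch.load() call: any pickle
        # opcode stream in the file executes during deserialization.
        with open(self.model_path, "rb") as f:
            return pickle.load(f)
```

Because no validation occurs between accepting model_choose and deserializing the file, the attacker controls exactly which bytes reach the unpickler.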
Critical Impact
Remote attackers can achieve arbitrary code execution on systems running GPT-SoVITS-WebUI by providing a path to a malicious model file, potentially leading to complete system compromise.
Affected Products
- GPT-SoVITS-WebUI version 20250228v3 and all prior versions
- rvc-boss gpt-sovits-webui (all releases through 20250228v3)
Discovery Timeline
- July 15, 2025 - CVE-2025-49839 published to NVD
- July 30, 2025 - Last updated in NVD database
Technical Details for CVE-2025-49839
Vulnerability Analysis
This insecure deserialization vulnerability represents a dangerous code execution pathway in machine learning applications. The vulnerability chain begins when user input is accepted through the web interface as a model path parameter. The application's uvr function instantiates a Roformer_Loader object with this user-controlled path, which is stored as the model_path attribute after appending the .ckpt extension.
The critical security flaw occurs within the Roformer_Loader class when it calls torch.load() to deserialize the model file. PyTorch's torch.load() function utilizes Python's pickle deserialization mechanism by default, which is inherently unsafe when processing untrusted data. Pickle deserialization can execute arbitrary Python code embedded within the serialized object during the unpickling process.
An attacker who can influence the model path—whether through direct web interface interaction, path traversal techniques, or by placing malicious files on accessible storage—can trigger code execution with the privileges of the application process.
Root Cause
The root cause of this vulnerability is the use of torch.load() to deserialize model files from user-controlled paths without implementing safe deserialization practices. The application fails to:
- Validate or sanitize the user-supplied model path before use
- Restrict model loading to trusted, pre-approved locations
- Use PyTorch's weights_only=True parameter to prevent arbitrary code execution during deserialization
- Implement integrity verification for model files before loading
The underlying issue stems from trusting user input in a security-sensitive deserialization context, combined with the default unsafe behavior of pickle-based deserialization.
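The allow-list approach behind torch.load(weights_only=True) can be sketched in pure stdlib terms: a restricted unpickler that refuses to resolve any global not explicitly permitted, so a `__reduce__`-style payload fails instead of executing. This is an illustrative sketch, not PyTorch's actual implementation:

```python
import io
import os
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    """Sketch of the allow-list idea: only pre-approved globals resolve."""
    ALLOWED = {("builtins", "list"), ("builtins", "dict"), ("builtins", "set")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

class Evil:
    def __reduce__(self):
        # Attempts to invoke os.system on load; the restricted
        # unpickler rejects the lookup before anything runs.
        return (os.system, ("echo pwned",))
```

Benign container data still round-trips through safe_loads, while the Evil payload raises UnpicklingError at the find_class lookup, before any code executes.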
Attack Vector
The attack can be executed over the network by any unauthenticated user with access to the GPT-SoVITS-WebUI interface. The exploitation flow proceeds as follows:
- The attacker crafts a malicious .ckpt file containing a pickled Python object with embedded code execution logic
- The attacker uploads or places the malicious file in an accessible location
- Through the web interface, the attacker supplies a path pointing to the malicious model file, omitting the .ckpt extension, which the application appends automatically

- The application instantiates Roformer_Loader with the malicious path and calls torch.load()
- During deserialization, the malicious pickle payload executes arbitrary code on the server
The vulnerability requires no user interaction or authentication, and exploitation can lead to remote code execution with the privileges of the web application process. For detailed technical implementation, refer to the GitHub Security Advisory GHSL-2025-049.
Detection Methods for CVE-2025-49839
Indicators of Compromise
- Unexpected model file paths being requested through the web interface, especially paths containing traversal sequences or pointing to unusual directories
- New or modified .ckpt files in model directories or temporary locations that were not legitimately created
- Anomalous process spawning or network connections originating from the GPT-SoVITS-WebUI Python process
- Log entries showing model loading operations with unusual file paths or failures followed by suspicious activity
Detection Strategies
- Monitor file system access patterns for the application, alerting on model file reads from non-standard directories
- Implement application-level logging to capture all model path inputs and flag paths containing traversal patterns or pointing outside expected directories
- Deploy endpoint detection to identify pickle deserialization exploitation patterns and subsequent malicious activity
- Use behavioral analysis to detect anomalous child processes spawned by the Python web application
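A simple detector along these lines can flag logged model-path parameters that contain traversal sequences or fall outside the expected models directory. The patterns and the trusted prefix below are illustrative assumptions, not values taken from the application:

```python
import re

# Flags ../ and ..\ traversal (including URL-encoded %2e%2e),
# absolute Unix paths, and Windows drive-letter paths.
SUSPICIOUS = re.compile(r"(\.\./|\.\.\\|%2e%2e|^/|^[a-zA-Z]:\\)", re.IGNORECASE)

def is_suspicious_model_path(path: str, trusted_prefix: str = "models/") -> bool:
    """Return True if a logged model path warrants an alert."""
    return bool(SUSPICIOUS.search(path)) or not path.startswith(trusted_prefix)
```

Run against captured request logs, this separates routine loads under the models directory from requests that reach for arbitrary filesystem locations.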
Monitoring Recommendations
- Enable comprehensive request logging on the web interface to capture all model path parameters submitted by users
- Implement file integrity monitoring on model directories to detect unauthorized additions or modifications to .ckpt files
- Configure network monitoring to alert on unexpected outbound connections from the application server
- Deploy SentinelOne Singularity to detect and prevent post-exploitation activities such as reverse shells or lateral movement attempts
How to Mitigate CVE-2025-49839
Immediate Actions Required
- Restrict network access to GPT-SoVITS-WebUI instances to trusted users only; do not expose the interface to untrusted networks or the public internet
- Implement application-level input validation to restrict model paths to a predefined whitelist of trusted directories
- Deploy network segmentation to isolate systems running GPT-SoVITS-WebUI from critical infrastructure
- Monitor the RVC-Boss GPT-SoVITS GitHub repository for security updates and apply patches immediately when available
Patch Information
At the time of publication, no official patched version is available from the vendor. Users should monitor the project's GitHub repository and security advisories for patch releases. The GitHub Security Advisory GHSL-2025-049 contains additional details about the vulnerability disclosure.
For organizations that must continue using the affected software, implementing the workarounds below and maintaining strict access controls is critical until an official fix is released.
Workarounds
- Modify the source code to call torch.load() with the weights_only=True parameter, which restricts deserialization to tensors and a small allow-list of safe types instead of arbitrary pickled objects (recent PyTorch releases, 2.6 and later, make this the default)
- Implement strict path validation in the application to ensure model paths can only point to a predefined, trusted model directory
- Deploy the application in an isolated container or sandbox environment with minimal privileges to limit the impact of potential exploitation
- Use a reverse proxy with request filtering to validate and sanitize model path parameters before they reach the application
# Example: run GPT-SoVITS-WebUI in an isolated Docker container with restricted
# capabilities (image name and host paths are illustrative). Note that
# --network none blocks all network access, including the web UI itself;
# use it for offline batch processing, or substitute an internal-only
# Docker network for interactive use.
docker run -d \
--name gpt-sovits-isolated \
--network none \
--read-only \
--tmpfs /tmp \
--cap-drop ALL \
--security-opt no-new-privileges \
-v /path/to/trusted/models:/app/models:ro \
gpt-sovits-webui:latest
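The strict path validation recommended above can be sketched as follows. The trusted directory and function name are assumptions for illustration; resolve() collapses ../ sequences and follows symlinks before the containment check:

```python
from pathlib import Path

# Assumed trusted location; adjust to the deployment's model directory.
TRUSTED_MODEL_DIR = Path("/app/models").resolve()

def resolve_model_path(user_input: str) -> Path:
    """Reject any user-supplied model path that escapes the trusted
    directory, even via ../ sequences or symlinks."""
    candidate = (TRUSTED_MODEL_DIR / (user_input + ".ckpt")).resolve()
    if not candidate.is_relative_to(TRUSTED_MODEL_DIR):
        raise ValueError(f"model path escapes trusted directory: {user_input}")
    return candidate
```

Validating the resolved absolute path, rather than pattern-matching the raw input, closes encoding and symlink tricks that string filters miss. (Path.is_relative_to requires Python 3.9 or later.)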