CVE-2025-27779 Overview
CVE-2025-27779 is an insecure deserialization vulnerability affecting Applio, a popular voice conversion tool. The vulnerability exists in the model_blender.py file at lines 20 and 21, where user-supplied input is passed to torch.load() without proper safety restrictions. This allows attackers to craft malicious model files that execute arbitrary code when loaded by the application.
The attack chain begins in voice_blender.py where user-supplied model paths (model_fusion_a and model_fusion_b) are passed to run_model_blender_script, which subsequently calls the model_blender function. This function loads the models using PyTorch's torch.load() without the weights_only=True parameter, making it vulnerable to Python pickle deserialization attacks.
Critical Impact
Remote code execution is possible through crafted malicious model files, allowing attackers to fully compromise systems running vulnerable versions of Applio.
Affected Products
- Applio versions 3.2.8-bugfix and prior
- All Applio installations using the vulnerable model_blender.py implementation
- Systems loading untrusted model files through the voice blender functionality
Discovery Timeline
- 2025-03-19 - CVE-2025-27779 published to NVD
- 2025-08-01 - Last updated in NVD database
Technical Details for CVE-2025-27779
Vulnerability Analysis
This vulnerability falls under CWE-502 (Deserialization of Untrusted Data). The core issue stems from PyTorch's torch.load() function using Python's pickle module for deserialization by default. Pickle is inherently unsafe when processing untrusted data because it can execute arbitrary Python code during the deserialization process.
In Applio, the voice blender feature allows users to specify paths to model files for fusion operations. The application accepts these paths without validation and passes them directly to torch.load(). An attacker can exploit this by supplying a path to a malicious pickle-based model file that contains embedded Python code, which executes when the file is loaded.
The network-accessible nature of this vulnerability, combined with no authentication requirements and low attack complexity, makes it particularly dangerous for any Applio deployment that processes external model files.
Root Cause
The root cause is the use of torch.load() without the weights_only=True parameter. By default, PyTorch uses pickle to serialize and deserialize model objects, which includes not just the weights but potentially arbitrary Python objects. When weights_only=True is not specified, the function will deserialize any pickled object, including those containing malicious __reduce__ methods that execute arbitrary code upon unpickling.
The vulnerable code flow is:
- User supplies model paths via voice_blender.py (lines 39-56)
- Paths are passed to run_model_blender_script
- model_blender function calls torch.load() on lines 20-21 without safety restrictions
- Malicious pickle payload executes during deserialization
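The final step can be demonstrated without any real exploit: an object's __reduce__ method names a callable plus arguments, and pickle invokes that callable during deserialization. The benign sketch below substitutes the harmless str.upper where a real payload would reference something like os.system (the class name and payload string are illustrative, not taken from any actual malicious model file):

```python
import pickle


class MaliciousStub:
    """Benign stand-in for the payload object inside a crafted model file."""

    def __reduce__(self):
        # pickle records this callable and its arguments, and pickle.loads
        # CALLS it during deserialization. str.upper is harmless here;
        # a real payload would reference os.system, subprocess.call, etc.
        return (str.upper, ("this call could have been os.system",))


blob = pickle.dumps(MaliciousStub())  # what a crafted .pth file could contain
result = pickle.loads(blob)           # deserialization invokes the callable
print(result)  # THIS CALL COULD HAVE BEEN OS.SYSTEM
```

This is exactly why torch.load() on an untrusted file is dangerous: loading the file is enough to run the attacker's chosen callable.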
Attack Vector
An attacker can exploit this vulnerability by:
- Crafting a malicious PyTorch model file containing a pickled object with a custom __reduce__ method
- Providing the path to this malicious file through the voice blender interface
- When Applio attempts to load the model, the pickle deserialization triggers arbitrary code execution
- The attacker gains code execution with the privileges of the Applio process
The attack is network-accessible and requires no authentication, making it exploitable in any scenario where an attacker can control or influence the model file path provided to the voice blender functionality.
# Security patch in rvc/infer/infer.py - added weights_only=True for all torch.load calls
weight_root (str): Path to the model weights.
"""
self.cpt = (
- torch.load(weight_root, map_location="cpu")
+ torch.load(weight_root, map_location="cpu", weights_only=True)
if os.path.isfile(weight_root)
else None
)
Source: GitHub Commit
# Security patch in rvc/lib/predictors/FCPE.py - added weights_only=True for all torch.load calls
if device is None:
device = "cuda" if torch.cuda.is_available() else "cpu"
self.device = device
- ckpt = torch.load(model_path, map_location=torch.device(self.device))
+ ckpt = torch.load(model_path, map_location=torch.device(self.device), weights_only=True)
self.args = DotDict(ckpt["config"])
self.dtype = dtype
model = FCPE(
Source: GitHub Commit
Detection Methods for CVE-2025-27779
Indicators of Compromise
- Unexpected process spawning from the Applio application process
- Network connections initiated by Applio to unknown external hosts
- Presence of suspicious or unrecognized model files in model directories
- Unusual file system activity following model loading operations
- Evidence of pickle-based payloads in model files (identifiable by pickle opcodes)
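The last indicator can be checked without ever loading a file: pickletools.genops disassembles a pickle stream without executing anything. The rough heuristic below (the module denylist is an assumption for illustration, not a complete detector) flags string arguments that name risky modules such as os or subprocess:

```python
import pickle
import pickletools

# Heuristic denylist -- illustrative, not exhaustive.
RISKY_MODULES = {"os", "posix", "nt", "subprocess", "sys", "builtins"}


def scan_pickle(data: bytes):
    """Return (offset, opcode, argument) tuples whose string argument names
    a risky module. genops() only disassembles; nothing is executed."""
    hits = []
    for opcode, arg, pos in pickletools.genops(data):
        if isinstance(arg, str) and arg.split(" ")[0].split(".")[0] in RISKY_MODULES:
            hits.append((pos, opcode.name, arg))
    return hits


class Suspicious:
    def __reduce__(self):
        return (__import__("os").getcwd, ())  # harmless, but pickled by reference


assert scan_pickle(pickle.dumps(Suspicious()))           # flags the os reference
assert not scan_pickle(pickle.dumps({"w": [1.0, 2.0]}))  # plain data passes
```

Real PyTorch checkpoints legitimately reference torch classes, so a production scanner would need an allowlist of expected globals rather than this simple denylist.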
Detection Strategies
- Audit the Applio codebase for torch.load() calls made without the weights_only=True parameter
- Implement file integrity monitoring on model directories to detect unauthorized model file additions
- Deploy behavioral analysis to detect anomalous code execution following model loading operations
- Use static analysis tools to identify unsafe deserialization patterns in Python codebases
- Monitor for suspicious subprocess creation originating from Python processes running Applio
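The static-analysis suggestion can be prototyped in a few lines with Python's ast module. This sketch only matches the literal torch.load(...) spelling; aliased imports such as `from torch import load` would need additional handling:

```python
import ast


def find_unsafe_torch_load(source: str):
    """Return line numbers of torch.load() calls that omit weights_only.
    Heuristic sketch: only matches the literal `torch.load` attribute form."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Attribute)
            and node.func.attr == "load"
            and isinstance(node.func.value, ast.Name)
            and node.func.value.id == "torch"
            and not any(kw.arg == "weights_only" for kw in node.keywords)
        ):
            findings.append(node.lineno)
    return findings


vulnerable = "import torch\nckpt = torch.load(path, map_location='cpu')\n"
patched = "import torch\nckpt = torch.load(path, weights_only=True)\n"
print(find_unsafe_torch_load(vulnerable))  # [2]
print(find_unsafe_torch_load(patched))     # []
```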
Monitoring Recommendations
- Enable comprehensive logging for all model loading operations in Applio
- Implement network monitoring to detect unexpected outbound connections from the Applio server
- Deploy endpoint detection and response (EDR) solutions to identify post-exploitation activity
- Establish baseline behavior for the Applio process and alert on deviations
- Monitor for privilege escalation attempts following Applio process compromise
How to Mitigate CVE-2025-27779
Immediate Actions Required
- Update Applio to the latest version from the main branch which contains the security patch
- Audit all model files currently in use and ensure they originate from trusted sources
- Restrict network access to Applio instances to trusted users only
- Implement input validation to only allow loading models from approved directories
- Consider running Applio in a sandboxed environment to limit potential damage from exploitation
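One way to implement the approved-directory idea (a sketch; the helper name and directory are hypothetical, not part of Applio) is to resolve the user-supplied path and verify it stays under a trusted root before it ever reaches torch.load():

```python
import os


def is_in_trusted_dir(user_path: str,
                      trusted_root: str = "/trusted/models/only") -> bool:
    """Reject any path (including ../ traversals and symlink tricks) that
    resolves outside trusted_root. Hypothetical helper for illustration."""
    root = os.path.realpath(trusted_root)
    target = os.path.realpath(user_path)
    return target == root or target.startswith(root + os.sep)


print(is_in_trusted_dir("/trusted/models/only/voice_a.pth"))          # True
print(is_in_trusted_dir("/trusted/models/only/../../../etc/passwd"))  # False
```

Using os.path.realpath (rather than simple string comparison) matters because it normalizes `..` components and resolves symlinks before the prefix check.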
Patch Information
The Applio development team has released a security patch available on the main branch of the repository. The fix adds the weights_only=True parameter to all torch.load() calls throughout the codebase, including in model_blender.py, rvc/infer/infer.py, and rvc/lib/predictors/FCPE.py.
The patch commit can be found at: GitHub Commit 11d1395
For detailed information about this vulnerability and related issues, see the GitHub Security Advisory.
Workarounds
- Manually apply the weights_only=True parameter to all torch.load() calls if immediate update is not possible
- Only load model files from trusted, verified sources and avoid loading user-submitted models
- Implement strict access controls to limit who can upload or specify model file paths
- Run Applio in a containerized environment with restricted privileges and network access
- Use application-level firewalls to restrict Applio's ability to make outbound connections
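The first workaround can also be applied centrally by wrapping the load function so that weights_only defaults to True (the parameter is available in recent PyTorch releases, roughly 1.13 onward; verify against your installed version). To keep this sketch runnable without PyTorch installed, a stand-in namespace plays the role of the torch module:

```python
import types


def force_weights_only(load_fn):
    """Wrap a torch.load-style function so weights_only defaults to True.
    Callers can still pass weights_only explicitly if they must."""
    def wrapped(*args, **kwargs):
        kwargs.setdefault("weights_only", True)
        return load_fn(*args, **kwargs)
    return wrapped


# Stand-in for the real module; with PyTorch installed you would instead do:
#   import torch; torch.load = force_weights_only(torch.load)
fake_torch = types.SimpleNamespace(load=lambda path, **kwargs: kwargs)
fake_torch.load = force_weights_only(fake_torch.load)

print(fake_torch.load("model.pth"))  # {'weights_only': True}
```

This is a stop-gap, not a substitute for the official patch: any code path that constructs its own unpickler would bypass the wrapper.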
# Configuration example
# If you cannot immediately update, manually patch torch.load calls:
# In model_blender.py, change:
# torch.load(model_path)
# to:
# torch.load(model_path, weights_only=True)
# Alternatively, restrict model loading to verified paths only
# Example: Set environment variable for allowed model directory
export APPLIO_MODEL_DIR="/trusted/models/only"
# Run Applio with reduced privileges
sudo -u applio-service python app.py