CVE-2025-3730 Overview
A denial-of-service vulnerability has been identified in PyTorch 2.6.0, affecting the torch.nn.functional.ctc_loss function implemented in aten/src/ATen/native/LossCTC.cpp. An attacker with local access can trigger a denial-of-service condition by manipulating the function's input parameters. An exploit has been publicly disclosed, although the practical severity of the vulnerability is still under investigation. A patch is available to address the issue.
Critical Impact
Local attackers can cause denial of service conditions in PyTorch machine learning applications by exploiting improper input validation in the CTC loss function, potentially disrupting model training and inference pipelines.
Affected Products
- PyTorch 2.6.0 (Python package)
- Linux Foundation PyTorch distributions
- Applications using torch.nn.functional.ctc_loss function
Discovery Timeline
- April 16, 2025 - CVE-2025-3730 published to NVD
- May 28, 2025 - Last updated in NVD database
Technical Details for CVE-2025-3730
Vulnerability Analysis
This vulnerability exists in PyTorch's Connectionist Temporal Classification (CTC) loss implementation, a function commonly used in speech recognition and handwriting recognition neural networks. The affected function torch.nn.functional.ctc_loss fails to properly validate the log_probs tensor parameter before processing, allowing an attacker to pass an empty tensor that causes the application to crash or behave unexpectedly.
The vulnerability requires local access to exploit: an attacker must be able to execute code on the target system or supply malicious input through an application that uses the vulnerable function. This is particularly concerning in environments that load and execute untrusted machine learning models, as the PyTorch security policy explicitly warns against running models from unknown or untrusted sources.
Root Cause
The root cause is classified under CWE-404: Improper Resource Shutdown or Release, though more specifically the issue stems from missing input validation on the log_probs tensor parameter. The CTC loss function did not verify that the input tensor contains at least one element (numel() > 0) before proceeding with computations, leading to potential crashes or undefined behavior when processing empty tensors.
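The missing check can be illustrated with a small, framework-free sketch. The helpers numel and validate_log_probs_shape below are hypothetical names introduced for illustration; they mirror the semantics of the TORCH_CHECK guard added by the patch, since PyTorch's numel() is simply the product of a tensor's dimension sizes:

```python
import math

def numel(shape):
    # Number of elements in a tensor of the given shape; any zero-sized
    # dimension makes the tensor empty, just as with torch.Tensor.numel().
    return math.prod(shape)

def validate_log_probs_shape(shape):
    # Mirrors the patched check: reject an empty log_probs tensor up front.
    if numel(shape) == 0:
        raise ValueError("log_probs tensor must not be empty")

# A (T, N, C) shape with zero time steps, as a buggy data pipeline
# might produce:
try:
    validate_log_probs_shape((0, 4, 10))
except ValueError as exc:
    print(f"rejected: {exc}")  # rejected: log_probs tensor must not be empty
```

The same product-of-dimensions logic is why a single zero anywhere in the shape, not just a fully empty tensor, trips the guard.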
Attack Vector
The attack requires local access to the system running PyTorch. An attacker can exploit this vulnerability by:
- Crafting a malicious PyTorch model or script that calls torch.nn.functional.ctc_loss with an empty log_probs tensor
- Loading an untrusted model that contains code triggering this vulnerability
- Providing malformed input data that results in empty tensors being passed to the CTC loss function
The vulnerability affects both CPU and CUDA (GPU) implementations of the CTC loss function.
// Security patch for the CPU implementation - aten/src/ATen/native/LossCTC.cpp
// Source: https://github.com/timocafe/tewart-pytorch/commit/46fc5d8e360127361211cb237d5f9eef0223e567
  // the alphas from the user by only returning the loss.
  template<typename scalar_t, ScalarType target_scalar_type>
  std::tuple<Tensor, Tensor> ctc_loss_cpu_template(const Tensor& log_probs, const Tensor& targets, IntArrayRef input_lengths, IntArrayRef target_lengths, int64_t BLANK) {
+   TORCH_CHECK(log_probs.numel() > 0, "log_probs tensor must not be empty");
    // log_probs: input_len x batch_size x num_labels
    // targets [int64]: batch_size x target_length OR sum(target_lengths)
    constexpr scalar_t neginf = -std::numeric_limits<scalar_t>::infinity();
Detection Methods for CVE-2025-3730
Indicators of Compromise
- Unexpected application crashes in PyTorch-based services with stack traces referencing ctc_loss or LossCTC.cpp
- Error logs indicating tensor-related exceptions during model training or inference
- Unusual resource consumption patterns in ML pipelines using CTC loss functions
Detection Strategies
- Monitor PyTorch application logs for crashes involving the torch.nn.functional.ctc_loss function
- Implement input validation checks in application code before calling CTC loss functions
- Use static analysis tools to identify code paths that may pass unchecked tensors to vulnerable functions
- Deploy runtime monitoring for PyTorch applications to detect abnormal terminations
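The first strategy above can be sketched as a minimal log scan. The log lines and the exact message format are illustrative, not actual PyTorch output; the pattern targets the stack-trace markers named in this advisory (ctc_loss, LossCTC.cpp, LossCTC.cu):

```python
import re

# Crashes from this bug should reference the Python entry point or the
# native source files for the CPU/CUDA implementations.
CRASH_PATTERN = re.compile(r"ctc_loss|LossCTC\.(cpp|cu)")

log_lines = [
    "2025-05-01 12:00:01 INFO starting inference worker",
    "2025-05-01 12:00:09 ERROR RuntimeError in torch.nn.functional.ctc_loss",
    "2025-05-01 12:00:09 ERROR   at aten/src/ATen/native/LossCTC.cpp",
]

suspicious = [line for line in log_lines if CRASH_PATTERN.search(line)]
print(len(suspicious))  # 2
```

In a real deployment the same pattern would be fed to the log aggregator (journald, CloudWatch, Loki, etc.) rather than scanned in application code.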
Monitoring Recommendations
- Enable verbose logging for PyTorch applications in production environments
- Set up alerting for repeated crashes in ML inference services
- Monitor for attempts to load untrusted or unverified PyTorch models
- Track resource utilization patterns that may indicate denial of service attempts
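A toy sliding-window alerter for the "repeated crashes" recommendation might look like the sketch below; the window length and threshold are arbitrary example values, not tuned recommendations:

```python
from collections import deque

WINDOW_SECONDS = 300  # example: 5-minute window
MAX_CRASHES = 3       # example: alert on the 3rd crash in the window

class CrashAlerter:
    def __init__(self):
        self.events = deque()

    def record_crash(self, timestamp):
        # Keep only crash timestamps inside the sliding window, and
        # report True when the count reaches the alert threshold.
        self.events.append(timestamp)
        while self.events and timestamp - self.events[0] > WINDOW_SECONDS:
            self.events.popleft()
        return len(self.events) >= MAX_CRASHES

alerter = CrashAlerter()
print(alerter.record_crash(0))   # False
print(alerter.record_crash(10))  # False
print(alerter.record_crash(20))  # True: 3 crashes within 300 s
```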
How to Mitigate CVE-2025-3730
Immediate Actions Required
- Apply the security patch with commit hash 46fc5d8e360127361211cb237d5f9eef0223e567
- Validate all input tensors before passing them to torch.nn.functional.ctc_loss
- Review and restrict the loading of untrusted PyTorch models
- Implement input validation wrappers around CTC loss function calls
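As a hedged complement to patching, deployments can refuse to run untrusted workloads on the known-vulnerable release. Only 2.6.0 is named in this advisory, so the check below gates on exactly that version; the first fixed release should be confirmed against official PyTorch release notes. In practice the string checked would be torch.__version__:

```python
def parse_version(version):
    # Minimal "X.Y.Z[+local]" parser for illustration; production code
    # should prefer packaging.version.Version.
    return tuple(int(part) for part in version.split("+")[0].split(".")[:3])

def is_known_vulnerable(version):
    # PyTorch 2.6.0 is the only release identified as affected by
    # CVE-2025-3730 in this advisory.
    return parse_version(version) == (2, 6, 0)

print(is_known_vulnerable("2.6.0"))        # True
print(is_known_vulnerable("2.6.0+cu124"))  # True
print(is_known_vulnerable("2.7.0"))        # False
```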
Patch Information
A patch is available for this vulnerability. The fix adds proper input validation to check that the log_probs tensor is not empty before processing. The patch has been applied to both the CPU implementation (aten/src/ATen/native/LossCTC.cpp) and the CUDA implementation (aten/src/ATen/native/cuda/LossCTC.cu).
Workarounds
- Add explicit validation in application code to ensure tensors passed to ctc_loss are non-empty
- Follow PyTorch security policy recommendations by avoiding the use of unknown or untrusted models
- Implement sandboxing or isolation for environments that execute untrusted ML models
- Consider using containerization to limit the blast radius of potential DoS attacks
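One way to realize the isolation workarounds above is to run untrusted model code in a child process with hard resource caps. This POSIX-only sketch uses the standard-library resource and subprocess modules; the limit values are arbitrary examples, and the inline child command stands in for a real model-loading script:

```python
import resource
import subprocess
import sys

LIMIT_BYTES = 2 << 30  # example: 2 GiB address-space cap, tune per workload

def limit_resources():
    # Runs in the child process (via preexec_fn) before the untrusted
    # code executes: cap address space and CPU time so a crash loop or
    # memory blow-up is contained.
    resource.setrlimit(resource.RLIMIT_AS, (LIMIT_BYTES, LIMIT_BYTES))
    resource.setrlimit(resource.RLIMIT_CPU, (60, 60))

# The "-c" payload is a placeholder for invoking an untrusted
# model-loading script.
proc = subprocess.run(
    [sys.executable, "-c", "print('sandboxed child ran')"],
    preexec_fn=limit_resources,
    capture_output=True,
    text=True,
)
print(proc.stdout.strip())  # sandboxed child ran
```

If the child exceeds its limits it is killed by the kernel, leaving the parent service running, which bounds the blast radius of a DoS in the same way a container memory/CPU limit would.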
# Input validation wrapper example
import torch

def safe_ctc_loss(log_probs, targets, input_lengths, target_lengths, blank=0):
    # Validate that log_probs is non-empty before calling ctc_loss
    if log_probs.numel() == 0:
        raise ValueError("log_probs tensor must not be empty")
    return torch.nn.functional.ctc_loss(
        log_probs, targets, input_lengths, target_lengths, blank=blank
    )

