CVE-2026-34445 Overview
A critical attribute injection vulnerability has been identified in the Open Neural Network Exchange (ONNX) framework, an open standard for machine learning interoperability. Prior to version 1.21.0, the ExternalDataInfo class in ONNX used Python's setattr() function to load metadata (such as file paths or data lengths) directly from an ONNX model file without validating whether the keys in the file were legitimate attributes. This improper input validation allows an attacker to craft a malicious model file that overwrites internal object properties, potentially leading to object state corruption and denial of service conditions.
Critical Impact
Attackers can exploit this vulnerability by distributing malicious ONNX model files that corrupt internal object states, leading to denial of service or potentially more severe consequences when machine learning applications load untrusted models.
Affected Products
- Open Neural Network Exchange (ONNX) versions prior to 1.21.0
- Applications and ML pipelines that load external ONNX model files
- Machine learning frameworks integrating ONNX model loading functionality
Discovery Timeline
- April 1, 2026 - CVE-2026-34445 published to NVD
- April 1, 2026 - Last updated in NVD database
Technical Details for CVE-2026-34445
Vulnerability Analysis
This vulnerability stems from CWE-20 (Improper Input Validation) in the ONNX library's model loading mechanism. The ExternalDataInfo class is responsible for parsing metadata from ONNX model files when handling external data references. When loading a model, the class iterates through key-value pairs in the file and uses Python's setattr() function to dynamically assign these values to object attributes without any validation of the key names.
An attacker can exploit this behavior by crafting a malicious ONNX model file containing specially crafted metadata keys that correspond to internal object properties or Python dunder (double underscore) methods. When the vulnerable ONNX library loads this malicious model, the attacker-controlled keys overwrite legitimate internal attributes, corrupting the object's state and potentially disrupting normal program execution.
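The vulnerable pattern can be illustrated with a minimal sketch (this is not ONNX's actual ExternalDataInfo implementation; the class and entries are hypothetical, but the unchecked setattr() loop is the behavior described above):

```python
# Minimal sketch of the vulnerable pattern (illustrative only -- NOT the
# actual ONNX ExternalDataInfo code):
class ExternalDataInfoSketch:
    def __init__(self, entries):
        self.location = None
        self.offset = None
        self.length = None
        for key, value in entries:
            # No allowlist: any key present in the model file becomes an
            # attribute assignment, including Python internals.
            setattr(self, key, value)

# Legitimate metadata loads as intended...
info = ExternalDataInfoSketch([("location", "weights.bin"), ("length", 4096)])
print(info.location)  # -> weights.bin

# ...but an attacker-chosen key can corrupt the object or crash the loader.
try:
    ExternalDataInfoSketch([("__class__", "not-a-class")])
except TypeError as exc:
    print(f"loader crashed: {exc}")
```

Assigning to a dunder such as `__class__` here raises a TypeError during parsing, which is exactly the kind of state corruption and denial of service the advisory describes.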
Root Cause
The root cause is the absence of input validation when dynamically setting object attributes in the ExternalDataInfo class. Python's setattr() function allows arbitrary attribute assignment, and without an allowlist of permitted attribute names, malicious input can manipulate object internals. The fix introduced in version 1.21.0 adds proper validation through a new _validate_external_data_file_bounds() method that sanitizes and validates external data before processing.
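In general terms, the allowlist approach looks like the sketch below. This is illustrative only: the actual 1.21.0 fix is the _validate_external_data_file_bounds() method and differs in detail, and the key names here follow those defined for external data in the ONNX specification:

```python
# Sketch of allowlist-based attribute assignment (illustrative; the real
# 1.21.0 fix, _validate_external_data_file_bounds(), differs in detail).
# Allowed keys mirror those defined for external data in the ONNX spec.
ALLOWED_KEYS = {"location", "offset", "length", "checksum", "basepath"}

class SafeExternalDataInfo:
    def __init__(self, entries):
        self.location = None
        self.offset = None
        self.length = None
        for key, value in entries:
            if key not in ALLOWED_KEYS:
                # Fail closed on anything outside the documented key set.
                raise ValueError(f"rejected unexpected metadata key: {key!r}")
            setattr(self, key, value)
```

Rejecting unknown keys outright fails closed: a malicious model aborts loading with a clear error instead of silently overwriting object internals.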
Attack Vector
The attack vector is network-based, requiring no authentication or user interaction. An attacker can distribute a maliciously crafted ONNX model file through various channels such as model repositories, supply chain attacks on ML pipelines, or direct delivery to applications that accept user-uploaded models. When the target application loads the malicious model using a vulnerable version of ONNX, the attribute injection payload executes automatically during the model parsing phase.
 open_flags |= os.O_NOFOLLOW
 fd = os.open(external_data_file_path, open_flags)
 with os.fdopen(fd, "rb") as data_file:
-    if info.offset is not None:
-        data_file.seek(info.offset)
-
-    raw_data = (
-        data_file.read(info.length)
-        if info.length is not None
-        else data_file.read()
+    raw_data = ext_data._validate_external_data_file_bounds(
+        data_file, info, tensor.name
     )
 dtype = onnx.helper.tensor_dtype_to_np_dtype(tensor.data_type)
Source: GitHub Commit e30c6935d67cc3eca2fa284e37248e7c0036c46b
Detection Methods for CVE-2026-34445
Indicators of Compromise
- Unusual ONNX model files containing non-standard metadata keys or Python dunder attribute names
- Application crashes or unexpected behavior when loading ONNX models from untrusted sources
- Error logs indicating attribute assignment failures or type mismatches in ONNX model loading operations
Detection Strategies
- Implement file integrity monitoring for ONNX model files in production ML pipelines
- Deploy application logging to capture model loading events and any exceptions during the ExternalDataInfo initialization phase
- Use static analysis tools to scan ONNX model files for suspicious metadata key patterns before loading
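The metadata-scanning strategy above can be sketched as a pure-Python check over the (key, value) pairs extracted from a model's external-data entries. Extraction of the pairs from the model file is left to your ONNX tooling; the allowlist follows the keys named in the ONNX external-data specification:

```python
# Sketch: flag external-data metadata keys that fall outside the ONNX spec's
# documented key set. Input is a list of (key, value) pairs extracted from a
# model's external_data entries by your own tooling.
ALLOWED_EXTERNAL_DATA_KEYS = {"location", "offset", "length", "checksum", "basepath"}

def suspicious_keys(entries):
    """Return metadata keys that are not in the external-data allowlist."""
    return [key for key, _ in entries if key not in ALLOWED_EXTERNAL_DATA_KEYS]

entries = [("location", "weights.bin"), ("__dict__", "{}"), ("offset", "0")]
print(suspicious_keys(entries))  # -> ['__dict__']
```

A non-empty result is grounds to quarantine the model file before any vulnerable loader touches it.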
Monitoring Recommendations
- Monitor ML application logs for unexpected exceptions during model deserialization
- Track and alert on ONNX library version usage across your infrastructure to identify vulnerable deployments
- Implement network monitoring for downloads of ONNX model files from untrusted sources
How to Mitigate CVE-2026-34445
Immediate Actions Required
- Upgrade ONNX to version 1.21.0 or later immediately across all affected systems
- Audit your ML pipelines to identify all locations where ONNX models are loaded from external or untrusted sources
- Implement model validation and integrity checks before loading any ONNX models in production environments
Patch Information
This vulnerability has been patched in ONNX version 1.21.0. The fix introduces the _validate_external_data_file_bounds() method in onnx/model_container.py which properly validates external data file bounds and sanitizes metadata before attribute assignment. For detailed patch information, refer to the GitHub Security Advisory GHSA-538c-55jv-c5g9 and the associated pull request.
Workarounds
- Restrict ONNX model loading to trusted, verified sources only until patching is complete
- Implement model file validation scripts that check for suspicious metadata keys before loading
- Consider sandboxing ML model loading operations in isolated environments with limited privileges
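The sandboxing workaround can be sketched as loading untrusted models in a short-lived child process with a memory cap, so a corrupted load cannot take down the main service. This is a sketch under assumptions: the function name, timeout, and 2 GiB cap are placeholders, and the resource module is Unix-only:

```python
# Sketch: load an untrusted ONNX model in a disposable, memory-capped child
# process. Names and limits are illustrative placeholders.
import subprocess
import sys
import textwrap

def load_model_sandboxed(model_path, timeout=30):
    """Return True only if a child process loads the model cleanly.

    A crash, hang, or memory blowup in the child cannot corrupt the
    parent's state. resource.setrlimit is Unix-only.
    """
    child_code = textwrap.dedent(f"""
        import resource
        # Cap address space at 2 GiB so a malicious model cannot exhaust RAM.
        resource.setrlimit(resource.RLIMIT_AS, (2**31, 2**31))
        import onnx
        onnx.load({model_path!r})
    """)
    try:
        result = subprocess.run(
            [sys.executable, "-c", child_code],
            capture_output=True,
            timeout=timeout,
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False
```

In a production pipeline you would also drop privileges and restrict filesystem access in the child; the memory cap and timeout shown here are only the simplest isolation layer.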
# Upgrade ONNX to the patched version (quote the spec so the shell
# does not treat ">=" as a redirect)
pip install --upgrade "onnx>=1.21.0"
# Verify the installed version
pip show onnx | grep Version
Disclaimer: This content was generated using AI. While we strive for accuracy, please verify critical information with official sources.


