CVE-2026-34447 Overview
CVE-2026-34447 is a symlink traversal vulnerability affecting the Open Neural Network Exchange (ONNX), an open standard for machine learning interoperability. Prior to version 1.21.0, the external data loading functionality allows attackers to read files outside the model directory by exploiting symlink traversal techniques. This vulnerability falls under CWE-22 (Path Traversal) and poses significant risk to systems processing untrusted ONNX models.
Critical Impact
Attackers can leverage this symlink traversal vulnerability to read sensitive files outside the intended model directory, potentially exposing configuration files, credentials, or other sensitive data on systems that load untrusted ONNX models.
Affected Products
- Open Neural Network Exchange (ONNX) versions prior to 1.21.0
- Applications and ML frameworks that load ONNX models containing external data references
- Machine learning pipelines that load ONNX models from untrusted sources
Discovery Timeline
- 2026-04-01 - CVE-2026-34447 published to NVD
- 2026-04-01 - Last updated in NVD database
Technical Details for CVE-2026-34447
Vulnerability Analysis
This vulnerability exists in the external data loading mechanism of the ONNX framework. ONNX models can reference external data files to store large tensor weights and other binary data separately from the main model protobuf file. When loading these external data references, the ONNX library follows symlinks without proper validation, allowing an attacker to craft a malicious model that references files outside the intended model directory through symbolic links.
The impact is primarily information disclosure, as the vulnerability allows unauthorized read access to arbitrary files on the filesystem that are accessible to the process loading the ONNX model. This is particularly concerning in multi-tenant ML inference environments or any system that processes ONNX models from untrusted sources.
Root Cause
The root cause is insufficient validation of external data file paths during ONNX model loading. The library fails to properly validate that external data references remain within the model's directory boundary. When the external data loader encounters a symlink, it follows the link without checking whether the resolved path falls outside the expected model directory, enabling classic symlink traversal attacks.
Attack Vector
The attack vector is local in the sense that the attacker must be able to deliver a malicious ONNX model file to the target system. The attacker crafts a model whose external data references point to symlinks, which in turn resolve to sensitive files outside the model directory. When the victim application loads the model, the ONNX library follows the symlinks and reads the targeted files.
The attack scenario typically unfolds as follows: An attacker creates an ONNX model with external data tensors, prepares symlinks in the model directory pointing to target files such as /etc/passwd or application configuration files, and distributes this malicious model package. When the target system loads the model, the external data loading routine follows the symlinks and exposes the content of the targeted files.
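The scenario above can be reproduced with a minimal stdlib-only sketch. Here `naive_load_external_data` is a hypothetical stand-in for any loader that joins an external data reference to the model directory and opens it without symlink checks; it is not the real ONNX API.

```python
import os
import tempfile
from pathlib import Path

def naive_load_external_data(model_dir: str, location: str) -> bytes:
    """Stand-in for a vulnerable loader: joins the reference to the
    model directory and opens it without checking for symlinks."""
    return Path(model_dir, location).read_bytes()

# A "secret" file outside the model directory, and a model directory
# whose external-data entry is an attacker-planted symlink to it.
base = Path(tempfile.mkdtemp())
secret = base / "secret.txt"
secret.write_bytes(b"db_password=hunter2")

model_dir = base / "model"
model_dir.mkdir()
os.symlink(secret, model_dir / "weights.bin")  # planted symlink

# Loading the model's "external data" leaks the file outside model_dir.
leaked = naive_load_external_data(str(model_dir), "weights.bin")
print(leaked)  # b'db_password=hunter2'
```

Note that the reference string itself (`weights.bin`) looks entirely benign; the traversal happens only when the symlink is resolved at open time, which is why path-string filtering alone is not sufficient.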
Detection Methods for CVE-2026-34447
Indicators of Compromise
- ONNX model directories containing unexpected symbolic links pointing outside the model path
- File access attempts to sensitive system files from ML inference processes
- ONNX models with external data references containing suspicious path patterns (e.g., ../ sequences or absolute paths)
- Unusual file read operations originating from ONNX model loading functions
Detection Strategies
- Monitor filesystem activity from ML inference processes for access to files outside expected model directories
- Implement file integrity monitoring on ONNX model directories to detect introduction of symlinks
- Analyze incoming ONNX models for suspicious external data references before processing
- Deploy application-level logging to track which files are accessed during model loading operations
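The pre-processing analysis described above can be sketched with the standard library alone: walk the model directory flagging any symlink, and separately flag external-data location strings that are absolute or contain parent-directory components. The function names are illustrative, not part of any ONNX tooling.

```python
import os
import tempfile
from pathlib import Path, PurePosixPath

def scan_model_dir(model_dir: str) -> list[str]:
    """Flag symlinks anywhere under the model directory."""
    findings = []
    for root, dirs, files in os.walk(model_dir):
        for name in dirs + files:
            p = Path(root, name)
            if p.is_symlink():
                findings.append(f"symlink: {p} -> {os.readlink(p)}")
    return findings

def suspicious_location(location: str) -> bool:
    """Flag external-data 'location' strings that are absolute
    or contain parent-directory components."""
    p = PurePosixPath(location)
    return p.is_absolute() or ".." in p.parts

# demo: plant a symlink in a throwaway model directory
base = Path(tempfile.mkdtemp())
target = base / "outside.txt"
target.write_text("secret")
model_dir = base / "model"
model_dir.mkdir()
os.symlink(target, model_dir / "weights.bin")
findings = scan_model_dir(str(model_dir))
```

Both checks are needed: `suspicious_location` catches `../` and absolute-path references in the model protobuf, while `scan_model_dir` catches symlinks whose reference strings look benign.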
Monitoring Recommendations
- Enable audit logging for file access operations on sensitive directories by ML framework processes
- Implement network monitoring to detect exfiltration of data that may have been read via this vulnerability
- Configure alerts for symlink creation events within model storage directories
- Monitor for anomalous behavior patterns in ML inference pipelines that may indicate exploitation attempts
How to Mitigate CVE-2026-34447
Immediate Actions Required
- Immediately upgrade ONNX to version 1.21.0 or later on all systems that process ONNX models
- Audit existing ONNX model repositories for presence of symlinks or suspicious external data references
- Implement strict input validation for ONNX models received from external or untrusted sources
- Consider running ONNX model loading in sandboxed environments with restricted filesystem access
Patch Information
This vulnerability has been patched in ONNX version 1.21.0. Organizations should upgrade to this version or later. The fix implements proper path validation for external data references, ensuring that resolved file paths remain within the model directory boundary. For detailed information about the security fix, refer to the GitHub Security Advisory.
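The containment check the fix describes can be approximated as follows; this is a sketch of the general pattern, not the actual patched code. The key detail is resolving the candidate path (which expands symlinks) before testing that it stays under the model directory.

```python
import tempfile
from pathlib import Path

def safe_external_data_path(model_dir: str, location: str) -> Path:
    """Resolve an external-data reference and reject any result
    (symlink or otherwise) that escapes the model directory."""
    base = Path(model_dir).resolve()
    candidate = (base / location).resolve()  # also resolves symlinks
    if not candidate.is_relative_to(base):   # Python 3.9+
        raise ValueError(f"external data escapes model dir: {location}")
    return candidate

# demo against a throwaway directory
base = tempfile.mkdtemp()

def escapes(location: str) -> bool:
    try:
        safe_external_data_path(base, location)
        return False
    except ValueError:
        return True
```

Because `resolve()` expands symlinks before the containment test, a symlink that points outside the directory is rejected even though its literal path sits inside the model directory.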
Workarounds
- Restrict ONNX model loading to only process models from trusted, verified sources until patching is complete
- Deploy filesystem access controls to prevent ML inference processes from reading sensitive files
- Use containerization with restricted filesystem mounts to limit potential exposure from symlink traversal
- Implement pre-processing validation scripts that scan ONNX model packages for symlinks before allowing them into production pipelines
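As an additional application-level workaround on POSIX systems (an assumption for illustration, not a feature of ONNX itself), files can be opened with `os.O_NOFOLLOW`, which makes the open fail with `OSError` if the final path component is a symlink:

```python
import os
import tempfile
from pathlib import Path

def open_no_follow(path: str) -> bytes:
    """Read a file, refusing to follow a symlink in the final path
    component (POSIX O_NOFOLLOW raises OSError on a symlink)."""
    fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)
    with os.fdopen(fd, "rb") as f:
        return f.read()

# demo: a regular file reads fine, a symlink is rejected
base = Path(tempfile.mkdtemp())
real = base / "data.bin"
real.write_bytes(b"ok")
link = base / "link.bin"
os.symlink(real, link)

def rejected(path: Path) -> bool:
    try:
        open_no_follow(str(path))
        return False
    except OSError:
        return True
```

`O_NOFOLLOW` only guards the final path component; symlinked intermediate directories still resolve, so this should be combined with a containment check on the fully resolved path rather than used alone.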
For production environments where immediate patching is not feasible, consider implementing filesystem sandboxing for ONNX model loading operations. This can be achieved through container isolation with explicit volume mounts limited to required model directories, or through OS-level mechanisms such as AppArmor or SELinux policies that restrict file access from ML processes.
Disclaimer: This content was generated using AI. While we strive for accuracy, please verify critical information with official sources.


