CVE-2026-2473 Overview
CVE-2026-2473 is a high-severity vulnerability affecting Google Cloud Vertex AI Experiments. The flaw stems from predictable bucket naming patterns that allow unauthenticated remote attackers to execute cross-tenant attacks through a technique known as "Bucket Squatting." By pre-creating Cloud Storage buckets with predictable names before legitimate tenants provision their resources, attackers can achieve remote code execution, model theft, and model poisoning across different Google Cloud tenants.
Critical Impact
This vulnerability enables unauthenticated attackers to compromise machine learning workloads across tenant boundaries, potentially leading to intellectual property theft, supply chain attacks through model poisoning, and arbitrary code execution within victim environments.
Affected Products
- Google Cloud Vertex AI (google-cloud-aiplatform SDK) versions 1.21.0 through 1.132.x (fixed in 1.133.0)
- Vertex AI Experiments feature utilizing Cloud Storage buckets
- Google Cloud Platform environments running affected Vertex AI versions
Discovery Timeline
- 2026-02-20 - CVE-2026-2473 published to NVD
- 2026-02-23 - Last updated in NVD database
Technical Details for CVE-2026-2473
Vulnerability Analysis
This vulnerability is classified under CWE-340 (Generation of Predictable Numbers or Identifiers). The root issue lies in how Vertex AI Experiments generates Cloud Storage bucket names using a deterministic algorithm. When users create experiments, the service automatically provisions storage buckets following a predictable naming convention that can be reverse-engineered by attackers.
The attack requires network access and some user interaction but does not require authentication, making it exploitable by external threat actors. Successful exploitation results in complete compromise of confidentiality, integrity, and availability of the affected machine learning resources.
Root Cause
The vulnerability originates from the use of predictable identifiers when generating Cloud Storage bucket names for Vertex AI Experiments. The naming scheme incorporates predictable elements such as project identifiers, region codes, and sequential or timestamp-based suffixes that can be anticipated by attackers. This design flaw violates secure randomness principles for resource naming in multi-tenant cloud environments.
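To illustrate why deterministic naming is exploitable, the sketch below enumerates candidate bucket names from inputs an outsider can obtain. The specific pattern shown (project ID, region, date suffix) is hypothetical; the actual scheme used by Vertex AI Experiments is not published, but any scheme built from public or guessable inputs has the same weakness.

```python
from datetime import date, timedelta

def predict_bucket_names(project_id: str, region: str, days_ahead: int = 7):
    """Enumerate candidate bucket names from a hypothetical deterministic
    pattern: <project>-<region>-vertex-exp-<YYYYMMDD>. Because every input
    is public or guessable, an attacker can precompute the full candidate set."""
    today = date.today()
    return [
        f"{project_id}-{region}-vertex-exp-{(today + timedelta(days=d)):%Y%m%d}"
        for d in range(days_ahead)
    ]

# Everything needed to run this is knowable without touching the victim's project.
for name in predict_bucket_names("victim-proj", "us-central1", days_ahead=3):
    print(name)
```

With the candidate set in hand, the attacker only needs standard Cloud Storage APIs to register the names first.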
Attack Vector
The attack leverages network-accessible Cloud Storage APIs to pre-register buckets with names that Vertex AI will attempt to use for legitimate tenant workloads. The attack flow is as follows:
1. Attacker analyzes the bucket naming pattern used by Vertex AI Experiments
2. Attacker pre-creates buckets matching predicted names across target regions
3. Victim tenant initiates a Vertex AI Experiments workflow
4. Vertex AI attempts to use the attacker-controlled bucket
5. Attacker gains access to training data and models, or injects malicious code/models
This "Bucket Squatting" technique enables cross-tenant attacks without requiring any authentication to the victim's Google Cloud environment. The attacker can steal proprietary ML models, poison training data or model artifacts, and potentially achieve code execution within the victim's compute environment when malicious artifacts are loaded.
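The flow above can be sketched as a toy simulation of the property that makes squatting possible: Cloud Storage bucket names live in a single flat, globally unique namespace, so whoever registers a name first owns it. All names and classes here are illustrative, not real API calls.

```python
class BucketNamespace:
    """Toy model of Cloud Storage's flat, global bucket namespace."""
    def __init__(self):
        self._owners = {}  # bucket name -> owning principal

    def create(self, name: str, owner: str) -> bool:
        # Creation fails if any tenant anywhere already holds the name.
        if name in self._owners:
            return False
        self._owners[name] = owner
        return True

    def owner_of(self, name: str) -> str:
        return self._owners[name]

ns = BucketNamespace()
predicted = "victim-proj-us-central1-vertex-exp-20260220"  # hypothetical pattern

# The attacker predicts the name and registers ("squats on") it first.
ns.create(predicted, owner="attacker")

# The victim's Vertex AI workflow later tries to provision the same name;
# creation fails, and workloads fall through to the attacker-held bucket.
created = ns.create(predicted, owner="victim")
print(created, ns.owner_of(predicted))  # False attacker
```

The dangerous behavior is the fall-through: if the service treats "bucket already exists" as "bucket already provisioned," the victim's data flows into storage the attacker controls.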
Detection Methods for CVE-2026-2473
Indicators of Compromise
- Unexpected Cloud Storage bucket ownership or permissions on Vertex AI-related buckets
- Bucket creation timestamps that predate the associated Vertex AI Experiment creation
- Anomalous bucket access patterns from external principals or unfamiliar service accounts
- Model artifacts or training data showing signs of tampering or unexpected modifications
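The second indicator above (a bucket created before its experiment) can be checked mechanically. A minimal sketch of the comparison, using stdlib timestamps only; in a real audit the bucket's creation time would come from the Cloud Storage API and the experiment's create time from the Vertex AI API, which are assumptions not shown here.

```python
from datetime import datetime, timezone

def bucket_predates_experiment(bucket_created: datetime,
                               experiment_created: datetime) -> bool:
    """Flag a squatting IoC: a legitimately auto-provisioned bucket should
    never exist before the experiment whose creation triggered it."""
    return bucket_created < experiment_created

# Illustrative timestamps only.
bucket_ts = datetime(2026, 2, 18, 9, 0, tzinfo=timezone.utc)
experiment_ts = datetime(2026, 2, 20, 14, 30, tzinfo=timezone.utc)
if bucket_predates_experiment(bucket_ts, experiment_ts):
    print("IoC: bucket predates experiment -- investigate ownership")
```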
Detection Strategies
- Audit Cloud Storage bucket ownership and creation metadata for all Vertex AI Experiments
- Monitor for bucket access from principals outside the expected organization
- Implement continuous validation of model checksums and data integrity
- Review IAM policies on storage buckets for unauthorized access grants
Monitoring Recommendations
- Enable Cloud Audit Logs for all Cloud Storage operations in Vertex AI workloads
- Configure alerting for bucket creation events with suspicious naming patterns
- Monitor for cross-organization access attempts to ML training buckets
- Implement automated integrity checking for model artifacts before deployment
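Automated integrity checking of model artifacts can be as simple as pinning a SHA-256 digest when an artifact is produced and re-verifying it before deployment. A stdlib-only sketch; how and where you store the pinned digests is up to your pipeline.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so large model artifacts
    never need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        while chunk := fh.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path, pinned_digest: str) -> bool:
    """Refuse deployment if the artifact no longer matches the digest
    recorded at training time -- a tampering/poisoning signal."""
    return sha256_of(path) == pinned_digest
```

Gating deployment on `verify_artifact` turns a silent model swap in a squatted bucket into a hard, auditable failure.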
How to Mitigate CVE-2026-2473
Immediate Actions Required
- Verify your Google Cloud Vertex AI version is 1.133.0 or later
- Audit existing Vertex AI Experiments for bucket ownership anomalies
- Review model artifacts and training data for signs of tampering
- Enable Organization Policy constraints to restrict bucket creation to trusted principals
Patch Information
Google has addressed this vulnerability in Vertex AI version 1.133.0. According to the Google Cloud Security Bulletin, the underlying issue was also patched server-side, so no customer action is required beyond updating clients to 1.133.0 or later. The fix replaces the deterministic naming scheme with cryptographically secure random bucket names, preventing prediction and pre-creation attacks.
Workarounds
- If using affected versions, manually pre-create buckets with secure random names before initializing Vertex AI Experiments
- Implement Organization Policy constraints (constraints/storage.uniformBucketLevelAccess) to enforce access controls
- Use VPC Service Controls to create security perimeters around Vertex AI resources
- Consider temporarily pausing new Vertex AI Experiments creation until patch verification is complete
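The first workaround (pre-creating buckets under unguessable names) hinges on using a cryptographically secure random suffix rather than timestamps or counters. A minimal sketch using the stdlib `secrets` module; the command that would actually create the bucket is left as a comment, since it depends on your environment.

```python
import secrets

def secure_bucket_name(project_id: str, purpose: str = "vertex-exp") -> str:
    """Append a 128-bit random suffix so the name cannot be predicted.
    Bucket names are limited to 63 lowercase letters, digits, and dashes."""
    suffix = secrets.token_hex(16)  # 32 hex chars, ~128 bits of entropy
    name = f"{project_id}-{purpose}-{suffix}"
    if len(name) > 63:
        raise ValueError("bucket name exceeds the 63-character limit")
    return name

print(secure_bucket_name("my-proj"))
# Create the bucket yourself, then point the experiment at it, e.g.:
#   gcloud storage buckets create gs://<name> --location=us-central1
```

Because the suffix comes from a CSPRNG, an attacker cannot enumerate candidate names ahead of time the way the vulnerable deterministic scheme allows.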
# Verify Vertex AI SDK version
pip show google-cloud-aiplatform | grep Version
# Audit names and creation times of Vertex AI-related buckets
gcloud storage buckets list --filter="name~vertex" --format="table(name,creation_time,location)"
# Enable uniform bucket-level access for existing buckets
gcloud storage buckets update gs://YOUR_BUCKET --uniform-bucket-level-access