The Convergence of Cloud Secrets & AI Risk

In 2025, the enterprise risk landscape experienced a paradigm shift: the adoption of AI and LLMs officially becoming the primary driver of cloud risk. Today, almost 88% of organizations now leverage AI in at least one business function. With this level of integration, the risk of AI is now outpacing traditional security guardrails, culminating in a highly complex and interconnected attack surface.

SentinelOne’s® new AI and Cloud Verified Exploit Paths and Secrets Scanning Report examines this evolving threatscape and draws on telemetry from over 11,000 anonymized customer environments to offer deeper visibility into how threat actors are actively exploiting modern cloud and AI infrastructures.

An Explosion of AI-Specific Secrets and Shadow AI

A primary finding of the 2026 report is the rising proliferation of AI-specific credentials. The data indicates that AI-related secrets — such as OpenAI API Keys, Azure OpenAI API Keys, and others — increased by approximately 140% in a span of one year. This growth correlates directly with the rapid embedding of AI technologies into customer support systems, internal tooling, financial platforms, and product experiences.

Ubiquitous deployment has generated a widespread organizational pattern known as “shadow AI” – the unsanctioned use of AI tools in an environment without formal IT approval or security oversight. In practice, this occurs when developers or internal teams utilize unmanaged or personal LLM keys to process corporate data outside of sanctioned IT or security channels. Since these AI integrations span numerous internal applications, the same API keys are frequently duplicated and stored within code repositories, SaaS configurations, and development scripts. Compounding this, these credentials are often implemented without proper access controls or routine rotation schedules.

The sprawl of these credentials renders them difficult to track via standard secrets management protocols, establishing a requirement for more centralized governance over how AI keys are issued and utilized.

Distinct Risk Vectors of Unmanaged AI Credentials

Unlike traditional cloud credentials that primarily facilitate resource manipulation, the compromise of AI keys introduces unique risk vectors. AI services frequently operate at the intersection of various enterprise systems, including CRM platforms, ticketing systems, and analytics tools, which means a single compromised LLM API key can provide an attacker with broad visibility into diverse datasets. The risks associated are categorized with exposed AI keys into two primary areas:

  • Data exposure and leakage: Unauthorized access via AI keys can expose sensitive or proprietary datasets processed by the models, embedded business logic, and internal user prompts and outputs. This enables attackers to harvest sensitive corporate conversations at scale.
  • Prompt injection and data poisoning: Unmanaged AI keys allow threat actors to actively manipulate AI models. Through prompt injection, an attacker can influence model behavior to exfiltrate data or bypass established security controls. Additionally, attackers can execute data poisoning by injecting misleading or malicious data into contextual corpora or fine-tuning datasets, which degrades the model’s integrity and reliability over time.

The Broadening Scope of Traditional Cloud Secrets

While AI credentials represent a novel attack surface, the traditional cloud secrets landscape has concurrently grown more complex. In 2025, organizations exposed approximately twice as many types of critical secrets as they did in 2024. This diversification spans AI platforms, cloud providers, SaaS services, and payment processors, pointing to how a single compromise can result in a broader blast radius across revenue-generating systems and infrastructure.

High-privilege cloud provider keys associated with AWS, Azure, and GCP remain the primary anchor of critical risk. The exposure of these keys can facilitate complete account takeover, infrastructure manipulation, and large-scale data exfiltration. As well, the exposure of payment gateway keys, such as those for Stripe and Razorpay, expands the potential damage by putting Personally Identifiable Information (PII) and financial data at risk, enabling the direct abuse of payment workflows.

Repository and CI/CD tokens also introduce supply chain risks, where high-severity credentials like a GITHUB_TOKEN can grant attackers direct access to deployment pipelines and source code, allowing a localized leak to escalate into a systemic infrastructure incident. From a collective standpoint, secrets exposure is exponentially spanning payments, coding, and software development workflows, making risk an interconnected and complex challenge.

Verified Exploit Paths: The Persistence of Legacy Vulnerabilities

To evaluate how these exposed secrets translate into practical risks, the SentinelOne researchers leveraged the Offensive Security Engine (OSE)™ to generate Verified Exploit Paths™. This technology analyzes misconfigurations, vulnerabilities, and exposed secrets in context to determine realistic exploitability.

The telemetry demonstrates that attackers generally do not rely on highly complex, theoretical attack chains. Instead, threat actors consistently exploit recurring entry points, specifically targeting misconfigured external services and widely abused Common Vulnerabilities and Exposures (CVEs). Notably, legacy vulnerabilities remain highly prevalent across customer environments and serve as reliable initial access points. The top verified exploit paths continue to involve older, critical CVEs, including:

Since these vulnerabilities are public and well-documented, threat actors possess proven techniques and automated tooling to exploit them whenever they persist in production environments. Once initial access is achieved through these legacy vulnerabilities, attackers routinely follow reachable secrets to pivot into additional services, such as utilizing an exposed key found in a cloud bucket to access an AI assistant, and subsequently, the customer data it processes.

Strategic Recommendations for Security Leaders

Addressing the interconnected risks of AI integration and cloud secrets requires a structured, objective approach to security architecture. The report outlines several concrete capabilities and practices including:

  • Continuous Surface Monitoring: Organizations must regularly inventory internet-facing assets, databases, and key cloud services, ensuring any configuration changes are immediately reflected in security posture assessments.
  • DevSecOps Automation: Security controls must be embedded directly into CI/CD pipelines and developer workflows. Organizations should automate the scanning of exposed secrets and trigger safe remediation actions, such as access revocation or key rotation.
  • Governance of AI Credentials: AI keys must be classified and treated as high-value credentials. Organizations should mandate the use of centrally managed AI keys rather than personal credentials, enforce least-privilege access, implement regular rotation schedules, and continuously monitor for shadow AI usage or abnormal access patterns.

Conclusion

As AI systems are increasingly built atop existing cloud, payment, and CI/CD platforms, weaknesses in traditional credentials inevitably become weaknesses in the AI infrastructures that rely upon them. The full report provides complete datasets and comprehensive exploit path models allowing today’s security teams to align their internal security policies with the realities of current threat actor behaviors. Learn more about the objective metrics behind the latest wave of credential exposure and vulnerability exploitation to establish more resilient and fully-controlled infrastructure architectures.

Third-Party Trademark Disclaimer:

All third-party product names, logos, and brands mentioned in this publication are the property of their respective owners and are for identification purposes only. Use of these names, logos, and brands does not imply affiliation, endorsement, sponsorship, or association with the third-party.

Expose the AI & Cloud Secrets That Put Your Data & Systems at Risk
This report draws on 11K+ customer environments. It shows how AI and cloud adoption are increasing secrets exposure and putting data at risk.