AI’s Double Edge: How AI Expands the Attack Surface & Empowers Defenders

Recently, SentinelOne published two reports highlighting each side of the cloud security challenge:

  • The Cloud Security Survey Report presents insights from 400 cybersecurity managers and practitioners covering current cloud security operations, responsibilities, perceptions of technologies, and future investment plans.
  • The Cloud Security Risk Report details five emerging risk themes for 2025 with in-depth examples of attacks leveraging risks like cloud credential theft, lateral movement, vulnerable cloud storage, supply chain risks, and cloud AI services.

One shared thread across both reports is the notion of how AI has emerged as a double-edged sword, presenting risk and opportunity. Regarding risk, AI enables unprecedented functionality, but its elevated access to data and critical position in increasing business applications combined with its relative newness makes it an attractive target for adversaries. Securing AI should be a top priority for organizations actively building AI-powered services.

In terms of opportunity, many AI features seem match-made for security use cases. Context given and decisions made at machine speed, the ability to query against expert databases, and the ability to translate plain language into structured queries to name are just a few of the benefits organizations can reap from AI. AI is already proving to be an indispensable tool for defenders, significantly improving the speed, accuracy, and overall effectiveness of cloud security operations. As such, many survey responders view AI as the opportunity to balance the asymmetry of the cloud security challenge.

This blog post examines both Risk and Opportunity and the areas in which they intersect.

The Expanding AI Attack Surface

The rapid evolution and adoption of AI represent a new attack surface for threat actors. From exploiting misconfigured cloud AI services to compromising tools used to build AI applications to targeting the underlying infrastructure. While AI as a threat surface is relatively new, many risks will appear familiar, as the same challenges reappear — misconfigurations, vulnerabilities, access management, and supply chain risk. In fact, there are elements of AI-security that at this stage would still be considered nascent.

Old Threats Made New When Targeting AI

‘Dependency confusion’, wherein a malicious tool is mistaken for a trusted tool due to it having the same name, is an old threat. It has not been a credible risk for some time as most DevOps pipelines do not enable packages and/or tooling to be so easily swapped. Open-source package market places also no longer allow for the same name to be used. However, this old risk has become new again in the context of Model Context Protocol.

Model Context Protocol (MCP) enables LLMs to interact with external systems and allows you to build API enabled ‘jobs’. For example, a user can ask ChatGPT to set up a reoccurring task to check their Gmail account for emails from an employer and mark them as high urgency. This is then connected to another job that might then send the user SMS notifications.

While there is real business value in connecting external systems and data to LLMs, this opportunity is paired with risk. MCP constantly updates its context and this includes its range of tools. Currently, it uses the latest tool pulled into its context, which is where the dependency confusion risk returns. If a malicious tool with the same name is uploaded, that tool will be used in earnest. Other older threats have slight modifications to them. Similar to dependency confusion above, typosquatting has returned in a new form. Typosquatting describes how a threat actor deliberately mimics or makes a slight error to a name, in the hope that an unsuspecting user will click on the bad domain.

With the rise of LLM-assisted code development, the threat of typosquatting is reborn into what is now known as “vibe coding”. Though there may be certain gains in acceleration to be found from vibe coding, there is, unfortunately, an added security risk. LLMs have been seen hallucinating slight mistakes in names of open-source packages — a typosquatting phenomenon known as slopsquatting. From a threat actor perspective, there is an opportunity to look for common AI-hallucinated package names and upload a malicious package with the same name, in the hopes that other developers using the same LLM will generate the same incorrect package names, leading to downloads.

One researcher proved this concept with ChatGPT and the popular package Huggingface. The researcher created a dummy package with a hallucinated name to monitor requests from other users’ code that might use the same squat name, and it received more than 30,000 downloads from January to March of this year.

Other Familiar Threat Vectors

The first blog we published in this three-part series showed how misconfigurations and compromised credentials, despite being arguably basic attack vectors, continue to be the top two risks cloud security teams face due to increasingly sophisticated attacks. This theme holds true when looking at the AI attack surface.

Leaked Credentials

In addition to the risk of slopsquatting, developers leveraging LLMs to help generate code should consider the safety implications concerning leaked credentials before shipping to production. In a recent study across 20,000 repositories enabled with Copilot, the secret leakage percentage was found to be 39% than across all other repositories (6.4% leaked with Copilot vs 4.6% as an industry wide statistic). Additionally, a small scale study of the prompt-to-webapp business, Lovable, found that nearly half of the 2,000 domains analyzed were found to be leaking JSON Web Token, API Keys and cloud credentials.

An easy solve here might be to include safety instructions alongside the prompts. After all, in these instances the LLM is performing its primary role of developing code, it just has not been asked to keep in mind the extra consideration that the code be secure.

Misconfigurations

AI services provided by cloud providers like AWS SageMaker, Azure OpenAI, and Google Vertex AI are similar to cloud services like virtual machines or application services. Each resource could potentially be targeted by threat actors and exploited to compromise that service or gain access to the broader cloud environment. The key difference is that AI services are new and it will take time to ensure that default roles, permissions, and configurations aren’t overly permissive or otherwise grant access to sensitive actions.

There are a few scenarios relating to SageMaker detailed in the Risk Report. For example, when a SageMaker Studio notebook is launched, its user profile is created with a default AWS role. If attackers gain access to SageMaker Studio notebooks, they can leverage default permissions to retrieve metadata about existing tables and databases, delete original data, and replace it with new, misleading, or malicious data, compromising model accuracy and leading to faulty business decisions, an approach known as glue data poisoning. Another approach highlighted in the report is using secrets managers to cross domain access by enumerating and retrieving secrets from unrelated SageMaker domains or any secrets tagged with SageMaker=true, enabling privilege escalation, lateral movement, and data exfiltration across the AWS environment.

AI-Powered Threat Vectors

Also highlighted in our first blog was the resurgence of infostealers as they adapt to target cloud environments. Infostealers are malicious software designed to discreetly gather information from a target system by capturing keystrokes, extracting stored passwords, or scanning for sensitive files on the system and then sending the stolen information back to an attacker-controlled server.

While infostealers aren’t new, attackers are using AI to make infostealers more powerful, from synthesizing and enriching stolen information to enabling infostealers to autonomously adapt behavior during an attack. SentinelOne researchers uncovered Predator AI, a cloud-based infostealer that integrates with ChatGPT to automated data enrichment and add context to scanner results.

A Final Note on AI-Based Threats

This trend of increasing threats from existing tactics is something cloud security leaders are well aware of. When asked about their level of concern for specific cloud security risks and threats, security managers and practitioners indicated that their level of concern increased for every single threat category compared to last year’s survey. What is even more interesting is that the threat categories that increased the most, including accidental exposure of credentials, cryptomining and other cooption of cloud resources, and account hijacking, are threats that are obviously made more effective with AI.

Despite the new attack surfaces and sophistication of cloud-based attacks stemming from AI technology, AI is equally, if not more so, improving the critical tools defenders use and helping organizations amplify the human power behind their cloud security operations.

AI Security and Posture Management (AI-SPM)

AI-SPM is quickly becoming a critical tool to help defend against these attacks on AI attack surfaces. AI-SPM helps safeguard AI models, data, and infrastructure in cloud environments by automating the inventory of AI infrastructure and services, detecting AI-native misconfigurations, and visualizing attack paths for AI workloads. It pairs with Cloud Infrastructure Entitlement Management (CIEM) that monitors the users, roles and permissions interacting with cloud-native AI services.

Cloud security leaders emphatically agree they will benefit from AI in cloud security solutions — only 1.8% of managers and leaders surveyed said they do not expect to experience benefits from AI in cloud security solutions. The industry largely expects AI to assist.

Surprisingly, there is low recognition of the need for AI-SPM tools. When asked which cloud security technology is most important for defending their cloud environment, AI-SPM didn’t make the top 10 list. Acknowledging the impact of AI in cloud security but not prioritizing tools that specifically help with AI security is an interesting juxtaposition, highlighting the need for a more adaptive approach by security teams — something we unpack at the end of this blog.

AI Improving Cloud Security Tools

Effectively detecting threats in real time and proactively managing vulnerabilities to reduce your attack surface are arguably the two most important cloud security capabilities. When asked about the most impactful benefits of embedding AI in cloud security tools, more than half of respondents placed detecting attacks faster (51.8%) and better analyzing and scoring risks (50.3%) in their top five.

In the same vein, when asked which cloud security capabilities they had most confidence in for their organization, the top two capabilities and the two capabilities with the most improvement from last year were “Threat detection” (4.25 out of 5) and “Vulnerability scanning and assessment” (4.20 out of 5). All of this is a strong signal that AI is finally having a measurable impact on the speed, reach, and accuracy of threat detection, vulnerability scanning, and other cloud security technologies.

There is a similar impact on tools for managing and prioritizing cloud security alerts. More than half of organizations (53%) find that the majority of their alerts are false positives. While this is a harsh reality, the proliferation of false positives has pushed cloud security managers and practitioners to place incredible importance on having tools that help them prioritize alerts. Specifically, when asked how important having evidence of exploitability is for prioritizing alerts and closing high-priority items, 9 out of 10 (89.4%) respondents stated evidence of exploitability is either very important or extremely important.

Not only can AI help human analysts sift through alerts and reduce the volume of false positives, AI is also a fundamental ingredient in tools that provide evidence of exploitability through the application of advanced algorithms that analyze vast amounts of security data to identify patterns and prioritize risks.

AI Force Multiplying Cloud Security Operations

When asked how AI will impact respondent’s cloud security capabilities, their top expected benefits centered primarily on speed and effectiveness. 53.8% of respondents said AI will “accelerate incident response” and 51.8% expect to “detect attacks faster.” These advantages stem from AI’s ability to detect patterns associated with attacks in masses of data and to provide insights into effective responses.

Potentially, the most immediate benefit of leveraging AI in cloud security teams will be alleviating the burden from the security skills shortage. This is highlighted by 52% of respondents expecting AI to “increase the effectiveness of [their] current cloud security team,” moving this expected benefit of AI up from fourth place last year to second place this year.

Together, these results reflect recognition that AI can speed up processes, and help people make better decisions. AI enables senior security professionals to perform more tasks in the same time period, and less experienced ones to handle complex tasks sooner.

At SentinelOne, we’re already seeing customers realize these benefits by using Purple AI, the world’s most advanced AI security analyst. Purple AI customers are seeing up to 38% increase in security team efficiency and security team members using Purple AI are able to cover 61% more endpoints. As for speed, Purple AI contributes to 63% faster identification of security threats and 55% faster remediation of security threats.

The Path Forward in the Era of AI

The story of AI in cloud security in 2025 is clearly one of dichotomy. Threat actors are rapidly innovating, leveraging AI and automation to enhance their attack capabilities, find new vulnerabilities, and streamline their campaigns. However, the cybersecurity community is also harnessing AI as a powerful ally to accelerate incident response, enhance detection accuracy, and empower security teams.

This continuous evolution demands adaptive security strategies. Siloed approaches to security are no longer sufficient and defenders must use AI to get a holistic analysis of the path from the outside world to mission targets. Cloud security platforms must integrate AI and provide teams with comprehensive and integrated defense mechanisms across cloud environments.

If this topic is of interest to you, please join us at an upcoming webinar on Thursday, July 24, 2025 to learn more about AI’s evolving role in cloud security attacks and defenses along with other insights from the Cloud Risk Report and the 2025 Cloud Security Survey. Save your spot here!

Further Reading

Disclaimer

All third-party product names, logos, and brands mentioned in this publication are the property of their respective owners and are for identification purposes only. Use of these names, logos, and brands does not imply affiliation, endorsement, sponsorship, or association with the third-party.

The Cloud Security Challenge: Risk Intelligence & Leadership Perspectives
Sign up for this webinar happening July 24, 2025