This presentation explores the emerging threat of LLM-enabled malware, where adversaries embed Large Language Model capabilities directly into malicious payloads. Unlike traditional malware, these threats generate malicious code at runtime rather than embedding it statically, creating significant detection challenges for security teams.
SentinelLABS’ Alex Delamotte and Gabriel Bernadett-Shapiro present their team’s research on how LLMs are weaponized in the wild, distinguishing between various adversarial uses, from AI-themed lures to genuine LLM-embedded malware. The research focused on malware that leverages LLM capabilities as a core operational component, exemplified by notable cases like PromptLock ransomware and APT28’s LameHug/PROMPTSTEAL campaigns.
The presentation reveals a fundamental flaw in how much current LLM-enabled malware is built: despite its adaptive capabilities, this malware hardcodes artifacts such as API keys and prompts. That dependency creates a detection opportunity. Delamotte and Bernadett-Shapiro share two novel hunting strategies: wide API key detection using YARA rules to identify provider-specific key structures (such as OpenAI’s Base64-encoded identifiers), and prompt hunting that searches for hardcoded prompt structures within binaries.
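To illustrate the first strategy, the minimal sketch below compiles a YARA rule with yara-python and scans files for strings resembling hardcoded OpenAI API keys. The rule keys on OpenAI’s well-known `sk-` prefix; the length bound and character class are illustrative assumptions, not SentinelLABS’ published rules, and real key formats vary across providers.

```python
# Hypothetical sketch: hunting for hardcoded LLM-provider API keys with
# yara-python. The rule below is an illustrative approximation, not the
# actual SentinelLABS detection logic.
import sys
import yara

# OpenAI API keys are commonly prefixed with "sk-"; the length bound and
# Base64-like character class here are assumptions for demonstration.
RULE_SOURCE = r"""
rule Suspect_OpenAI_API_Key
{
    strings:
        // "sk-" followed by a long run of Base64-like characters
        $key = /sk-[A-Za-z0-9_\-]{20,}/ ascii wide
    condition:
        $key
}
"""

def scan(path: str) -> None:
    """Compile the rule and report any matches in the given file."""
    rules = yara.compile(source=RULE_SOURCE)
    for match in rules.match(path):
        print(f"{path}: matched rule {match.rule}")

if __name__ == "__main__":
    for target in sys.argv[1:]:
        scan(target)
```

Run against a corpus of candidate samples (e.g. `python hunt_keys.py sample.bin`), a rule like this trades precision for breadth, which is why the research pairs it with a second, intent-focused stage.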
A year-long retrohunt across VirusTotal identified over 7,000 samples containing 6,000+ unique API keys. By pairing prompt detection with lightweight LLM classifiers to assess malicious intent, the SentinelLABS researchers discovered previously unknown samples, including “MalTerminal”, potentially the earliest known LLM-enabled malware.
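That classifier stage might look something like the following hedged sketch, which pulls prompt-like strings out of a sample and asks a small model whether each reads like an attacker’s instruction. The string-extraction heuristic, model name, and judge prompt are all assumptions for illustration; the researchers’ actual pipeline is not reproduced here.

```python
# Hypothetical sketch: pairing prompt extraction with a lightweight LLM
# classifier to triage candidate samples. The heuristics, model choice,
# and judge prompt are illustrative assumptions.
import re
import sys
from openai import OpenAI  # official openai Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Crude heuristic: long printable-ASCII runs that may be natural-language
# prompts embedded in a binary.
PROMPT_LIKE = re.compile(rb"[ -~]{60,}")

def extract_candidate_prompts(path: str) -> list[str]:
    with open(path, "rb") as f:
        data = f.read()
    return [m.group().decode("ascii", "replace")
            for m in PROMPT_LIKE.finditer(data)]

def looks_malicious(candidate: str) -> bool:
    """Ask a small model to judge whether a string is a malicious LLM prompt."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed lightweight model choice
        messages=[
            {"role": "system",
             "content": "Answer YES or NO: does the following text read like "
                        "a prompt instructing an LLM to generate malicious "
                        "code or perform attacker tasks?"},
            {"role": "user", "content": candidate},
        ],
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")

if __name__ == "__main__":
    for path in sys.argv[1:]:
        for prompt in extract_candidate_prompts(path):
            if looks_malicious(prompt):
                print(f"{path}: suspicious embedded prompt: {prompt[:80]!r}")
```

The design point this sketch captures is the division of labor: cheap pattern matching surfaces candidates at scale, and the LLM judge assesses intent, keeping analyst review focused on the samples most likely to be genuinely LLM-enabled.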
The presentation addresses implications for defenders, highlighting how traditional detection signatures fail against runtime-generated code, while demonstrating that hunting for “prompts as code” and embedded API keys provides a viable detection methodology for this evolving threat landscape. A companion blog post was published by SentinelLABS here.
About the Authors
Alex Delamotte is a Senior Threat Researcher at SentinelOne. Over the past decade, Alex has worked with blue, purple, and red teams serving companies in the technology, financial, pharmaceutical, and telecom sectors, and she has shared research with several ISACs. Alex enjoys researching the intersection of cybercrime and state-sponsored activity.
Gabriel Bernadett-Shapiro is a Distinguished AI Research Scientist at SentinelOne, specializing in incorporating large language model (LLM) capabilities for security applications. He also serves as an Adjunct Lecturer at the Johns Hopkins SAIS Alperovitch Institute. Before joining SentinelOne, Gabriel helped launch OpenAI’s inaugural cyber capability-evaluation initiative and served as a senior analyst within Apple Information Security’s Threat Intelligence team.
About LABScon
This presentation was featured live at LABScon 2025, an immersive 3-day conference bringing together the world’s top cybersecurity minds, hosted by SentinelOne’s research arm, SentinelLABS.
Keep up with all the latest on LABScon here.