The SentinelOne Annual Threat Report - A Defenders Guide from the FrontlinesThe SentinelOne Annual Threat ReportGet the Report
Experiencing a Breach?Blog
Get StartedContact Us
SentinelOne
  • Platform
    Platform Overview
    • Singularity Platform
      Welcome to Integrated Enterprise Security
    • AI for Security
      Leading the Way in AI-Powered Security Solutions
    • Securing AI
      Accelerate AI Adoption with Secure AI Tools, Apps, and Agents.
    • How It Works
      The Singularity XDR Difference
    • Singularity Marketplace
      One-Click Integrations to Unlock the Power of XDR
    • Pricing & Packaging
      Comparisons and Guidance at a Glance
    Data & AI
    • Purple AI
      Accelerate SecOps with Generative AI
    • Singularity Hyperautomation
      Easily Automate Security Processes
    • AI-SIEM
      The AI SIEM for the Autonomous SOC
    • Data Pipelines
      Security Data Pipeline for AI SIEM and Data Optimization
    • Singularity Data Lake
      AI-Powered, Unified Data Lake
    • Singularity Data Lake for Log Analytics
      Seamlessly Ingest Data from On-Prem, Cloud or Hybrid Environments
    Endpoint Security
    • Singularity Endpoint
      Autonomous Prevention, Detection, and Response
    • Singularity XDR
      Native & Open Protection, Detection, and Response
    • Singularity RemoteOps Forensics
      Orchestrate Forensics at Scale
    • Singularity Threat Intelligence
      Comprehensive Adversary Intelligence
    • Singularity Vulnerability Management
      Application & OS Vulnerability Management
    • Singularity Identity
      Identity Threat Detection and Response
    Cloud Security
    • Singularity Cloud Security
      Block Attacks with an AI-Powered CNAPP
    • Singularity Cloud Native Security
      Secure Cloud and Development Resources
    • Singularity Cloud Workload Security
      Real-Time Cloud Workload Protection Platform
    • Singularity Cloud Data Security
      AI-Powered Threat Detection for Cloud Storage
    • Singularity Cloud Security Posture Management
      Detect and Remediate Cloud Misconfigurations
    Securing AI
    • Prompt Security
      Secure AI Tools Across Your Enterprise
  • Why SentinelOne?
    Why SentinelOne?
    • Why SentinelOne?
      Cybersecurity Built for What’s Next
    • Our Customers
      Trusted by the World’s Leading Enterprises
    • Industry Recognition
      Tested and Proven by the Experts
    • About Us
      The Industry Leader in Autonomous Cybersecurity
    Compare SentinelOne
    • Arctic Wolf
    • Broadcom
    • CrowdStrike
    • Cybereason
    • Microsoft
    • Palo Alto Networks
    • Sophos
    • Splunk
    • Trellix
    • Trend Micro
    • Wiz
    Verticals
    • Energy
    • Federal Government
    • Finance
    • Healthcare
    • Higher Education
    • K-12 Education
    • Manufacturing
    • Retail
    • State and Local Government
  • Services
    Managed Services
    • Managed Services Overview
      Wayfinder Threat Detection & Response
    • Threat Hunting
      World-Class Expertise and Threat Intelligence
    • Managed Detection & Response
      24/7/365 Expert MDR Across Your Entire Environment
    • Incident Readiness & Response
      DFIR, Breach Readiness, & Compromise Assessments
    Support, Deployment, & Health
    • Technical Account Management
      Customer Success with Personalized Service
    • SentinelOne GO
      Guided Onboarding & Deployment Advisory
    • SentinelOne University
      Live and On-Demand Training
    • Services Overview
      Comprehensive Solutions for Seamless Security Operations
    • SentinelOne Community
      Community Login
  • Partners
    Our Network
    • MSSP Partners
      Succeed Faster with SentinelOne
    • Singularity Marketplace
      Extend the Power of S1 Technology
    • Cyber Risk Partners
      Enlist Pro Response and Advisory Teams
    • Technology Alliances
      Integrated, Enterprise-Scale Solutions
    • SentinelOne for AWS
      Hosted in AWS Regions Around the World
    • Channel Partners
      Deliver the Right Solutions, Together
    • SentinelOne for Google Cloud
      Unified, Autonomous Security Giving Defenders the Advantage at Global Scale
    • Partner Locator
      Your Go-to Source for Our Top Partners in Your Region
    Partner Portal→
  • Resources
    Resource Center
    • Case Studies
    • Data Sheets
    • eBooks
    • Reports
    • Videos
    • Webinars
    • Whitepapers
    • Events
    View All Resources→
    Blog
    • Feature Spotlight
    • For CISO/CIO
    • From the Front Lines
    • Identity
    • Cloud
    • macOS
    • SentinelOne Blog
    Blog→
    Tech Resources
    • SentinelLABS
    • Ransomware Anthology
    • Cybersecurity 101
  • About
    About SentinelOne
    • About SentinelOne
      The Industry Leader in Cybersecurity
    • Investor Relations
      Financial Information & Events
    • SentinelLABS
      Threat Research for the Modern Threat Hunter
    • Careers
      The Latest Job Opportunities
    • Press & News
      Company Announcements
    • Cybersecurity Blog
      The Latest Cybersecurity Threats, News, & More
    • FAQ
      Get Answers to Our Most Frequently Asked Questions
    • DataSet
      The Live Data Platform
    • S Foundation
      Securing a Safer Future for All
    • S Ventures
      Investing in the Next Generation of Security, Data and AI
  • Pricing
Get StartedContact Us
CVE Vulnerability Database
Vulnerability Database/CVE-2024-12366

CVE-2024-12366: PandasAI Prompt Injection RCE Vulnerability

CVE-2024-12366 is a prompt injection vulnerability in PandasAI that enables remote code execution. Attackers can exploit the interactive prompt to run arbitrary Python code. This article covers technical details, impact, and mitigation.

Updated: January 21, 2026

CVE-2024-12366 Overview

CVE-2024-12366 is a critical prompt injection vulnerability in PandasAI, a popular Python library that enables natural language interactions with data through Large Language Models (LLMs). The vulnerability exists in an interactive prompt function that can be exploited to bypass intended LLM behavior and execute arbitrary Python code, resulting in Remote Code Execution (RCE) on the underlying system.

Critical Impact

Attackers can exploit this prompt injection flaw to execute arbitrary Python code on systems running PandasAI, potentially leading to complete system compromise, data exfiltration, or lateral movement within enterprise environments.

Affected Products

  • PandasAI Library (versions not specified in advisory)
  • Applications integrating PandasAI interactive prompt functionality
  • Data analysis pipelines utilizing PandasAI for natural language processing

Discovery Timeline

  • February 11, 2025 - CVE-2024-12366 published to NVD
  • February 11, 2025 - Last updated in NVD database

Technical Details for CVE-2024-12366

Vulnerability Analysis

This vulnerability represents a significant security flaw in the intersection of Large Language Models and code execution capabilities. PandasAI is designed to translate natural language queries into Python code that interacts with dataframes. However, the interactive prompt function fails to adequately sanitize or validate user inputs before they are processed by the LLM, creating a prompt injection attack surface.

Prompt injection attacks exploit the fundamental challenge of separating user data from instructions in LLM-based systems. In this case, an attacker can craft malicious input that manipulates the LLM into generating and executing arbitrary Python code rather than performing the intended data analysis operations. This bypasses the expected natural language processing workflow entirely.

The attack is particularly dangerous because it requires no authentication and can be executed remotely over the network. The complete loss of confidentiality, integrity, and availability is possible since arbitrary Python code execution provides attackers with full control over the execution environment.

Root Cause

The root cause of CVE-2024-12366 lies in insufficient input validation and lack of proper isolation between user-supplied prompts and the code execution layer. The interactive prompt function trusts LLM outputs without adequate sandboxing or security controls, allowing prompt injection payloads to escape the intended natural language processing context and achieve code execution.

LLM-based applications face an inherent challenge in distinguishing between legitimate instructions and malicious injected content within user inputs. Without explicit security boundaries, input sanitization, or code execution sandboxing, the PandasAI interactive function becomes susceptible to adversarial prompts designed to manipulate the LLM's behavior.

Attack Vector

The attack vector for CVE-2024-12366 is network-based, requiring no privileges or user interaction. An attacker can exploit this vulnerability by:

  1. Identifying an application or service utilizing PandasAI's interactive prompt functionality
  2. Crafting a malicious prompt that includes injection payloads designed to manipulate the LLM
  3. Submitting the crafted prompt through the vulnerable interactive function
  4. The LLM processes the malicious input and generates Python code containing the attacker's payload
  5. PandasAI executes the generated code without proper validation, achieving RCE

Typical prompt injection payloads may instruct the LLM to ignore previous instructions and instead generate code that imports system libraries, establishes reverse shells, reads sensitive files, or performs other malicious operations. The lack of code execution sandboxing means any Python code the LLM generates will run with the full privileges of the PandasAI process.

Detection Methods for CVE-2024-12366

Indicators of Compromise

  • Unusual Python process spawning or child process creation from PandasAI-related applications
  • Unexpected network connections originating from data analysis services or notebooks
  • System commands or shell execution patterns in application logs
  • Attempts to access sensitive files or environment variables through data analysis interfaces

Detection Strategies

  • Monitor application logs for anomalous prompt patterns containing instruction override attempts (e.g., "ignore previous instructions", "execute the following code")
  • Implement network traffic analysis to detect outbound connections from PandasAI processes to unexpected destinations
  • Deploy runtime application self-protection (RASP) to detect and block code injection attempts
  • Use behavioral analysis to identify processes spawned by PandasAI that deviate from normal data analysis operations

Monitoring Recommendations

  • Enable verbose logging for all PandasAI interactive prompt function calls
  • Implement alerting on Python subprocess creation or system command execution from data analysis contexts
  • Monitor for file system access patterns inconsistent with data analysis workflows
  • Track LLM API calls and responses for signs of prompt injection manipulation

How to Mitigate CVE-2024-12366

Immediate Actions Required

  • Audit all deployments utilizing PandasAI's interactive prompt functionality and assess exposure
  • Implement strict input validation and sanitization for all user-supplied prompts before processing
  • Consider disabling interactive prompt features until patches are available or adequate security controls are in place
  • Isolate PandasAI execution environments using containers or sandboxing technologies to limit blast radius

Patch Information

Organizations should consult the official PandasAI documentation and security advisories for patch availability. The Pandas AI Advanced Security Agent documentation provides guidance on implementing additional security controls. Additionally, the CERT Vulnerability Report #148244 offers further technical details and remediation guidance.

Workarounds

  • Deploy PandasAI in sandboxed environments with minimal privileges and restricted network access
  • Implement prompt filtering using allowlists to reject inputs containing known injection patterns
  • Use the PandasAI security agent features to add additional validation layers
  • Restrict code execution capabilities by disabling dangerous Python modules in the execution environment
bash
# Example: Run PandasAI in a restricted container environment
docker run --read-only \
  --network=none \
  --cap-drop=ALL \
  --security-opt=no-new-privileges \
  -v /path/to/data:/data:ro \
  pandas-ai-app

Disclaimer: This content was generated using AI. While we strive for accuracy, please verify critical information with official sources.

  • Vulnerability Details
  • TypeRCE

  • Vendor/TechPandasai

  • SeverityCRITICAL

  • CVSS Score9.8

  • EPSS Probability0.99%

  • Known ExploitedNo
  • CVSS Vector
  • CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
  • Impact Assessment
  • ConfidentialityLow
  • IntegrityNone
  • AvailabilityHigh
  • Technical References
  • Panda AI Privacy Security Guide

  • Pandas AI Advanced Security Agent

  • CERT Vulnerability Report #148244
  • Related CVEs
  • CVE-2026-4998: Sinaptik AI PandasAI RCE Vulnerability

  • CVE-2026-4997: PandasAI Path Traversal Vulnerability

  • CVE-2026-4996: Sinaptik AI PandasAI SQL Injection Flaw
Default Legacy - Prefooter | Experience the World’s Most Advanced Cybersecurity Platform

Experience the World’s Most Advanced Cybersecurity Platform

See how our intelligent, autonomous cybersecurity platform can protect your organization now and into the future.

Try SentinelOne
  • Get Started
  • Get a Demo
  • Product Tour
  • Why SentinelOne
  • Pricing & Packaging
  • FAQ
  • Contact
  • Contact Us
  • Customer Support
  • SentinelOne Status
  • Language
  • Platform
  • Singularity Platform
  • Singularity Endpoint
  • Singularity Cloud
  • Singularity AI-SIEM
  • Singularity Identity
  • Singularity Marketplace
  • Purple AI
  • Services
  • Wayfinder TDR
  • SentinelOne GO
  • Technical Account Management
  • Support Services
  • Verticals
  • Energy
  • Federal Government
  • Finance
  • Healthcare
  • Higher Education
  • K-12 Education
  • Manufacturing
  • Retail
  • State and Local Government
  • Cybersecurity for SMB
  • Resources
  • Blog
  • Labs
  • Case Studies
  • Videos
  • Product Tours
  • Events
  • Cybersecurity 101
  • eBooks
  • Webinars
  • Whitepapers
  • Press
  • News
  • Ransomware Anthology
  • Company
  • About Us
  • Our Customers
  • Careers
  • Partners
  • Legal & Compliance
  • Security & Compliance
  • Investor Relations
  • S Foundation
  • S Ventures

©2026 SentinelOne, All Rights Reserved.

Privacy Notice Terms of Use

English