CVE-2024-12366 Overview
CVE-2024-12366 is a critical prompt injection vulnerability in PandasAI, a popular Python library that enables natural language interactions with data through Large Language Models (LLMs). The vulnerability exists in an interactive prompt function that can be exploited to bypass intended LLM behavior and execute arbitrary Python code, resulting in Remote Code Execution (RCE) on the underlying system.
Critical Impact
Attackers can exploit this prompt injection flaw to execute arbitrary Python code on systems running PandasAI, potentially leading to complete system compromise, data exfiltration, or lateral movement within enterprise environments.
Affected Products
- PandasAI Library (versions not specified in advisory)
- Applications integrating PandasAI interactive prompt functionality
- Data analysis pipelines utilizing PandasAI for natural language processing
Discovery Timeline
- February 11, 2025 - CVE-2024-12366 published to NVD
- February 11, 2025 - Last updated in NVD database
Technical Details for CVE-2024-12366
Vulnerability Analysis
This vulnerability represents a significant security flaw in the intersection of Large Language Models and code execution capabilities. PandasAI is designed to translate natural language queries into Python code that interacts with dataframes. However, the interactive prompt function fails to adequately sanitize or validate user inputs before they are processed by the LLM, creating a prompt injection attack surface.
Prompt injection attacks exploit the fundamental challenge of separating user data from instructions in LLM-based systems. In this case, an attacker can craft malicious input that manipulates the LLM into generating and executing arbitrary Python code rather than performing the intended data analysis operations. This bypasses the expected natural language processing workflow entirely.
The attack is particularly dangerous because it requires no authentication and can be executed remotely over the network. The complete loss of confidentiality, integrity, and availability is possible since arbitrary Python code execution provides attackers with full control over the execution environment.
Root Cause
The root cause of CVE-2024-12366 lies in insufficient input validation and lack of proper isolation between user-supplied prompts and the code execution layer. The interactive prompt function trusts LLM outputs without adequate sandboxing or security controls, allowing prompt injection payloads to escape the intended natural language processing context and achieve code execution.
LLM-based applications face an inherent challenge in distinguishing between legitimate instructions and malicious injected content within user inputs. Without explicit security boundaries, input sanitization, or code execution sandboxing, the PandasAI interactive function becomes susceptible to adversarial prompts designed to manipulate the LLM's behavior.
Attack Vector
The attack vector for CVE-2024-12366 is network-based, requiring no privileges or user interaction. An attacker can exploit this vulnerability by:
- Identifying an application or service utilizing PandasAI's interactive prompt functionality
- Crafting a malicious prompt that includes injection payloads designed to manipulate the LLM
- Submitting the crafted prompt through the vulnerable interactive function
- The LLM processes the malicious input and generates Python code containing the attacker's payload
- PandasAI executes the generated code without proper validation, achieving RCE
Typical prompt injection payloads may instruct the LLM to ignore previous instructions and instead generate code that imports system libraries, establishes reverse shells, reads sensitive files, or performs other malicious operations. The lack of code execution sandboxing means any Python code the LLM generates will run with the full privileges of the PandasAI process.
Detection Methods for CVE-2024-12366
Indicators of Compromise
- Unusual Python process spawning or child process creation from PandasAI-related applications
- Unexpected network connections originating from data analysis services or notebooks
- System commands or shell execution patterns in application logs
- Attempts to access sensitive files or environment variables through data analysis interfaces
Detection Strategies
- Monitor application logs for anomalous prompt patterns containing instruction override attempts (e.g., "ignore previous instructions", "execute the following code")
- Implement network traffic analysis to detect outbound connections from PandasAI processes to unexpected destinations
- Deploy runtime application self-protection (RASP) to detect and block code injection attempts
- Use behavioral analysis to identify processes spawned by PandasAI that deviate from normal data analysis operations
Monitoring Recommendations
- Enable verbose logging for all PandasAI interactive prompt function calls
- Implement alerting on Python subprocess creation or system command execution from data analysis contexts
- Monitor for file system access patterns inconsistent with data analysis workflows
- Track LLM API calls and responses for signs of prompt injection manipulation
How to Mitigate CVE-2024-12366
Immediate Actions Required
- Audit all deployments utilizing PandasAI's interactive prompt functionality and assess exposure
- Implement strict input validation and sanitization for all user-supplied prompts before processing
- Consider disabling interactive prompt features until patches are available or adequate security controls are in place
- Isolate PandasAI execution environments using containers or sandboxing technologies to limit blast radius
Patch Information
Organizations should consult the official PandasAI documentation and security advisories for patch availability. The Pandas AI Advanced Security Agent documentation provides guidance on implementing additional security controls. Additionally, the CERT Vulnerability Report #148244 offers further technical details and remediation guidance.
Workarounds
- Deploy PandasAI in sandboxed environments with minimal privileges and restricted network access
- Implement prompt filtering using allowlists to reject inputs containing known injection patterns
- Use the PandasAI security agent features to add additional validation layers
- Restrict code execution capabilities by disabling dangerous Python modules in the execution environment
# Example: Run PandasAI in a restricted container environment
docker run --read-only \
--network=none \
--cap-drop=ALL \
--security-opt=no-new-privileges \
-v /path/to/data:/data:ro \
pandas-ai-app
Disclaimer: This content was generated using AI. While we strive for accuracy, please verify critical information with official sources.


