Join the Cyber Forum: Threat Intel on May 12, 2026 to learn how AI is reshaping threat defense.Join the Virtual Cyber Forum: Threat IntelRegister Now
Experiencing a Breach?Blog
Get StartedContact Us
SentinelOne
  • Platform
    Platform Overview
    • Singularity Platform
      Welcome to Integrated Enterprise Security
    • AI for Security
      Leading the Way in AI-Powered Security Solutions
    • Securing AI
      Accelerate AI Adoption with Secure AI Tools, Apps, and Agents.
    • How It Works
      The Singularity XDR Difference
    • Singularity Marketplace
      One-Click Integrations to Unlock the Power of XDR
    • Pricing & Packaging
      Comparisons and Guidance at a Glance
    Data & AI
    • Purple AI
      Accelerate SecOps with Generative AI
    • Singularity Hyperautomation
      Easily Automate Security Processes
    • AI-SIEM
      The AI SIEM for the Autonomous SOC
    • AI Data Pipelines
      Security Data Pipeline for AI SIEM and Data Optimization
    • Singularity Data Lake
      AI-Powered, Unified Data Lake
    • Singularity Data Lake for Log Analytics
      Seamlessly Ingest Data from On-Prem, Cloud or Hybrid Environments
    Endpoint Security
    • Singularity Endpoint
      Autonomous Prevention, Detection, and Response
    • Singularity XDR
      Native & Open Protection, Detection, and Response
    • Singularity RemoteOps Forensics
      Orchestrate Forensics at Scale
    • Singularity Threat Intelligence
      Comprehensive Adversary Intelligence
    • Singularity Vulnerability Management
      Application & OS Vulnerability Management
    • Singularity Identity
      Identity Threat Detection and Response
    Cloud Security
    • Singularity Cloud Security
      Block Attacks with an AI-Powered CNAPP
    • Singularity Cloud Native Security
      Secure Cloud and Development Resources
    • Singularity Cloud Workload Security
      Real-Time Cloud Workload Protection Platform
    • Singularity Cloud Data Security
      AI-Powered Threat Detection for Cloud Storage
    • Singularity Cloud Security Posture Management
      Detect and Remediate Cloud Misconfigurations
    Securing AI
    • Prompt Security
      Secure AI Tools Across Your Enterprise
  • Why SentinelOne?
    Why SentinelOne?
    • Why SentinelOne?
      Cybersecurity Built for What’s Next
    • Our Customers
      Trusted by the World’s Leading Enterprises
    • Industry Recognition
      Tested and Proven by the Experts
    • About Us
      The Industry Leader in Autonomous Cybersecurity
    Compare SentinelOne
    • Arctic Wolf
    • Broadcom
    • CrowdStrike
    • Cybereason
    • Microsoft
    • Palo Alto Networks
    • Sophos
    • Splunk
    • Trellix
    • Trend Micro
    • Wiz
    Verticals
    • Energy
    • Federal Government
    • Finance
    • Healthcare
    • Higher Education
    • K-12 Education
    • Manufacturing
    • Retail
    • State and Local Government
  • Services
    Managed Services
    • Managed Services Overview
      Wayfinder Threat Detection & Response
    • Threat Hunting
      World-Class Expertise and Threat Intelligence
    • Managed Detection & Response
      24/7/365 Expert MDR Across Your Entire Environment
    • Incident Readiness & Response
      DFIR, Breach Readiness, & Compromise Assessments
    Support, Deployment, & Health
    • Technical Account Management
      Customer Success with Personalized Service
    • SentinelOne GO
      Guided Onboarding & Deployment Advisory
    • SentinelOne University
      Live and On-Demand Training
    • Services Overview
      Comprehensive Solutions for Seamless Security Operations
    • SentinelOne Community
      Community Login
  • Partners
    Our Network
    • MSSP Partners
      Succeed Faster with SentinelOne
    • Singularity Marketplace
      Extend the Power of S1 Technology
    • Cyber Risk Partners
      Enlist Pro Response and Advisory Teams
    • Technology Alliances
      Integrated, Enterprise-Scale Solutions
    • SentinelOne for AWS
      Hosted in AWS Regions Around the World
    • Channel Partners
      Deliver the Right Solutions, Together
    • SentinelOne for Google Cloud
      Unified, Autonomous Security Giving Defenders the Advantage at Global Scale
    • Partner Locator
      Your Go-to Source for Our Top Partners in Your Region
    Partner Portal→
  • Resources
    Resource Center
    • Case Studies
    • Data Sheets
    • eBooks
    • Reports
    • Videos
    • Webinars
    • Whitepapers
    • Events
    View All Resources→
    Blog
    • Feature Spotlight
    • For CISO/CIO
    • From the Front Lines
    • Identity
    • Cloud
    • macOS
    • SentinelOne Blog
    Blog→
    Tech Resources
    • SentinelLABS
    • Ransomware Anthology
    • Cybersecurity 101
  • About
    About SentinelOne
    • About SentinelOne
      The Industry Leader in Cybersecurity
    • Investor Relations
      Financial Information & Events
    • SentinelLABS
      Threat Research for the Modern Threat Hunter
    • Careers
      The Latest Job Opportunities
    • Press & News
      Company Announcements
    • Cybersecurity Blog
      The Latest Cybersecurity Threats, News, & More
    • FAQ
      Get Answers to Our Most Frequently Asked Questions
    • DataSet
      The Live Data Platform
    • S Foundation
      Securing a Safer Future for All
    • S Ventures
      Investing in the Next Generation of Security, Data and AI
  • Pricing
Get StartedContact Us
CVE Vulnerability Database
Vulnerability Database/CVE-2023-29374

CVE-2023-29374: Langchain LLMMathChain RCE Vulnerability

CVE-2023-29374 is a remote code execution vulnerability in Langchain's LLMMathChain that enables prompt injection attacks to execute arbitrary code. This article covers the technical details, affected versions, and mitigation.

Published: February 4, 2026

CVE-2023-29374 Overview

CVE-2023-29374 is a critical prompt injection vulnerability in LangChain through version 0.0.131 that allows attackers to execute arbitrary code on affected systems. The vulnerability exists in the LLMMathChain chain component, which improperly handles user-supplied input, enabling malicious actors to inject crafted prompts that are subsequently executed via Python's exec method.

This vulnerability represents a significant security risk in AI/ML application development, as LangChain is widely used for building applications powered by large language models (LLMs). The ability to achieve remote code execution through prompt injection makes this a particularly dangerous attack vector in production environments.

Critical Impact

Attackers can achieve remote code execution by crafting malicious prompts that exploit the LLMMathChain component, potentially leading to complete system compromise, data exfiltration, and lateral movement within affected infrastructure.

Affected Products

  • LangChain versions through 0.0.131
  • Applications utilizing the LLMMathChain chain component
  • Systems exposing LangChain-based interfaces to untrusted user input

Discovery Timeline

  • April 5, 2023 - CVE-2023-29374 published to NVD
  • February 12, 2025 - Last updated in NVD database

Technical Details for CVE-2023-29374

Vulnerability Analysis

The vulnerability stems from improper input validation within the LLMMathChain component of LangChain. This chain is designed to process mathematical queries by leveraging LLMs to generate Python code that is then executed to produce results. However, the implementation fails to adequately sanitize user input before passing it to Python's exec function.

When an attacker crafts a specially designed prompt, the LLM can be manipulated into generating malicious Python code instead of legitimate mathematical operations. Since the output is executed without sufficient sandboxing or validation, arbitrary commands can be run with the privileges of the LangChain application process.

The vulnerability is classified under CWE-74 (Improper Neutralization of Special Elements in Output Used by a Downstream Component), commonly known as injection vulnerabilities. This classification reflects the core issue: untrusted input influencing the generation and execution of code.

Root Cause

The root cause of CVE-2023-29374 lies in the design of the LLMMathChain component, which relies on LLM-generated Python code being executed via the exec method without adequate security controls. The chain trusts that the LLM will only produce benign mathematical expressions, but this assumption fails when confronted with adversarial prompts.

The fundamental issue is the combination of:

  1. Accepting untrusted user input as part of the prompt
  2. Using the LLM output directly in code execution contexts
  3. Lack of sandboxing or output validation mechanisms

Attack Vector

The attack vector is network-based and requires no authentication or user interaction. An attacker can exploit this vulnerability by:

  1. Identifying an application that exposes LLMMathChain functionality to user input
  2. Crafting a malicious prompt that instructs the LLM to generate harmful Python code
  3. Submitting the prompt through the application's interface
  4. The generated malicious code is executed via Python's exec method
  5. The attacker achieves arbitrary code execution on the target system

The attack leverages prompt injection techniques to manipulate the LLM into producing code that performs actions beyond mathematical calculations. This can include system commands, file operations, network connections, or any other operation permitted by Python's exec function.

The vulnerability mechanism exploits the inherent trust placed in LLM-generated output. When a user submits a query to LLMMathChain, the system constructs a prompt that asks the LLM to solve a mathematical problem. By embedding carefully crafted instructions within the user's query, an attacker can override the intended behavior and cause the LLM to output malicious Python code. This code is then executed without validation, leading to remote code execution. For detailed technical analysis, refer to the GitHub Issue #1026 and the related Twitter post by @rharang.

Detection Methods for CVE-2023-29374

Indicators of Compromise

  • Unusual Python process spawning from LangChain application processes
  • Unexpected network connections originating from LLM-powered applications
  • Log entries showing malformed or suspicious mathematical queries containing Python code constructs
  • File system modifications in directories accessible by the LangChain application
  • Process execution patterns inconsistent with normal mathematical operations

Detection Strategies

  • Implement input validation logging to capture and analyze all queries submitted to LLMMathChain components
  • Deploy application-level monitoring to detect Python exec calls with unexpected payloads
  • Use behavioral analysis to identify anomalous code execution patterns within LangChain applications
  • Monitor for known prompt injection patterns and obfuscation techniques in user inputs

Monitoring Recommendations

  • Enable verbose logging for all LangChain chain executions, particularly LLMMathChain
  • Implement real-time alerting for process creation events from LangChain application contexts
  • Deploy network monitoring to detect unexpected outbound connections from AI/ML application servers
  • Review and audit all user-submitted queries for potential injection attempts

How to Mitigate CVE-2023-29374

Immediate Actions Required

  • Upgrade LangChain to a version newer than 0.0.131 that includes the security fix
  • Review all applications using LLMMathChain and assess exposure to untrusted input
  • Implement input sanitization and validation before passing user data to LangChain chains
  • Consider disabling or removing LLMMathChain functionality if not critical to operations
  • Apply network segmentation to isolate LangChain-based applications from sensitive systems

Patch Information

LangChain developers have addressed this vulnerability through GitHub Pull Request #1119. Users should upgrade to a patched version of LangChain to remediate this vulnerability. The security issue was initially reported and discussed in GitHub Issue #814.

Workarounds

  • Avoid exposing LLMMathChain functionality to untrusted user input until patched
  • Implement a whitelist of allowed mathematical operations and validate LLM output against it
  • Deploy sandboxing solutions such as containerization or restricted Python execution environments
  • Use alternative mathematical processing methods that do not rely on dynamic code execution
bash
# Upgrade LangChain to the latest version
pip install --upgrade langchain

# Verify installed version is newer than 0.0.131
pip show langchain | grep Version

Disclaimer: This content was generated using AI. While we strive for accuracy, please verify critical information with official sources.

  • Vulnerability Details
  • TypeRCE

  • Vendor/TechLangchain

  • SeverityCRITICAL

  • CVSS Score9.8

  • EPSS Probability3.17%

  • Known ExploitedNo
  • CVSS Vector
  • CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
  • Impact Assessment
  • ConfidentialityLow
  • IntegrityNone
  • AvailabilityHigh
  • CWE References
  • CWE-74
  • Technical References
  • GitHub Issue #1026

  • Twitter Image Post by @rharang
  • Vendor Resources
  • GitHub Issue #814

  • GitHub Pull Request #1119
  • Related CVEs
  • CVE-2026-30617: LangChain-ChatChat 0.3.1 RCE Vulnerability

  • CVE-2026-40087: LangChain RCE Vulnerability

  • CVE-2025-46059: LangChain GmailToolkit RCE Vulnerability

  • CVE-2024-46946: Langchain-experimental RCE Vulnerability
Default Legacy - Prefooter | Experience the World’s Most Advanced Cybersecurity Platform

Experience the World’s Most Advanced Cybersecurity Platform

See how our intelligent, autonomous cybersecurity platform can protect your organization now and into the future.

Try SentinelOne
  • Get Started
  • Get a Demo
  • Product Tour
  • Why SentinelOne
  • Pricing & Packaging
  • FAQ
  • Contact
  • Contact Us
  • Customer Support
  • SentinelOne Status
  • Language
  • Platform
  • Singularity Platform
  • Singularity Endpoint
  • Singularity Cloud
  • Singularity AI-SIEM
  • Singularity Identity
  • Singularity Marketplace
  • Purple AI
  • Services
  • Wayfinder TDR
  • SentinelOne GO
  • Technical Account Management
  • Support Services
  • Verticals
  • Energy
  • Federal Government
  • Finance
  • Healthcare
  • Higher Education
  • K-12 Education
  • Manufacturing
  • Retail
  • State and Local Government
  • Cybersecurity for SMB
  • Resources
  • Blog
  • Labs
  • Case Studies
  • Videos
  • Product Tours
  • Events
  • Cybersecurity 101
  • eBooks
  • Webinars
  • Whitepapers
  • Press
  • News
  • Ransomware Anthology
  • Company
  • About Us
  • Our Customers
  • Careers
  • Partners
  • Legal & Compliance
  • Security & Compliance
  • Investor Relations
  • S Foundation
  • S Ventures

©2026 SentinelOne, All Rights Reserved.

Privacy Notice Terms of Use

English