CVE-2026-33873 Overview
CVE-2026-33873 is a critical code injection vulnerability in Langflow, a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assistant feature in Langflow executes LLM-generated Python code during its validation phase. Although this phase appears intended to validate generated component code, the implementation reaches dynamic execution sinks and instantiates the generated class server-side. In deployments where an attacker can access the Agentic Assistant feature and influence the model output, this can result in arbitrary server-side Python execution.
Critical Impact
Attackers with access to the Agentic Assistant feature can achieve arbitrary Python code execution on the server by manipulating LLM-generated code during the validation phase, potentially leading to complete system compromise.
Affected Products
- Langflow versions prior to 1.9.0
- Langflow Agentic Assistant feature
- Langflow deployments with accessible AI assistant endpoints
Discovery Timeline
- 2026-03-27 - CVE-2026-33873 published to NVD
- 2026-03-30 - Last updated in NVD database
Technical Details for CVE-2026-33873
Vulnerability Analysis
This vulnerability represents a critical code injection flaw (CWE-94) in Langflow's Agentic Assistant feature. The core issue lies in how the application handles LLM-generated Python code during what should be a validation-only phase. Instead of safely analyzing the generated code, the validation process dynamically executes the code and instantiates generated classes on the server.
The attack surface is particularly dangerous because it involves AI-generated content that can be influenced by an attacker. By crafting specific prompts or manipulating the context provided to the LLM, an attacker can cause the model to generate malicious Python code that will be executed during the validation phase. This creates a novel attack vector where traditional input validation may be insufficient, as the malicious payload is generated by the AI model itself.
The vulnerability is network-accessible and requires only low privileges to exploit, making it particularly concerning for internet-facing Langflow deployments. Successful exploitation can fully compromise the confidentiality and integrity of the vulnerable system and, potentially, of connected systems.
Root Cause
The root cause of this vulnerability stems from unsafe dynamic code execution within the validation pipeline. The validation code and code extraction helpers process LLM-generated code and pass it to execution sinks. The assistant service orchestrates this flow, ultimately instantiating the generated class rather than performing static analysis or sandboxed validation.
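The flow described above can be reduced to a minimal, hypothetical sketch. This is not Langflow's actual code and every name in it is illustrative, but it shows why an exec-based "validation" step is itself the execution sink:

```python
# Hypothetical sketch of the vulnerable pattern -- NOT Langflow's real implementation.
def validate_component(generated_source: str):
    namespace: dict = {}
    # Execution sink #1: any top-level statement in the LLM output runs here.
    exec(generated_source, namespace)
    # Locate the generated class among the executed definitions.
    cls = next(v for v in namespace.values() if isinstance(v, type))
    # Execution sink #2: instantiation runs the class's __init__ server-side.
    return cls()
```

Any top-level statement in `generated_source` (an `os.system` call, a file write, a reverse shell) runs before "validation" ever reaches the class definition, which is exactly the behavior the advisory describes.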
The fundamental security flaw is treating validation as requiring execution. Proper code validation should use static analysis techniques, Abstract Syntax Tree (AST) parsing, or sandboxed environments rather than direct execution in the application context.
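By contrast, a validation pass built on AST inspection never executes the candidate code. The sketch below uses Python's standard-library ast module; the disallow-lists are illustrative placeholders, not Langflow's actual policy, and a production validator would need a far more complete model of dangerous constructs:

```python
import ast

# Illustrative disallow-lists -- a real policy would be much broader.
DISALLOWED_CALLS = {"exec", "eval", "compile", "__import__"}
DISALLOWED_MODULES = {"os", "subprocess", "socket", "ctypes"}

def validate_component_source(source: str) -> list[str]:
    """Statically inspect generated code without executing it; return findings."""
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        return [f"syntax error: {exc.msg}"]
    problems = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in DISALLOWED_CALLS:
                problems.append(f"disallowed call: {node.func.id}")
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            roots = [alias.name.split(".")[0] for alias in node.names]
            if isinstance(node, ast.ImportFrom) and node.module:
                roots.append(node.module.split(".")[0])
            for root in roots:
                if root in DISALLOWED_MODULES:
                    problems.append(f"disallowed import: {root}")
    return problems
```

Note that static deny-listing is bypassable (e.g. via getattr tricks), so an allow-list or a sandboxed subprocess is stronger in practice; the point here is only that validation need not run the code at all.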
Attack Vector
The attack is executed over the network by an authenticated user with access to the Agentic Assistant feature. The attacker exploits the vulnerability through the following mechanism:
- The attacker accesses the Agentic Assistant API endpoint exposed via the router
- Through prompt manipulation or context injection, the attacker influences the LLM to generate malicious Python code
- When the system attempts to validate the generated component code, it executes the malicious payload
- The attacker achieves arbitrary Python code execution on the server with the privileges of the Langflow application
The attack requires no user interaction beyond the attacker's own authentication, and the low attack complexity makes this vulnerability particularly dangerous. For detailed technical information, refer to the GitHub Security Advisory GHSA-v8hw-mh8c-jxfc.
Detection Methods for CVE-2026-33873
Indicators of Compromise
- Unusual Python process spawning from the Langflow application
- Unexpected network connections originating from the Langflow server
- Suspicious file system modifications in the application directory or system paths
- Anomalous API requests to the Agentic Assistant endpoints with unusual payloads or encoding
Detection Strategies
- Monitor Langflow application logs for validation errors or exceptions related to code execution
- Implement network traffic analysis to detect unusual outbound connections from Langflow servers
- Deploy endpoint detection and response (EDR) solutions to identify malicious process behavior
- Review audit logs for unusual patterns in Agentic Assistant feature usage
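The log-review strategy above can be prototyped as a simple scanner. The regexes below are illustrative guesses, not a canonical Langflow log signature, and should be tuned to the actual log format of your deployment:

```python
import re

# Illustrative patterns; tune these to your deployment's log format.
SUSPICIOUS_PATTERNS = [
    re.compile(r"exec\("),
    re.compile(r"__import__"),
    re.compile(r"subprocess\.(Popen|run|call)"),
    re.compile(r"os\.system"),
]

def scan_log_lines(lines):
    """Return (line_number, pattern, line) for each suspicious log line."""
    hits = []
    for lineno, line in enumerate(lines, start=1):
        for pattern in SUSPICIOUS_PATTERNS:
            if pattern.search(line):
                hits.append((lineno, pattern.pattern, line.strip()))
                break  # one finding per line is enough
    return hits
```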
Monitoring Recommendations
- Enable verbose logging for the Agentic Assistant feature and monitor for code execution patterns
- Configure alerts for any Python subprocess execution initiated by the Langflow application
- Monitor system resource usage for unexpected CPU or memory spikes during validation operations
- Track authentication events and correlate with Agentic Assistant API access patterns
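The last recommendation, correlating authentication events with Agentic Assistant API access, can be sketched with plain standard-library code. The (timestamp, user, detail) record shape and the /api/v1/agentic path prefix are assumptions made for illustration:

```python
from datetime import datetime, timedelta

# Assumed record shape: (timestamp, user, detail); the endpoint path is illustrative.
ASSISTANT_PREFIX = "/api/v1/agentic"

def correlate_assistant_access(auth_events, api_events, window=timedelta(minutes=5)):
    """Flag Agentic Assistant calls made within `window` of a user's authentication."""
    flagged = []
    for ts_api, user, path in api_events:
        if not path.startswith(ASSISTANT_PREFIX):
            continue
        for ts_auth, auth_user, _ in auth_events:
            if auth_user == user and timedelta(0) <= ts_api - ts_auth <= window:
                flagged.append((user, ts_api, path))
                break
    return flagged
```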
How to Mitigate CVE-2026-33873
Immediate Actions Required
- Upgrade Langflow to version 1.9.0 or later immediately
- If immediate upgrade is not possible, disable or restrict access to the Agentic Assistant feature
- Implement network segmentation to limit the blast radius of potential exploitation
- Review access controls and restrict Agentic Assistant feature to trusted users only
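Whether a deployment falls in the affected range can be checked mechanically. The sketch below uses a deliberately simplistic version parser that ignores pre-release suffixes such as rc1, so treat its answer as a first pass rather than an authoritative check:

```python
from importlib import metadata

FIXED_VERSION = (1, 9, 0)

def parse_version(version: str) -> tuple:
    """Naive X.Y.Z parser; ignores pre-release suffixes like 'rc1'."""
    parts = []
    for piece in version.split(".")[:3]:
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits or 0))
    while len(parts) < 3:
        parts.append(0)
    return tuple(parts)

def is_vulnerable(installed_version: str) -> bool:
    """True if the installed version predates the 1.9.0 fix."""
    return parse_version(installed_version) < FIXED_VERSION

# Usage against a live install (requires langflow to be installed):
# print(is_vulnerable(metadata.version("langflow")))
```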
Patch Information
Version 1.9.0 of Langflow addresses this vulnerability. Organizations should prioritize upgrading to this version or later. The patch modifies the validation logic to prevent dynamic execution of LLM-generated code. Review the GitHub Security Advisory for complete patch details and upgrade instructions.
Workarounds
- Disable the Agentic Assistant feature entirely if not required for operations
- Implement strict network access controls to limit who can reach the Agentic Assistant API
- Deploy a Web Application Firewall (WAF) with rules to inspect and filter suspicious requests to Langflow endpoints
- Run Langflow in a containerized or sandboxed environment with minimal privileges and restricted system access
# Example: Restrict access to the Agentic Assistant endpoint via nginx
location /api/v1/agentic {
    # Allow only internal/trusted networks
    allow 10.0.0.0/8;
    allow 192.168.0.0/16;
    deny all;
    proxy_pass http://langflow_backend;
}