CVE-2024-6091 Overview
A command injection vulnerability exists in significant-gravitas/autogpt version 0.5.1 that allows an attacker to bypass the shell command denylist settings. The issue arises when the denylist is configured to block specific commands, such as whoami and /bin/whoami. An attacker can circumvent this restriction by executing a command through a modified path, such as /bin/./whoami, which the denylist does not recognize. This path manipulation technique enables attackers to execute arbitrary system commands that security controls should otherwise block.
Critical Impact
This vulnerability allows remote attackers to bypass command execution restrictions and execute arbitrary shell commands on the host system, potentially leading to full system compromise.
Affected Products
- AutoGPT Classic version 0.5.1
- agpt autogpt_classic
Discovery Timeline
- 2024-09-11 - CVE-2024-6091 published to NVD
- 2025-08-05 - Last updated in NVD database
Technical Details for CVE-2024-6091
Vulnerability Analysis
This vulnerability is classified under CWE-78 (Improper Neutralization of Special Elements used in an OS Command), commonly known as OS Command Injection. The flaw exists in AutoGPT's shell command execution filtering mechanism, which is designed to prevent potentially dangerous commands from being executed by the AI agent.
The denylist implementation performs a simple string comparison against a list of blocked commands. When a user attempts to execute a command like whoami, the system correctly identifies and blocks it. However, the check performs no path normalization, so it cannot account for redundant path segments embedded in the command string.
By inserting a ./ sequence in the path (e.g., /bin/./whoami instead of /bin/whoami), an attacker can effectively bypass the denylist while the underlying operating system still resolves and executes the intended command. This occurs because the file system treats /bin/./whoami as equivalent to /bin/whoami, but the string comparison in the denylist check fails to match.
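The flawed pattern can be illustrated with a minimal Python sketch. This is a hypothetical reconstruction of the behavior described above, not AutoGPT's actual code; the function name and denylist contents are illustrative:

```python
import os

# Illustrative denylist, mirroring the example in the advisory.
DENYLIST = {"whoami", "/bin/whoami"}

def is_blocked_naive(command_line: str) -> bool:
    """Naive denylist check: compares the raw executable string,
    with no path normalization. This is the flawed pattern."""
    executable = command_line.split()[0]
    return executable in DENYLIST

# The raw string "/bin/./whoami" is not in the denylist...
print(is_blocked_naive("/bin/whoami"))    # -> True  (blocked)
print(is_blocked_naive("/bin/./whoami"))  # -> False (slips through)

# ...yet the filesystem treats both paths as the same binary:
print(os.path.normpath("/bin/./whoami"))  # -> /bin/whoami
```

The last line shows the core of the bug: the OS collapses `/bin/./whoami` to `/bin/whoami` at execution time, while the string comparison never does.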
Root Cause
The root cause of this vulnerability lies in the inadequate path canonicalization before performing security checks against the denylist. The application compares raw command strings against the blocked list without first normalizing the paths to their canonical form. This allows various path manipulation techniques, including the use of . (current directory) and potentially .. (parent directory) sequences, to evade detection.
Proper mitigation requires resolving all paths to their canonical absolute form before comparison, ensuring that functionally equivalent paths are recognized as the same command regardless of how they are represented syntactically.
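A minimal sketch of that mitigation in Python, using standard-library path resolution. The function name and denylist contents are illustrative assumptions, not the actual patch:

```python
import os
import shutil

DENYLIST = {"whoami"}

def is_blocked_canonical(command_line: str) -> bool:
    """Canonicalize the executable path before the denylist check."""
    executable = command_line.split()[0]
    # Resolve bare command names via PATH lookup when possible.
    resolved = shutil.which(executable) or executable
    # Collapse ./ and ../ segments and follow symlinks.
    canonical = os.path.realpath(resolved)
    # Compare both the full canonical path and its basename.
    return canonical in DENYLIST or os.path.basename(canonical) in DENYLIST

print(is_blocked_canonical("/bin/./whoami"))       # -> True
print(is_blocked_canonical("/usr/../bin/whoami"))  # -> True
```

Because every syntactic variant resolves to the same canonical path, the obfuscated forms from the attack are now caught by the same check that blocks the plain command.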
Attack Vector
The attack can be executed remotely over the network without requiring authentication or user interaction. An attacker with access to the AutoGPT interface can craft malicious inputs that instruct the AI agent to execute shell commands using obfuscated paths.
The exploitation technique involves:
- Identifying which commands are blocked by the denylist configuration
- Constructing equivalent commands using path manipulation (e.g., adding ./ sequences)
- Submitting the obfuscated command to AutoGPT for execution
- The system fails to recognize the command as blocked and executes it with full privileges
For example, if whoami is blocked, an attacker can use /bin/./whoami, ./whoami (if in the appropriate directory), or variations like /usr/../bin/whoami to achieve the same result.
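Python's own path handling shows why these variants are interchangeable at the OS level:

```python
import os

# Obfuscated variants of the same command path all collapse
# to one canonical form once normalized.
variants = ["/bin/whoami", "/bin/./whoami", "/usr/../bin/whoami"]
for v in variants:
    print(v, "->", os.path.normpath(v))
# every variant normalizes to /bin/whoami
```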
Detection Methods for CVE-2024-6091
Indicators of Compromise
- Unusual shell command patterns in AutoGPT execution logs containing path traversal sequences like ./ or ../
- Commands being executed that should have been blocked by the configured denylist
- Unexpected system reconnaissance commands being run by the AutoGPT process
- Anomalous process spawning from the AutoGPT application with obfuscated command paths
Detection Strategies
- Monitor AutoGPT execution logs for commands containing path normalization characters (., ..) in unexpected positions
- Implement additional logging at the OS level to capture all commands spawned by the AutoGPT process
- Deploy file integrity monitoring on sensitive system binaries that may be targeted
- Use behavioral analysis to detect unusual command execution patterns from AI agent processes
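As a starting point for log review, the first strategy above can be sketched as a simple scan for dot-segment sequences in executed command paths. The log format and regular expression are illustrative assumptions, not a tuned detection rule:

```python
import re

# Flag "./" or "../" segments appearing after whitespace or a slash,
# which is where they surface in obfuscated command paths.
SUSPICIOUS = re.compile(r"(^|[\s/])\.\.?/")

# Hypothetical execution-log lines for demonstration.
log_lines = [
    "2024-09-11 12:00:01 exec: /bin/ls -la",
    "2024-09-11 12:00:02 exec: /bin/./whoami",
    "2024-09-11 12:00:03 exec: /usr/../bin/whoami",
]

flagged = [line for line in log_lines if SUSPICIOUS.search(line)]
for line in flagged:
    print("ALERT:", line)
```

A pattern this broad will flag some legitimate relative paths, so in practice it is better suited to triage than to automated blocking.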
Monitoring Recommendations
- Enable verbose logging in AutoGPT to capture all command execution attempts
- Configure SIEM alerts for shell commands containing path traversal patterns originating from AutoGPT
- Monitor for any outbound network connections following suspicious command execution
- Review AutoGPT agent activity logs for attempts to enumerate system information or access sensitive files
How to Mitigate CVE-2024-6091
Immediate Actions Required
- Upgrade AutoGPT Classic to a patched version that addresses this vulnerability
- Review and restrict the commands available to AutoGPT in your deployment
- Implement network segmentation to limit the impact of potential compromise
- Consider running AutoGPT in a sandboxed or containerized environment with minimal privileges
Patch Information
The vendor has released a fix for this vulnerability. The patch is available in the GitHub commit ef691359b774a1f9f80cf4f5ace9821967b718ed. Users should update to a version containing this fix immediately.
Additional details about the vulnerability discovery and disclosure can be found in the Huntr bounty report.
Workarounds
- Disable shell command execution entirely in AutoGPT if not required for your use case
- Implement additional command filtering at the operating system level using tools like AppArmor or SELinux
- Use an allowlist approach instead of a denylist to only allow explicitly approved commands
- Run AutoGPT under a restricted user account with minimal system access permissions
- Deploy AutoGPT in an isolated container with restricted system call capabilities
# Example: Running AutoGPT in a restricted container environment
docker run --security-opt=no-new-privileges \
--cap-drop=ALL \
--read-only \
--network=none \
autogpt:latest
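The allowlist workaround can be sketched in Python as well. The ALLOWED set and the /opt/tools paths below are hypothetical, chosen only to demonstrate the check:

```python
import os
import shlex

# Hypothetical allowlist of canonical binary paths. Anything
# that does not resolve to one of these is rejected.
ALLOWED = {"/opt/tools/mytool"}

def is_allowed(command_line: str) -> bool:
    """Permit a command only if its canonicalized executable
    path is explicitly on the allowlist."""
    executable = shlex.split(command_line)[0]
    canonical = os.path.realpath(executable)
    return canonical in ALLOWED

print(is_allowed("/opt/tools/./mytool"))           # -> True
print(is_allowed("/opt/tools/../../bin/whoami"))   # -> False
```

Note that canonicalization is still required here: without it, an attacker could be blocked from an allowed binary (or, with a naive prefix check, sneak past it) using the same dot-segment tricks this CVE describes.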


