CVE-2026-22813 Overview
CVE-2026-22813 is a critical Cross-Site Scripting (XSS) vulnerability in OpenCode, an open source AI coding agent. The vulnerability exists in the markdown renderer used for processing LLM (Large Language Model) responses, which inserts arbitrary HTML directly into the DOM without proper sanitization. The web interface lacks both DOMPurify sanitization and Content Security Policy (CSP) protections, allowing attackers who can control or manipulate LLM responses to achieve JavaScript execution within chat sessions.
Critical Impact
An attacker controlling LLM responses can achieve arbitrary JavaScript execution on the http://localhost:4096 origin, potentially leading to session hijacking, data exfiltration, and further exploitation of the local environment.
Affected Products
- OpenCode versions prior to 1.1.10
- OpenCode AI coding agent web interface
- Systems running OpenCode on localhost port 4096
Discovery Timeline
- 2026-01-12 - CVE-2026-22813 published to the NVD
- 2026-01-13 - NVD record last updated
Technical Details for CVE-2026-22813
Vulnerability Analysis
This vulnerability falls under CWE-79 (Improper Neutralization of Input During Web Page Generation), commonly known as Cross-Site Scripting. The core issue stems from the markdown renderer's failure to sanitize HTML content embedded within LLM responses before inserting it into the Document Object Model (DOM).
When OpenCode receives responses from the AI model, these responses are processed through a markdown renderer to display formatted content to the user. However, the renderer does not employ any HTML sanitization library such as DOMPurify to strip potentially malicious HTML tags and attributes. Furthermore, the web interface running on localhost:4096 operates without a Content Security Policy, which would otherwise serve as a defense-in-depth mechanism to prevent inline script execution even if malicious HTML were injected.
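The kind of sanitization the renderer lacks can be sketched as follows. This is an illustrative example only, not OpenCode's actual fix or the DOMPurify algorithm; the blocked-tag list and attribute rules are example heuristics chosen for the sketch.

```python
# Illustrative HTML sanitizer sketch: strips dangerous tags, inline event
# handlers, and javascript: URLs before content reaches the DOM. The tag
# and attribute rules here are examples, not an official allowlist.
from html import escape
from html.parser import HTMLParser

BLOCKED_TAGS = {"script", "iframe", "object", "embed"}

class Sanitizer(HTMLParser):
    def __init__(self):
        super().__init__()  # convert_charrefs=True: entities arrive as data
        self.out = []
        self.skip_depth = 0  # >0 while inside a blocked tag

    def handle_starttag(self, tag, attrs):
        if tag in BLOCKED_TAGS:
            self.skip_depth += 1
            return
        if self.skip_depth:
            return
        # Drop on* event-handler attributes and javascript: URL values
        safe = [(k, v) for k, v in attrs
                if not k.startswith("on")
                and not (v or "").strip().lower().startswith("javascript:")]
        attr_str = "".join(f' {k}="{escape(v or "")}"' for k, v in safe)
        self.out.append(f"<{tag}{attr_str}>")

    def handle_endtag(self, tag):
        if tag in BLOCKED_TAGS:
            self.skip_depth = max(0, self.skip_depth - 1)
            return
        if not self.skip_depth:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        # Re-escape text content so decoded entities cannot smuggle markup
        if not self.skip_depth:
            self.out.append(escape(data))

def sanitize(html: str) -> str:
    s = Sanitizer()
    s.feed(html)
    s.close()
    return "".join(s.out)
```

Production code should prefer a maintained library (DOMPurify in the browser, or an equivalent server-side sanitizer) over a hand-rolled parser like this one.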
The attack scenario involves manipulating the LLM's output to include malicious JavaScript payloads. This could be accomplished through prompt injection techniques, compromised model responses, or man-in-the-middle attacks on the communication between the client and the AI backend.
Root Cause
The root cause is the absence of input sanitization in the markdown rendering pipeline combined with missing Content Security Policy headers on the web interface. The application trusts LLM responses as safe content and renders them directly without validating or cleaning HTML elements. This trust assumption creates a dangerous pathway where any HTML or JavaScript embedded in model responses will be executed in the browser context.
Attack Vector
The attack vector is network-based and requires user interaction. An attacker must find a way to influence the LLM's response content, which could be achieved through several methods:
- Prompt Injection: Crafting inputs that cause the LLM to include malicious HTML/JavaScript in its response
- Model Compromise: If the AI model or its API endpoint is compromised, attackers can inject arbitrary content
- Network Interception: Man-in-the-middle attacks could modify LLM responses in transit if communications are not properly secured
Once malicious JavaScript executes on the localhost:4096 origin, the attacker gains access to any data and functionality available to that origin, including stored credentials, session tokens, and the ability to make authenticated requests on behalf of the user.
The vulnerability mechanism involves injecting HTML tags such as <script>, <img onerror>, or event handler attributes directly through the markdown renderer. Without sanitization, these elements become active JavaScript execution points when rendered in the browser.
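The mechanism can be illustrated with a short sketch, assuming a hypothetical renderer: HTML embedded in a model response becomes live markup if inserted into the DOM verbatim, but inert text if entity-encoded first.

```python
# Illustration of the injection mechanism (hypothetical renderer, not
# OpenCode's actual code path).
from html import escape

llm_response = 'Done. <img src=x onerror=alert(1)>'

# A non-sanitizing renderer inserts this string into the DOM as-is,
# turning the onerror attribute into a JavaScript execution point.
unsafe_html = llm_response

# Entity-encoding neutralizes the markup: the browser displays the tag
# as literal text instead of parsing it.
safe_text = escape(llm_response)
print(safe_text)  # Done. &lt;img src=x onerror=alert(1)&gt;
```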
For technical details and proof-of-concept information, refer to the GitHub Security Advisory.
Detection Methods for CVE-2026-22813
Indicators of Compromise
- Unusual JavaScript execution or network requests originating from the OpenCode web interface
- LLM responses containing HTML tags such as <script>, <iframe>, or event handlers (onclick, onerror, etc.)
- Unexpected outbound connections from localhost:4096 to external domains
- Browser console errors or warnings related to blocked inline scripts (if CSP is added)
Detection Strategies
- Monitor LLM response content for HTML tags and JavaScript code patterns before rendering
- Implement logging on the OpenCode web interface to track rendered content and script execution attempts
- Use browser developer tools or network monitoring to identify suspicious outbound requests from the application
- Deploy web application firewalls or browser extensions that can detect and alert on XSS patterns
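The first strategy above, scanning LLM responses for HTML and JavaScript patterns before rendering, can be sketched as a simple heuristic filter. The pattern list below is an example, not an exhaustive or official detection rule.

```python
# Illustrative heuristic: flag LLM responses containing raw HTML patterns
# commonly abused for XSS, for logging or alerting before rendering.
import re

SUSPICIOUS = [
    re.compile(r"<\s*(script|iframe|object|embed)\b", re.I),  # dangerous tags
    re.compile(r"\bon[a-z]+\s*=", re.I),                      # inline event handlers
    re.compile(r"javascript\s*:", re.I),                      # javascript: URLs
]

def flag_response(text: str) -> list[str]:
    """Return the patterns that matched, for logging/alerting."""
    return [p.pattern for p in SUSPICIOUS if p.search(text)]
```

Heuristics like this will produce false positives (for example, prose that happens to contain `onclick=`), so they are better suited to alerting than to blocking.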
Monitoring Recommendations
- Enable verbose logging for all LLM interactions and response content
- Set up alerting for any network requests originating from the OpenCode origin to unexpected external endpoints
- Monitor for changes to local storage, session storage, or cookies associated with the localhost:4096 origin
- Consider implementing client-side integrity monitoring to detect DOM manipulation
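The logging recommendation above can be sketched as a thin audit wrapper around response handling. The function and logger names here are hypothetical, not part of OpenCode.

```python
# Illustrative audit hook: log every LLM response before rendering, and
# raise the log level when raw HTML is present. Names are hypothetical.
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-audit")

HTML_TAG = re.compile(r"<[a-zA-Z][^>]*>")

def audit_response(session_id: str, text: str) -> str:
    """Log the response for later review, then pass it through unchanged."""
    if HTML_TAG.search(text):
        log.warning("session %s: response contains raw HTML", session_id)
    else:
        log.info("session %s: response clean", session_id)
    return text
```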
How to Mitigate CVE-2026-22813
Immediate Actions Required
- Upgrade OpenCode to version 1.1.10 or later immediately
- Review recent chat sessions for any suspicious or unexpected content in LLM responses
- Clear browser cache and local storage associated with the OpenCode application
- Audit any API keys, credentials, or sensitive data that may have been accessible through the web interface
Patch Information
The vulnerability has been addressed in OpenCode version 1.1.10. The fix is available through the official release channels. Users should upgrade to this version or later to remediate the vulnerability. For detailed patch information and release notes, consult the GitHub Security Advisory.
Workarounds
- Avoid using the OpenCode web interface until upgraded to the patched version
- If web interface usage is necessary, disable JavaScript execution in the browser for the localhost:4096 origin using browser extensions
- Implement a local reverse proxy with CSP headers that block inline script execution
- Monitor and validate all LLM responses manually before trusting their content
# Example: Adding CSP headers via nginx reverse proxy (temporary workaround)
# Place inside the server block that fronts the OpenCode web interface
location / {
    proxy_pass http://127.0.0.1:4096;
    add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline';" always;
}
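The effect of the proxy workaround can be demonstrated in miniature with a local HTTP shim that injects the same CSP header. The port and handler below are arbitrary examples for illustration, not a production proxy.

```python
# Minimal sketch of a CSP-injecting HTTP shim, mirroring what the nginx
# workaround does. Port 8765 is an arbitrary example value.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

CSP = ("default-src 'self'; script-src 'self'; "
       "style-src 'self' 'unsafe-inline';")

class CSPHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<p>proxied content</p>"
        self.send_response(200)
        self.send_header("Content-Security-Policy", CSP)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def fetch_csp(port: int = 8765) -> str:
    """Start the shim, fetch one response, and return its CSP header."""
    server = HTTPServer(("127.0.0.1", port), CSPHandler)
    thread = threading.Thread(target=server.serve_forever, daemon=True)
    thread.start()
    try:
        with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
            return resp.headers.get("Content-Security-Policy", "")
    finally:
        server.shutdown()
        server.server_close()
```

A browser enforcing this header would refuse to run inline scripts injected through the renderer, which is the defense-in-depth behavior the workaround aims to restore.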