CVE-2024-34359: llama-cpp-python RCE Vulnerability

CVE-2024-34359 is a remote code execution vulnerability in llama-cpp-python caused by unsafe Jinja2 template parsing. Attackers can exploit this flaw to execute arbitrary code. This article covers technical details, impact, and mitigation.

Updated: January 22, 2026

CVE-2024-34359 Overview

CVE-2024-34359 is a Server-Side Template Injection (SSTI) vulnerability affecting llama-cpp-python, the Python bindings for the llama.cpp large language model framework. The vulnerability exists in how the library processes chat templates from .gguf model files, allowing attackers to achieve remote code execution through maliciously crafted model metadata.

The Llama class in llama.py loads chat templates from .gguf file metadata and passes them to Jinja2ChatFormatter.to_chat_handler() without proper sandboxing. This unsandboxed jinja2.Environment allows attackers to inject malicious Jinja2 template code that executes arbitrary commands when the template is rendered during chat interactions.
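A minimal sketch of the vulnerable pattern (simplified and illustrative, not the library's actual internals — only the `tokenizer.chat_template` metadata key is taken from the GGUF convention):

```python
from jinja2 import Environment

# Simplified sketch of the vulnerable pattern: the chat template string comes
# straight from untrusted .gguf metadata and is compiled in a default,
# unsandboxed Jinja2 Environment. (Function and variable names here are
# illustrative, not llama-cpp-python's actual code.)
def build_chat_handler(metadata: dict):
    template_src = metadata["tokenizer.chat_template"]  # attacker-controlled
    env = Environment()                                 # no sandboxing
    template = env.from_string(template_src)
    # Rendering later executes whatever expressions the template contains.
    return lambda messages: template.render(messages=messages)

# A benign template behaves as expected; a malicious one runs attacker logic
# at the same point in the flow.
handler = build_chat_handler(
    {"tokenizer.chat_template": "{{ messages | length }} message(s)"}
)
print(handler([{"role": "user", "content": "hi"}]))
```

The key point is that the template source and the render call are on opposite sides of a trust boundary: the string comes from a downloaded file, but it is compiled and rendered with full access to the Python runtime.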

Critical Impact

Attackers can achieve remote code execution by distributing malicious .gguf model files containing crafted Jinja2 payloads in their metadata. When users load these poisoned models, arbitrary code executes on the host system with the privileges of the running process.

Affected Products

  • llama-cpp-python (versions prior to 0.2.72, which contains the fix)
  • Applications using llama-cpp-python to load untrusted .gguf model files
  • AI/ML pipelines that automatically process user-supplied or third-party model files

Discovery Timeline

  • 2024-05-14 - CVE-2024-34359 published to NVD
  • 2024-11-21 - Last updated in NVD database

Technical Details for CVE-2024-34359

Vulnerability Analysis

The vulnerability stems from a fundamental security oversight in how llama-cpp-python handles Jinja2 template processing. The library extracts chat template strings from the metadata section of .gguf model files and processes them through Jinja2's template engine without implementing any sandboxing mechanisms.

Jinja2 templates are powerful and can access Python objects, call methods, and traverse object hierarchies. When an attacker controls the template content, they can craft payloads that escape the template context and execute arbitrary Python code. This is a well-known attack vector that requires careful sandboxing when processing untrusted templates.

The attack surface is particularly concerning in the AI/ML ecosystem where model sharing is common. Users frequently download models from community repositories, model hubs, and third-party sources. A malicious actor could distribute seemingly legitimate .gguf models containing weaponized chat templates.

Root Cause

The root cause is the use of an unsandboxed jinja2.Environment when parsing chat templates from model metadata. The Jinja2ChatFormatter class directly processes template strings without restricting access to dangerous Python objects, built-in functions, or module imports. This violates the security principle of never trusting user-controlled input, especially when that input is interpreted as executable code.

CWE-76 (Improper Neutralization of Equivalent Special Elements) accurately categorizes this vulnerability, as the application fails to properly neutralize Jinja2 template syntax that can be interpreted as code execution commands.

Attack Vector

The attack is network-based and requires user interaction—specifically, a victim must load a malicious .gguf model file. The attack flow proceeds as follows:

  1. Attacker crafts a .gguf model file with a malicious Jinja2 template embedded in its metadata
  2. The poisoned model is distributed through model sharing platforms, social engineering, or supply chain compromise
  3. Victim downloads and loads the model using llama-cpp-python
  4. During initialization, the Llama class extracts the chat template from metadata
  5. The template is passed to Jinja2ChatFormatter without sandboxing
  6. When the chat handler is invoked (during prompt construction), the malicious template renders and executes arbitrary code

Jinja2 SSTI payloads typically leverage Python's object introspection capabilities to access dangerous classes like subprocess.Popen or os.system. A carefully constructed payload can chain through __mro__, __subclasses__, and __globals__ to reach code execution primitives.
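To illustrate benignly how far that chain reaches, the sketch below walks the same `__class__` → `__mro__` → `__subclasses__` path from inside a template rendered by a default Jinja2 `Environment` and merely counts the classes it can see; a real payload would search that list for a code execution primitive instead:

```python
from jinja2 import Environment

# Benign demonstration: in a default (unsandboxed) Environment, a template can
# walk from an ordinary string up the class hierarchy and enumerate every class
# loaded in the interpreter. An attacker extends this chain to reach something
# like subprocess.Popen; this sketch only counts the reachable classes.
env = Environment()
payload = "{{ ''.__class__.__mro__[1].__subclasses__() | length }}"
count = int(env.from_string(payload).render())
print(f"classes reachable from an untrusted template: {count}")
assert count > 100  # hundreds, including process-spawning ones
```

Note that nothing in this template is Python source in the usual sense; it is pure Jinja2 syntax, which is why input that "looks like a template" must be treated as executable code.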

For technical details on the vulnerability mechanism and patch implementation, refer to the GitHub Security Advisory GHSA-56xg-wfcc-g829.

Detection Methods for CVE-2024-34359

Indicators of Compromise

  • Unusual process spawning from Python processes running llama-cpp-python
  • Network connections initiated by model loading processes to unexpected destinations
  • Unexpected file system modifications during or after model loading
  • Presence of .gguf files with suspiciously large or complex metadata sections
  • Error logs showing Jinja2 template rendering failures with unusual template content

Detection Strategies

  • Monitor for child processes spawned by Python applications using llama-cpp-python
  • Implement file integrity monitoring on directories where model files are stored
  • Analyze .gguf model files for suspicious metadata content before loading
  • Use behavioral analysis to detect anomalous activity during model initialization
  • Deploy endpoint detection and response (EDR) solutions to identify code execution from template engines
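One crude way to triage a model file before loading it is a byte-level scan for introspection tokens that rarely appear in legitimate chat templates. This is a heuristic sketch, not a GGUF parser, and the token list is illustrative:

```python
# Heuristic triage of a .gguf file before loading it: scan the raw bytes for
# Jinja2/Python introspection tokens that rarely appear in legitimate chat
# templates. This is NOT a GGUF parser and can produce false positives and
# false negatives; the token list below is illustrative only.
SUSPICIOUS_TOKENS = [
    b"__subclasses__", b"__globals__", b"__mro__", b"__builtins__",
    b"subprocess", b"os.system", b"popen",
]

def triage_model_file(path: str) -> list[str]:
    with open(path, "rb") as f:
        data = f.read()  # metadata sits near the start; reading it all is simplest
    low = data.lower()
    return sorted({t.decode() for t in SUSPICIOUS_TOKENS if t.lower() in low})

# Example usage: refuse to load flagged files.
# hits = triage_model_file("model.gguf")
# if hits:
#     raise RuntimeError(f"suspicious tokens in model metadata: {hits}")
```

A hit does not prove compromise, and a clean scan does not prove safety; treat this as one signal alongside provenance checks and sandboxed loading.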

Monitoring Recommendations

  • Enable verbose logging for llama-cpp-python applications to capture template processing events
  • Implement network segmentation for systems that process untrusted model files
  • Set up alerts for unexpected outbound connections from AI/ML workloads
  • Review model file provenance and implement model signing/verification where possible

How to Mitigate CVE-2024-34359

Immediate Actions Required

  • Update llama-cpp-python to the latest patched version immediately
  • Audit all .gguf model files currently in use for suspicious metadata
  • Restrict model loading to trusted sources only until patching is complete
  • Isolate systems running vulnerable versions from production networks
  • Review application logs for signs of exploitation

Patch Information

The vulnerability has been addressed in the llama-cpp-python repository. The fix implements proper sandboxing for Jinja2 template processing, preventing access to dangerous Python objects and methods during template rendering.

Apply the security patch by updating to the latest version of llama-cpp-python. The fix is available in commit b454f40a9a1787b2b5659cd2cb00819d983185df. For complete details, refer to the GitHub Security Advisory.
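The effect of the fix can be reproduced with Jinja2's own sandbox: the same introspection that succeeds in a default `Environment` raises a `SecurityError` in a sandboxed one. This is a sketch of the mitigation pattern; the exact environment class used by the patch may differ:

```python
from jinja2.exceptions import SecurityError
from jinja2.sandbox import ImmutableSandboxedEnvironment

# The mitigation pattern: render untrusted templates in Jinja2's sandbox, which
# refuses access to unsafe attributes such as __class__ or __mro__ at render
# time. (Sketch of the approach; the patch's exact environment class may differ.)
env = ImmutableSandboxedEnvironment()
template = env.from_string("{{ ''.__class__.__mro__ }}")
try:
    template.render()
    print("template rendered (unexpected)")
except SecurityError as exc:
    print(f"blocked by sandbox: {exc}")
```

Because the check happens during rendering rather than compilation, applications should expect and handle `SecurityError` wherever untrusted templates are evaluated.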

Workarounds

  • Only load .gguf models from trusted and verified sources
  • Run llama-cpp-python in a sandboxed environment (containers, VMs) with minimal privileges
  • Implement network isolation for systems processing untrusted models
  • Disable or remove chat template functionality if not required for your use case
  • Use application-level firewalls to restrict outbound connections from model processing workloads
```bash
# Update llama-cpp-python to the latest patched version
pip install --upgrade llama-cpp-python

# Verify the installed version includes the security fix
pip show llama-cpp-python | grep Version

# Run model processing in an isolated container with minimal privileges
docker run --rm --read-only --network=none -v /path/to/trusted/models:/models:ro llama-container
```


Vulnerability Details

  • Type: RCE
  • Vendor/Tech: llama-cpp-python
  • Severity: CRITICAL
  • CVSS Score: 9.6
  • CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:H/I:H/A:H
  • EPSS Probability: 59.17%
  • Known Exploited: No

Impact Assessment

  • Confidentiality: High
  • Integrity: High
  • Availability: High

CWE References

  • CWE-76

Technical References

  • GitHub Commit Update
  • GitHub Security Advisory GHSA-56xg-wfcc-g829