CVE-2024-5182 Overview
A path traversal vulnerability exists in mudler/LocalAI version 2.14.0: an attacker can abuse the model parameter of the model deletion endpoint to delete arbitrary files on the system. By sending a request whose model parameter contains directory traversal sequences, an attacker can navigate outside the intended model directory and target critical system files or application data for deletion. The vulnerability stems from insufficient input validation and sanitization of the model parameter in the file deletion functionality.
Critical Impact
Remote attackers can delete arbitrary files on the server without authentication, potentially causing complete system compromise, data loss, or denial of service by removing critical application or system files.
Affected Products
- mudler LocalAI version 2.14.0
- Earlier mudler LocalAI versions without the security patch
Discovery Timeline
- 2024-06-20 - CVE-2024-5182 published to NVD
- 2024-11-21 - Last updated in NVD database
Technical Details for CVE-2024-5182
Vulnerability Analysis
This path traversal vulnerability (CWE-22) allows attackers to manipulate the model parameter in the model deletion API endpoint to escape the designated model directory and access arbitrary file paths on the server. The vulnerability is exploitable remotely over the network without requiring authentication or user interaction, making it particularly dangerous for exposed LocalAI instances.
The attack mechanism involves injecting path traversal sequences such as ../ into the model parameter. When the application processes a delete request, it fails to properly validate or sanitize this input, allowing the traversal sequences to navigate up the directory tree and target files in unintended locations.
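The flawed pattern can be sketched in a few lines. This is a hypothetical illustration of the bug class, not LocalAI's actual code; the directory path and function name are assumptions.

```python
import os

# Assumed model storage root, for illustration only
MODEL_DIR = "/opt/local-ai/models"

def delete_model_unsafe(model_name: str) -> str:
    # The user-supplied name is joined onto the root with no validation,
    # so "../" components survive the join and walk out of MODEL_DIR.
    target = os.path.join(MODEL_DIR, model_name)
    # Returned (instead of os.remove) to show what would be deleted
    return os.path.normpath(target)

# A traversal payload resolves far outside the model directory:
# delete_model_unsafe("../../../etc/passwd") -> "/etc/passwd"
```

Each `../` cancels one directory level of the storage root, so three of them are enough to reach the filesystem root from a three-level-deep model directory.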
Root Cause
The root cause is insufficient input validation and sanitization of user-supplied input in the model parameter. The application does not implement proper path canonicalization or boundary checks to ensure that file operations remain within the intended model storage directory. This allows specially crafted input containing directory traversal sequences to escape the sandboxed directory and affect arbitrary files on the filesystem.
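A minimal sketch of the missing check, assuming a Python-style implementation (LocalAI itself is written in Go; the same canonicalize-then-compare logic applies there): resolve the candidate path to its canonical form, then verify it is still inside the storage root before any file operation.

```python
import os

MODEL_DIR = "/opt/local-ai/models"  # assumed storage root for illustration

def resolve_model_path(model_name: str) -> str:
    """Canonicalize the requested path and refuse anything outside MODEL_DIR."""
    candidate = os.path.realpath(os.path.join(MODEL_DIR, model_name))
    root = os.path.realpath(MODEL_DIR)
    # commonpath collapses to the root only when candidate stays inside it
    if os.path.commonpath([root, candidate]) != root:
        raise ValueError(f"path traversal attempt rejected: {model_name!r}")
    return candidate
```

Using `realpath` rather than plain string comparison also defeats symlink tricks, since the check runs on the fully resolved path.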
Attack Vector
The attack is network-based and can be executed by sending crafted HTTP requests to the LocalAI model deletion endpoint. An attacker would:
- Identify an exposed LocalAI instance running version 2.14.0 or earlier
- Craft a DELETE request whose model parameter contains traversal sequences (e.g., ../../etc/passwd or similar paths)
- Send the request; the vulnerable server resolves the path and deletes the targeted file outside the model directory
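The shape of such a request can be sketched as follows. The endpoint path and JSON body here are assumptions for illustration, not LocalAI's documented API; the point is the traversal payload carried in the model parameter.

```python
from urllib.request import Request

def build_malicious_delete(base_url: str, payload: str) -> Request:
    # Hypothetical endpoint path; the real route may differ.
    body = ('{"model": "%s"}' % payload).encode()
    return Request(
        url=f"{base_url}/models/delete",
        data=body,
        method="DELETE",
        headers={"Content-Type": "application/json"},
    )

# The request object is built but never sent here.
req = build_malicious_delete("http://victim:8080", "../../../etc/passwd")
```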
The referenced commit shows dependency updates that were part of the broader security fix:
accelerate
auto-gptq==0.7.1
-grpcio==1.63.0
+grpcio==1.64.0
protobuf
torch
certifi
Source: GitHub Commit Update
Detection Methods for CVE-2024-5182
Indicators of Compromise
- HTTP requests to model deletion endpoints containing path traversal sequences (../, ..%2f, %2e%2e/)
- Unexpected file deletions in system directories or outside the LocalAI model storage path
- Audit logs showing DELETE operations with suspicious file paths in the request parameters
- Missing critical system or application files without legitimate administrative action
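A simple check for the first indicator above can be sketched as a log-line filter. Decoding twice catches both single-encoded (`..%2f`, `%2e%2e/`) and double-encoded variants before matching.

```python
import re
from urllib.parse import unquote

# Matches "../" or "..\" after decoding
TRAVERSAL = re.compile(r"\.\./|\.\.\\")

def has_traversal(raw_request_path: str) -> bool:
    # Two unquote passes normalize %2e%2e%2f and %252e-style double encoding
    decoded = unquote(unquote(raw_request_path))
    return bool(TRAVERSAL.search(decoded))
```

Run this over access-log request paths (or DELETE request parameters) to flag candidate exploitation attempts for review.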
Detection Strategies
- Implement web application firewall (WAF) rules to detect and block path traversal patterns in request parameters
- Monitor LocalAI API logs for DELETE requests containing directory traversal sequences
- Deploy file integrity monitoring (FIM) on critical system files to detect unauthorized deletions
- Review HTTP access logs for anomalous patterns targeting the model deletion endpoint
Monitoring Recommendations
- Enable verbose logging for all LocalAI API endpoints, particularly DELETE operations
- Configure alerts for any file system operations outside the designated model directory
- Monitor for sudden changes in disk space or missing files that could indicate exploitation
- Implement network-level monitoring for suspicious traffic patterns to LocalAI services
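The file-integrity idea above can be sketched as a minimal baseline-and-diff check: snapshot hashes of watched files, then report any that later change or disappear. Paths and the polling strategy are left to the deployment; this is a sketch, not a substitute for a real FIM tool.

```python
import hashlib
import os

def snapshot(paths):
    """Record a sha256 per watched file; None marks a missing file."""
    state = {}
    for p in paths:
        if os.path.exists(p):
            with open(p, "rb") as f:
                state[p] = hashlib.sha256(f.read()).hexdigest()
        else:
            state[p] = None
    return state

def changed(before, after):
    """Paths whose hash changed, including files that vanished."""
    return [p for p in before if before[p] != after.get(p)]
```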
How to Mitigate CVE-2024-5182
Immediate Actions Required
- Upgrade LocalAI to the latest patched version immediately
- Restrict network access to LocalAI instances using firewall rules or network segmentation
- Review system logs for any signs of prior exploitation
- Implement input validation at the network perimeter using a WAF with path traversal detection rules
Patch Information
The vulnerability has been addressed in a security update from mudler. The fix implements proper input validation and path sanitization for the model parameter. Organizations should apply the patch available at the GitHub Commit. Additional details about the vulnerability and the fix can be found in the Huntr Vulnerability Bounty report.
Workarounds
- Place LocalAI behind a reverse proxy that validates and sanitizes input parameters before forwarding requests
- Restrict the LocalAI service to run with minimal file system permissions, limiting the scope of potential damage
- Implement network-level access controls to limit which hosts can access the LocalAI API
- Use container isolation to restrict file system access to only necessary directories
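The reverse-proxy workaround above can be sketched as generic middleware that rejects traversal sequences before a request reaches the LocalAI backend. This is shown as WSGI middleware for illustration; a real deployment would express the same rule in nginx, a WAF, or whatever proxy fronts the service.

```python
from urllib.parse import unquote

class TraversalFilter:
    """Reject requests whose path or query contains traversal sequences."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        raw = environ.get("PATH_INFO", "") + "?" + environ.get("QUERY_STRING", "")
        # Double-decode to catch encoded variants like ..%2f and %252e%252e
        if "../" in unquote(unquote(raw)):
            start_response("400 Bad Request", [("Content-Type", "text/plain")])
            return [b"rejected"]
        return self.app(environ, start_response)
```

This is defense in depth only; it does not replace upgrading to a patched LocalAI version.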
# Example: Restrict LocalAI network access using iptables
# Allow only trusted internal networks to access LocalAI (default port 8080)
iptables -A INPUT -p tcp --dport 8080 -s 192.168.1.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 8080 -j DROP

