CVE-2025-3199 Overview
A critical improper authorization vulnerability has been identified in ageerle ruoyi-ai versions up to 2.0.1. The vulnerability affects the API Interface component, specifically within the SysModelController.java file located at ruoyi-modules/ruoyi-system/src/main/java/org/ruoyi/system/controller/system/SysModelController.java. This flaw allows unauthenticated remote attackers to manipulate model information without proper authorization checks, potentially compromising the integrity and confidentiality of the AI system's configuration.
Critical Impact
Unauthenticated attackers can remotely modify model information in ruoyi-ai deployments, enabling unauthorized access to sensitive AI model configurations and potential system manipulation.
Affected Products
- ageerle ruoyi-ai versions up to 2.0.1
- Ruoyi-AI SysModelController API Interface
- ruoyi-modules/ruoyi-system component
Discovery Timeline
- 2025-04-04 - CVE-2025-3199 published to NVD
- 2025-12-08 - Last updated in NVD database
Technical Details for CVE-2025-3199
Vulnerability Analysis
This vulnerability is classified as CWE-266 (Incorrect Privilege Assignment), which occurs when a product incorrectly assigns privileges to a user or entity. In the case of ruoyi-ai, the SysModelController.java file exposes API endpoints that handle model management operations without implementing proper authentication and authorization checks. This allows any remote attacker to access and modify sensitive model configurations without logging in to the system.
The vulnerability is particularly concerning for organizations deploying ruoyi-ai as part of their AI infrastructure, as unauthorized modifications to AI models could lead to data integrity issues, unauthorized data access, or manipulation of AI behavior for malicious purposes.
Root Cause
The root cause of this vulnerability lies in missing permission validation on the model management API endpoints in SysModelController.java. The controller exposes functionality such as model export without requiring authentication or proper role-based access control. The original implementation failed to include the @SaCheckPermission annotation that enforces authorization checks before allowing access to sensitive operations.
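The failure mode can be illustrated with a small, purely hypothetical sketch (none of these names come from the actual ruoyi-ai code): when a framework treats a route with no registered permission requirement as open, forgetting a single annotation leaves that route callable by anyone, which mirrors the missing @SaCheckPermission on the export endpoint.

```java
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the CWE-266 pattern: routes without a registered
// permission requirement are treated as open, so a forgotten registration
// (like the missing @SaCheckPermission annotation) leaves the route
// accessible to unauthenticated callers.
public class MissingPermissionDemo {
    // Permissions required per route; "/system/model/export" was never
    // registered, mirroring the missing annotation in the controller.
    static final Map<String, String> REQUIRED = Map.of(
        "/system/model/list", "system:model:list"
    );

    static boolean isAllowed(String route, Set<String> callerPermissions) {
        String needed = REQUIRED.get(route);
        // Bug: a route with no registered permission is allowed for everyone.
        return needed == null || callerPermissions.contains(needed);
    }

    public static void main(String[] args) {
        Set<String> anonymous = Set.of(); // a caller with no permissions at all
        System.out.println(isAllowed("/system/model/list", anonymous));   // false
        System.out.println(isAllowed("/system/model/export", anonymous)); // true: the flaw
    }
}
```

The fix is the inverse of this logic: every sensitive route must carry an explicit permission check, which is exactly what the patch below adds.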
Attack Vector
The vulnerability can be exploited remotely over the network. An attacker does not require any prior authentication or user interaction to exploit this flaw. The attack scenario involves:
- Identifying a ruoyi-ai deployment accessible over the network
- Sending crafted HTTP requests directly to the vulnerable API endpoints
- Bypassing authentication to modify model information or export sensitive data
- Potentially leveraging unauthorized access to compromise AI model integrity
The following patch demonstrates the security fix applied to address this vulnerability:
  /**
   * Export the system model list (导出系统模型列表)
   */
+ @SaCheckPermission("system:model:export")
  @Log(title = "系统模型", businessType = BusinessType.EXPORT)
  @PostMapping("/export")
  public void export(SysModelBo bo, HttpServletResponse response) {
Source: GitHub Commit Details
The fix adds the @SaCheckPermission("system:model:export") annotation to enforce proper authorization before allowing the export operation, ensuring only authenticated users with the appropriate permission can access this functionality.
Detection Methods for CVE-2025-3199
Indicators of Compromise
- Unexpected API requests to /system/model/export or similar model management endpoints from unauthenticated sources
- Anomalous modifications to AI model configurations without corresponding authenticated user sessions
- Log entries showing model operations without associated authentication tokens
- Increased API traffic to SysModelController endpoints from external IP addresses
Detection Strategies
- Monitor HTTP access logs for requests to model management API endpoints lacking authentication headers
- Implement Web Application Firewall (WAF) rules to detect unauthenticated access attempts to sensitive API routes
- Review application logs for model export or modification events that lack corresponding user authentication records
- Deploy API gateway monitoring to flag requests to sensitive endpoints without valid session tokens
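The first and third strategies above can be automated with a simple log scan. This is a hypothetical sketch: the single-line log format (client IP, quoted request, and an auth field that is "-" when no token was presented) and the /system/model/ prefix are assumptions, so adapt the pattern to your server's actual access-log format.

```java
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical log-scanning sketch: flag access-log lines that hit model
// management endpoints without any authentication token. The log format
// and endpoint prefix are assumptions; adjust both for your deployment.
public class ModelEndpointAudit {
    // Expected shape: <ip> "<METHOD> <path>" auth=<token or ->
    static final Pattern LINE = Pattern.compile(
        "^(\\S+) \"(GET|POST) (\\S+)\" auth=(\\S+)$");

    // True when the line is a model-management request with no auth token.
    static boolean isSuspicious(String logLine) {
        Matcher m = LINE.matcher(logLine);
        if (!m.matches()) return false;
        return m.group(3).startsWith("/system/model/") && m.group(4).equals("-");
    }

    public static void main(String[] args) {
        List<String> log = List.of(
            "10.0.0.5 \"POST /system/model/export\" auth=-",        // unauthenticated export
            "10.0.0.6 \"POST /system/model/export\" auth=token123", // authenticated
            "10.0.0.7 \"GET /index\" auth=-");                      // unrelated route
        log.stream().filter(ModelEndpointAudit::isSuspicious)
           .forEach(System.out::println);
    }
}
```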
Monitoring Recommendations
- Enable detailed logging on all SysModelController API endpoints to capture request metadata
- Configure alerting for any model modification events outside normal business hours or from unexpected IP ranges
- Implement real-time monitoring for authentication bypass attempts on protected resources
- Regularly audit API access patterns to identify anomalous behavior indicative of exploitation
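The business-hours and IP-range alerting rule above could be sketched as follows. The 08:00–18:00 window and the crude 10.0.0.0/8 prefix test are illustrative policy choices, not recommendations; a production rule would use your organization's actual hours and a real CIDR library.

```java
import java.time.LocalTime;

// Hypothetical alerting rule: flag model-modification events outside
// business hours or from outside an expected internal address range.
// The hours and the "10." prefix check are illustrative assumptions.
public class ModelChangeAlert {
    static final LocalTime OPEN = LocalTime.of(8, 0);
    static final LocalTime CLOSE = LocalTime.of(18, 0);

    static boolean outsideBusinessHours(LocalTime t) {
        return t.isBefore(OPEN) || t.isAfter(CLOSE);
    }

    static boolean outsideExpectedRange(String ip) {
        return !ip.startsWith("10."); // crude stand-in for a real CIDR check
    }

    static boolean shouldAlert(LocalTime eventTime, String sourceIp) {
        return outsideBusinessHours(eventTime) || outsideExpectedRange(sourceIp);
    }

    public static void main(String[] args) {
        System.out.println(shouldAlert(LocalTime.of(3, 30), "10.1.2.3"));    // 03:30 change
        System.out.println(shouldAlert(LocalTime.of(11, 0), "203.0.113.9")); // external IP
        System.out.println(shouldAlert(LocalTime.of(11, 0), "10.1.2.3"));    // normal
    }
}
```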
How to Mitigate CVE-2025-3199
Immediate Actions Required
- Upgrade ruoyi-ai to version 2.0.2 or later immediately to apply the security fix
- If immediate upgrade is not possible, restrict network access to the SysModelController API endpoints
- Implement network-level access controls to limit exposure of the vulnerable API interface
- Review system logs for any evidence of prior exploitation attempts
Patch Information
The vulnerability has been addressed in ruoyi-ai version 2.0.2. The fix is available in commit c0daf641fb25b244591b7a6c3affa35c69d321fe. Organizations running affected versions should upgrade to the patched version immediately. Additional details are available in the GitHub Release v2.0.2 and the GitHub Commit Details.
Workarounds
- Deploy a reverse proxy or API gateway with authentication enforcement in front of the ruoyi-ai application
- Implement network segmentation to restrict access to the vulnerable API endpoints from untrusted networks
- Add custom middleware or filter to enforce authentication on SysModelController routes
- Temporarily disable the model export functionality if not required for business operations
# Example: Restrict access to the vulnerable endpoints using nginx
location /system/model/ {
    # Deny all external access until patched
    allow 10.0.0.0/8;
    allow 192.168.0.0/16;
    deny all;

    # Require an Authorization header
    if ($http_authorization = "") {
        return 401;
    }

    proxy_pass http://ruoyi-backend;
}
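The "custom middleware or filter" workaround can be reduced to one routing decision: reject sensitive model routes unless some credential is present. This hypothetical sketch expresses that decision as a pure function; in a real deployment the same logic would live in a servlet Filter or an API-gateway plugin, and presence of a header is of course no substitute for actually validating the token.

```java
// Hypothetical sketch of the filter workaround: deny model-management
// routes unless an Authorization header is present. A real filter must
// also validate the credential, not merely check that one exists.
public class ModelRouteGuard {
    // Decide the HTTP status for a request before it reaches the controller.
    static int decide(String path, String authorizationHeader) {
        boolean sensitive = path.startsWith("/system/model/");
        boolean hasCredential =
            authorizationHeader != null && !authorizationHeader.isBlank();
        if (sensitive && !hasCredential) {
            return 401; // block unauthenticated access to model routes
        }
        return 200; // pass the request through to the application
    }

    public static void main(String[] args) {
        System.out.println(decide("/system/model/export", null));         // 401
        System.out.println(decide("/system/model/export", "Bearer abc")); // 200
        System.out.println(decide("/index", null));                       // 200
    }
}
```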


