Multiple flaws, three critical, reported in open-source AI and ML models
Take action: Specific flaws affect multiple AI/ML tools. Review the details below, check whether you are affected, then update the affected tools or isolate them properly.
Several vulnerabilities have been reported across various open-source AI and ML models, with potential impacts ranging from remote code execution to unauthorized data access. The vulnerabilities were identified through Protect AI’s Huntr bug bounty platform and affect tools such as:
- ChuanhuChatGPT
- Lunary
- LocalAI
Vulnerability Summary
- CVE-2024-7474 (CVSS score 9.1) - Lunary IDOR Vulnerability - Allows an authenticated user to view or delete other users’ data, leading to unauthorized data access and potential data loss.
- CVE-2024-7475 (CVSS score 9.1) - Lunary Improper Access Control - Enables attackers to update the SAML configuration, permitting unauthorized logins and access to sensitive data.
- CVE-2024-5982 (CVSS score 9.1) - ChuanhuChatGPT Path Traversal Vulnerability - Allows arbitrary code execution, creation of directories, and potential exposure of sensitive data through a flawed user upload feature.
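The IDOR flaw above (CVE-2024-7474) comes down to a missing ownership check on a client-supplied record ID. The sketch below is a minimal Python illustration of the bug class and its fix, using a hypothetical in-memory prompt store; `PROMPTS` and `delete_prompt` are made-up names, not Lunary’s actual API.

```python
# Hypothetical in-memory stand-in for a database of user-owned records.
PROMPTS = {
    1: {"owner_id": "alice", "text": "summarize this document"},
    2: {"owner_id": "bob", "text": "draft a reply"},
}

def delete_prompt(requesting_user: str, prompt_id: int) -> None:
    """Delete a prompt only if the requester owns it.

    The IDOR bug class arises when a server trusts a client-supplied
    prompt_id and skips this ownership check.
    """
    prompt = PROMPTS.get(prompt_id)
    if prompt is None or prompt["owner_id"] != requesting_user:
        # Returning the same error for "missing" and "not yours"
        # avoids leaking whether the record exists.
        raise PermissionError("prompt not found")
    del PROMPTS[prompt_id]
```

The same pattern applies to reads and updates: every operation keyed by a client-supplied ID needs an ownership (or role) check server-side.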
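Path traversal bugs like the one in ChuanhuChatGPT’s upload feature (CVE-2024-5982) are typically fixed by resolving the candidate path and verifying it stays under the intended root. A minimal Python sketch, assuming a hypothetical `UPLOAD_DIR` (this is not ChuanhuChatGPT’s actual code):

```python
import os

UPLOAD_DIR = "/srv/app/uploads"  # hypothetical upload root

def safe_upload_path(filename: str) -> str:
    """Resolve an uploaded filename inside UPLOAD_DIR, rejecting traversal."""
    root = os.path.realpath(UPLOAD_DIR)
    # Normalize the candidate path, collapsing any ../ components.
    candidate = os.path.realpath(os.path.join(root, filename))
    # The resolved path must still be inside the upload root.
    if os.path.commonpath([candidate, root]) != root:
        raise ValueError(f"path traversal attempt: {filename!r}")
    return candidate
```

Checking the resolved (not raw) path matters: a filename like `../../etc/passwd` looks harmless after naive string filtering but escapes the root once joined and normalized.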
Other Significant Vulnerabilities
- CVE-2024-6983 (CVSS score 8.8) - LocalAI Arbitrary Code Execution - Attackers can upload a malicious configuration file to trigger arbitrary code execution in this self-hosted LLM tool.
- CVE-2024-7473 (CVSS score 7.5) - Lunary IDOR in Prompt Management - An attacker can manipulate a user-controlled parameter to update other users’ prompts without authorization, potentially altering sensitive data.
- CVE-2024-7010 (CVSS score 7.5) - LocalAI Timing Attack for API Key Inference - A side-channel attack allows attackers to infer valid API keys by analyzing server response times, enabling unauthorized access.
- CVE-2024-8396 (CVSS score 7.8) - Deep Java Library (DJL) Remote Code Execution - An arbitrary file overwrite bug in the package’s untar function enables remote code execution.
- CVE-2024-0129 (CVSS score 6.3) - NVIDIA NeMo Path Traversal Vulnerability - A path traversal flaw in NVIDIA’s generative AI framework may result in code execution and data tampering.
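The timing side channel behind CVE-2024-7010 exploits comparisons that return as soon as a byte differs, so response time leaks how much of a guessed key prefix is correct. The following Python sketch contrasts the leaky pattern with a constant-time comparison; the key value is made up, and LocalAI’s real key handling differs.

```python
import hmac

def check_api_key_leaky(presented: str, valid: str) -> bool:
    # Vulnerable pattern: equality short-circuits at the first
    # mismatching byte, so timing reveals the matching prefix length.
    return presented == valid

def check_api_key_constant_time(presented: str, valid: str) -> bool:
    # hmac.compare_digest runs in time independent of where the
    # inputs differ, defeating the timing side channel.
    return hmac.compare_digest(presented, valid)
```

Any secret comparison reachable by an attacker (API keys, tokens, HMAC signatures) should use a constant-time primitive like `hmac.compare_digest` rather than `==`.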
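The DJL flaw (CVE-2024-8396) belongs to the classic unsafe-untar class: archive members with names like `../../...` are written outside the extraction directory. A defensive sketch using Python’s `tarfile` (illustrative only; DJL itself is a Java library) validates every member before extracting:

```python
import os
import tarfile

def safe_extract(archive_path: str, dest_dir: str) -> None:
    """Extract a tar archive, rejecting members that would escape dest_dir.

    Mitigates the arbitrary-file-overwrite bug class (e.g. an entry named
    '../../home/user/.bashrc') by checking each member's resolved path.
    """
    dest = os.path.realpath(dest_dir)
    with tarfile.open(archive_path) as tar:
        for member in tar.getmembers():
            target = os.path.realpath(os.path.join(dest, member.name))
            if os.path.commonpath([target, dest]) != dest:
                raise ValueError(f"blocked traversal entry: {member.name!r}")
        tar.extractall(dest)
```

Recent Python versions (3.12+) also offer built-in extraction filters (e.g. `tar.extractall(dest, filter="data")`) that enforce similar restrictions.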
Users should update their AI/ML installations to the latest versions to secure their supply chains and mitigate these vulnerabilities.