
Researchers at Protect AI have discovered 34 vulnerabilities in various open-source Artificial Intelligence (AI) and Machine Learning (ML) projects.

In its October report, Protect AI provided a comprehensive overview of the vulnerabilities.

The findings were uncovered through Protect AI’s innovative bug bounty program, Huntr. This program engages a community of over 15,000 members to identify vulnerabilities within the open-source AI/ML supply chain.

The researchers highlighted four projects, along with their corresponding vulnerabilities, as follows:

1. Lunary:

Lunary, a toolkit designed for Large Language Models (LLMs), was found to contain two significant vulnerabilities, each with a CVSS score of 9.1:

  • CVE-2024-7474 (Insecure Direct Object Reference): This vulnerability allows authenticated users to view or delete other users’ data. It stems from the application’s failure to properly validate user-controlled ID values, potentially leading to unauthorised access to sensitive data.
  • CVE-2024-7475 (Improper Access Control): This flaw allows attackers to alter the Security Assertion Markup Language (SAML) configuration, making it possible to log in as an unauthorised user and access confidential information.

In addition to these, another IDOR vulnerability with a CVSS score of 7.5 was identified:

  • CVE-2024-7473: This vulnerability enables attackers to modify other users’ prompts by manipulating user-controlled parameters. An attacker could intercept requests and change the prompt IDs to those belonging to other users, enabling unauthorised updates.
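The IDOR pattern behind these flaws can be sketched in a few lines. The data model and function names below are hypothetical illustrations of the vulnerability class, not Lunary's actual code: the bug is trusting a client-supplied object ID, and the fix is scoping every lookup to the authenticated user.

```python
# Hypothetical in-memory store standing in for a prompts table.
PROMPTS = {
    1: {"owner": "alice", "text": "alice's prompt"},
    2: {"owner": "bob", "text": "bob's prompt"},
}

def update_prompt_insecure(prompt_id: int, new_text: str) -> None:
    # Vulnerable (IDOR): any authenticated user can update any prompt
    # simply by supplying another user's prompt ID.
    PROMPTS[prompt_id]["text"] = new_text

def update_prompt_secure(current_user: str, prompt_id: int, new_text: str) -> None:
    # Fixed: verify the object actually belongs to the requesting user
    # before allowing the update.
    prompt = PROMPTS.get(prompt_id)
    if prompt is None or prompt["owner"] != current_user:
        raise PermissionError("prompt not found or not owned by user")
    prompt["text"] = new_text
```

The same ownership check applies to reads and deletes, which is what CVE-2024-7474's view/delete variant would require.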

2. ChuanhuChatGPT:

This chatbot application, used to simulate human-like conversation, was found to be vulnerable to:

  • CVE-2024-5982 (Path Traversal): This critical vulnerability, with a CVSS score of 9.1, exists in the user upload function. Attackers can exploit it by manipulating file paths to access restricted directories, potentially leading to arbitrary code execution, directory manipulation, and exposure of sensitive data.
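Path traversal bugs of this kind arise when a user-supplied filename is joined onto a base directory without validation. A minimal mitigation sketch in Python follows; the upload root and function name are illustrative, not ChuanhuChatGPT's actual code.

```python
import os

# Hypothetical upload directory for illustration.
UPLOAD_ROOT = os.path.realpath("/srv/app/uploads")

def safe_upload_path(filename: str) -> str:
    # Resolve the candidate path, then confirm it still lies inside
    # UPLOAD_ROOT; anything escaping via "../" sequences or an absolute
    # path is rejected.
    candidate = os.path.realpath(os.path.join(UPLOAD_ROOT, filename))
    if candidate != UPLOAD_ROOT and not candidate.startswith(UPLOAD_ROOT + os.sep):
        raise ValueError(f"path traversal attempt: {filename!r}")
    return candidate
```

Resolving with `realpath` before the prefix check is the important step: a naive check on the unresolved string can be bypassed with `../` segments.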

3. LocalAI:

This platform, which allows users to run AI models locally, is vulnerable to the following:

  • CVE-2024-6983 (Remote Code Execution): This vulnerability enables malicious actors to execute arbitrary code by uploading a malicious configuration file. The vulnerability has a CVSS score of 8.8.
  • CVE-2024-7010 (Timing Attack): This vulnerability allows attackers to deduce valid API keys through response-time analysis: by measuring how long the server takes to process different candidate keys, an attacker can recover the correct key one character at a time. The vulnerability was rated with a CVSS score of 7.5.
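The weakness exploited by such timing attacks is typically a non-constant-time string comparison, which can bail out at the first mismatching character and so leak how much of a guess is correct. The sketch below (placeholder key and function names, not LocalAI's actual code) contrasts a vulnerable check with the standard constant-time fix:

```python
import hmac

# Placeholder key for illustration only.
VALID_KEY = "sk-example-0123456789"

def check_key_insecure(candidate: str) -> bool:
    # Vulnerable: equality comparison can return as soon as the first
    # characters differ, so response time correlates with how many
    # leading characters of the guess are correct.
    return candidate == VALID_KEY

def check_key_secure(candidate: str) -> bool:
    # Fixed: hmac.compare_digest examines the full input regardless of
    # where the first mismatch occurs, defeating character-by-character
    # timing measurements.
    return hmac.compare_digest(candidate.encode(), VALID_KEY.encode())
```

In practice, keys should also be stored and compared as hashes rather than plaintext, which further removes the timing signal.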

4. Deep Java Library (DJL):

DJL is a Java-based library designed for developing and deploying deep learning models, simplifying the integration of AI into Java applications. The following vulnerability was identified in DJL:

  • CVE-2024-8396 (Arbitrary File Overwrite & RCE): This vulnerability allows an attacker to exploit the DJL package’s untar function, leading to arbitrary file overwrite and potential remote code execution. The CVSS score is 7.8.
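The class of bug behind archive-extraction flaws like this is often called a "tar slip": archive entries whose names contain `../` escape the intended extraction directory and overwrite arbitrary files. DJL itself is Java, so the sketch below is an illustrative Python analogue of the standard mitigation, not DJL's code:

```python
import os
import tarfile

def safe_untar(archive_path: str, dest: str) -> None:
    # Reject any archive member whose resolved target path would land
    # outside the destination directory before extracting anything.
    dest = os.path.realpath(dest)
    with tarfile.open(archive_path) as tar:
        for member in tar.getmembers():
            target = os.path.realpath(os.path.join(dest, member.name))
            if target != dest and not target.startswith(dest + os.sep):
                raise ValueError(f"blocked traversal entry: {member.name!r}")
        tar.extractall(dest)
```

Validating every member before extracting any of them avoids leaving a partially extracted archive behind when a malicious entry is found.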

Protect AI advises organisations to take the following actions to mitigate the vulnerabilities:

  1. Upgrade any vulnerable software versions listed in the report to the latest versions.
  2. Use Protect AI’s detection and remediation tools, available on its Sightline platform.
  3. Seek professional assistance from Protect AI’s team if any identified vulnerabilities impact active production environments.

Protect AI is positioning itself as a frontrunner in AI security. In addition to its bug bounty platform, the company this month launched Vulnhuntr, an open-source static code analyser that leverages AI to detect zero-day vulnerabilities in Python codebases.

Protect AI continues to invite bug bounty hunters to test projects on Huntr, an initiative that should surface more vulnerabilities and ultimately strengthen AI security.
