More than 30 security vulnerabilities have been disclosed in various open source artificial intelligence (AI) and machine learning (ML) models, some of which could lead to remote code execution and information theft.
This flaw, identified in tools such as ChuanhuChatGPT, Lunary, and LocalAI, was reported as part of Protect AI’s Huntr bug bounty platform.
The most serious flaws are two that affect Lunary, a production toolkit for large-scale language models (LLMs).
CVE-2024-7474 (CVSS Score: 9.1) – Insecure Direct Object Reference (IDOR) vulnerability could allow an authenticated user to view or delete external users, resulting in unauthorized data access. may occur and result in data loss. CVE-2024-7475 (CVSS Score: 9.1) – Improper access control vulnerability could allow an attacker to update SAML configurations, thereby logging in as an unauthorized user and accessing sensitive information. Masu.
Additionally, another IDOR vulnerability (CVE-2024-7473, CVSS score: 7.5) was discovered in Lunary that could allow a malicious attacker to manipulate user-controlled parameters to update other users’ prompts. It will be.
“The attacker logs in as User A and intercepts the request to update the prompt,” Protect AI explains in the advisory. “By changing the ‘id’ parameter in the request to the ‘id’ of a prompt that belongs to user B, an attacker can update user B’s prompt without permission.”
The third critical vulnerability concerns a path traversal flaw in ChuanhuChatGPT’s user upload functionality (CVE-2024-5982, CVSS score: 9.1), which allows arbitrary code execution, directory creation, and Confidential data may be leaked.
Two security flaws have also been identified in LocalAI, an open source project that allows users to run self-hosted LLMs, allowing malicious attackers to create arbitrary code by uploading a malicious configuration file. (CVE-2024-6983, CVSS score: 8.8). ) to infer a valid API key by analyzing server response time (CVE-2024-7010, CVSS score: 7.5).
“This vulnerability allows an attacker to perform a timing attack, which is a type of side-channel attack,” Protect AI said. “By measuring the time it takes to process requests with different API keys, an attacker can guess the correct API key one character at a time.”
Rounding out the list of vulnerabilities is a remote code execution flaw affecting the Deep Java Library (DJL). This is due to an arbitrary file overwriting bug rooted in the package’s untar function (CVE-2024-8396, CVSS score: 7.8).
This disclosure comes after NVIDIA released a patch that fixes a path traversal flaw (CVE-2024-0129, CVSS score: 6.3) in the NeMo generated AI framework that could lead to code execution and data tampering. It was done.
We recommend that users update their installations to the latest version to secure their AI/ML supply chain and protect against potential attacks.
This vulnerability disclosure also follows the release of Protect AI’s Vulnhuntr, an open source Python static code analyzer that leverages LLM to detect zero-day vulnerabilities in Python codebases.
Vulnhuntr works by breaking code into smaller chunks to flag potential security issues without overwhelming LLM’s context window (the amount of information LLM can parse in a single chat request). I will.
“It automatically searches the project file for the first file that might process user input,” say Dan McInerney and Marcello Salvati. “Then it takes that entire file and responds to all potential vulnerabilities.”
“Using this list of potential vulnerabilities, one function/class at a time throughout the project, one function/class at a time, for each potential vulnerability until you are sure you have the entire final call chain.” Completes the entire function call chain from user input to server output.
Security weaknesses in AI frameworks aside, a new jailbreak technique published by Mozilla’s 0Day Investigative Network (0Din) allows malicious prompts and emojis encoded in hex format (e.g. “✍️ a sqlinj➡️🐍 It turns out that “😈 for me)” can be used for the following purposes: Bypasses OpenAI ChatGPT protections and exploits known security flaws.
“The jailbreak tactic exploits a loophole in the language by instructing the model to handle a hexadecimal conversion, a seemingly innocuous task,” said security researcher Marco Figueroa. “Because the model is optimized to follow natural language instructions, such as performing encoding and decoding tasks, it is inherently aware that converting hexadecimal values can produce harmful output. plug.”
“This weakness arises because the language model is designed to follow instructions step-by-step, but lacks deep context awareness to evaluate the safety of individual steps in the broader context of the end goal. I will.”