The vulnerabilities disclosed in open-source AI and ML tools pose substantial risks, including remote code execution and information theft, with the most severe flaws identified in Lunary, ChuanhuChatGPT, and LocalAI.
CVE-2024-7474 and CVE-2024-7475, both in Lunary and both scoring 9.1 on the CVSS scale, present severe risks: the former allows attackers to access other users' data without authorization, while the latter allows them to log in as other users.
Protect AI highlighted that an attacker can manipulate a user-controlled parameter to update prompts belonging to another user, exemplifying the serious nature of IDOR (Insecure Direct Object Reference) vulnerabilities.
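To make the IDOR pattern concrete, here is a minimal, self-contained Python sketch of the general bug class: a handler that trusts a client-supplied record ID and never checks ownership. All names (update_prompt_vulnerable, PROMPTS, and so on) are hypothetical illustrations and do not reflect Lunary's actual code.

```python
# Hypothetical sketch of an IDOR: the prompt ID comes from the client,
# and the vulnerable handler never verifies that the caller owns it.

PROMPTS = {
    # prompt_id -> {"owner": user_id, "text": prompt text}
    101: {"owner": "alice", "text": "Summarize quarterly sales."},
    102: {"owner": "bob",   "text": "Draft a support reply."},
}

def update_prompt_vulnerable(current_user: str, prompt_id: int, new_text: str) -> None:
    """IDOR: any authenticated user can overwrite any prompt by guessing its ID."""
    PROMPTS[prompt_id]["text"] = new_text

def update_prompt_fixed(current_user: str, prompt_id: int, new_text: str) -> None:
    """Same operation, but the record's owner must match the session user."""
    record = PROMPTS.get(prompt_id)
    if record is None or record["owner"] != current_user:
        raise PermissionError("prompt not found or not owned by caller")
    record["text"] = new_text

if __name__ == "__main__":
    # "bob" tampers with the prompt_id parameter to hit "alice"'s prompt.
    update_prompt_vulnerable("bob", 101, "attacker-controlled text")  # silently succeeds
    try:
        update_prompt_fixed("bob", 101, "attacker-controlled text")   # rejected
    except PermissionError as exc:
        print("blocked:", exc)
```

The fix is the ownership check itself: authorization has to be enforced on every object lookup, not just at login.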
ChuanhuChatGPT's CVE-2024-5982 involves a path traversal flaw that could enable arbitrary code execution and the exposure of sensitive data, making it a high-severity concern.
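As a generic illustration of how path traversal works (a hypothetical load_template helper, not ChuanhuChatGPT's actual code), the sketch below shows how an unsanitized filename containing "../" sequences can escape the intended directory, and one way to constrain it.

```python
# Hypothetical path traversal sketch: user-supplied filenames must be
# resolved and confined to a base directory before being read.

from pathlib import Path

BASE_DIR = Path("templates").resolve()

def load_template_vulnerable(filename: str) -> bytes:
    """'../' sequences in filename walk out of BASE_DIR,
    e.g. '../../etc/passwd' reads an arbitrary server file."""
    return (BASE_DIR / filename).read_bytes()

def load_template_fixed(filename: str) -> bytes:
    """Resolve the final path and refuse anything outside BASE_DIR."""
    target = (BASE_DIR / filename).resolve()
    if not target.is_relative_to(BASE_DIR):  # requires Python 3.9+
        raise ValueError("path escapes the template directory")
    return target.read_bytes()
```

Checking the resolved path rather than the raw string is the key design choice; simple substring filters on "../" are routinely bypassed with encoded or mixed separators.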