
"In December 2024, the popular Ultralytics AI library was compromised, installing malicious code that hijacked system resources for cryptocurrency mining. In August 2025, malicious Nx packages leaked 2,349 GitHub, cloud, and AI credentials. Throughout 2024, ChatGPT vulnerabilities allowed unauthorized extraction of user data from AI memory. The result: 23.77 million secrets were leaked through AI systems in 2024 alone, a 25% increase from the previous year."
"The major security frameworks organizations rely on, NIST Cybersecurity Framework, ISO 27001, and CIS Control, were developed when the threat landscape looked completely different. NIST CSF 2.0, released in 2024, focuses primarily on traditional asset protection. ISO 27001:2022 addresses information security comprehensively but doesn't account for AI-specific vulnerabilities. CIS Controls v8 covers endpoint security and access controls thoroughly-yet none of these frameworks provide specific guidance on AI attack vectors."
The affected organizations were not careless: they had comprehensive security programs, passed audits, and met compliance requirements. Yet major frameworks like NIST CSF, ISO 27001, and CIS Controls lack AI-specific guidance, and AI introduces attack surfaces that do not map to existing control families.
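As a rough illustration of how the credentials in these incidents end up exposed, the following Python sketch scans a process's environment variables for a few well-known token formats, the kind a compromised build dependency could read and exfiltrate. The regex patterns and the scan_environment helper are illustrative assumptions, not an authoritative detection ruleset.

```python
import os
import re

# Illustrative patterns only; real secret scanners use far larger,
# regularly updated rulesets with entropy checks and verification.
SECRET_PATTERNS = {
    "GitHub token": re.compile(r"gh[pousr]_[A-Za-z0-9]{36,}"),
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "OpenAI-style API key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
}


def scan_environment() -> list[tuple[str, str]]:
    """Return (variable name, pattern label) pairs for values that look like secrets."""
    findings = []
    for name, value in os.environ.items():
        for label, pattern in SECRET_PATTERNS.items():
            if pattern.search(value):
                findings.append((name, label))
    return findings


if __name__ == "__main__":
    for var, kind in scan_environment():
        print(f"Possible {kind} exposed in environment variable {var}")
```

A check like this only flags what a malicious post-install script could already read; in practice it would be paired with pinned, hash-verified dependencies so a package update cannot silently introduce code that harvests these values.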
Read at The Hacker News