Cybercriminals are increasingly leveraging artificial intelligence (AI) to craft sophisticated spear phishing attacks, as illustrated by a $25 million fraud involving deepfake technology: a Hong Kong finance worker was deceived into transferring funds after joining a videoconference in which the company's CFO was impersonated. In response, cybersecurity firms are turning AI against the problem, using it to identify deepfakes. Vulnerabilities within AI companies themselves have also emerged, as when a research team discovered an unsecured database at DeepSeek AI that exposed sensitive information, underscoring the need for robust security measures as adversarial tactics escalate.
Cybercriminals increasingly exploit AI for personalized spear phishing attacks, exemplified by a $25 million fraud facilitated through deepfake technology.
AI's role in cybersecurity is evolving as well; firms are employing AI to analyze video data and detect deepfake threats.
Wiz Research discovered an unsecured database at DeepSeek AI that exposed sensitive data, including chat histories and API secrets, posing significant risks.
The exposure permitted unauthorized control over DeepSeek's database, underscoring how security practices at AI companies can lag in a competitive, fast-moving landscape.
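Exposures of this kind are often discoverable with basic external checks. Below is a minimal sketch, using only Python's standard library, of how a security team might probe a host for database services that answer on common ports without credentials; the host name, port list, and service labels are illustrative assumptions and are not details taken from the Wiz report on DeepSeek.

```python
# Minimal sketch: probe a host for database services reachable without authentication.
# TARGET_HOST and COMMON_DB_PORTS are hypothetical values chosen for illustration.
import socket
from http.client import HTTPConnection

TARGET_HOST = "db.example.com"  # hypothetical host under the assessor's own authorization
COMMON_DB_PORTS = {
    5432: "PostgreSQL",
    3306: "MySQL",
    9200: "Elasticsearch (HTTP)",
    8123: "ClickHouse (HTTP)",
    27017: "MongoDB",
}

def port_is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def http_answers_without_auth(host: str, port: int, timeout: float = 3.0) -> bool:
    """For HTTP-speaking databases, check whether '/' returns 200 with no credentials attached."""
    try:
        conn = HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", "/")
        status = conn.getresponse().status
        conn.close()
        return status == 200
    except OSError:
        return False

if __name__ == "__main__":
    for port, service in COMMON_DB_PORTS.items():
        if not port_is_open(TARGET_HOST, port):
            continue
        note = ""
        if "HTTP" in service and http_answers_without_auth(TARGET_HOST, port):
            note = " and responds to unauthenticated HTTP requests"
        print(f"[!] {service} reachable on {TARGET_HOST}:{port}{note}")
```

A check like this only flags reachability and missing authentication; confirming what data is actually exposed, as Wiz did, requires further authorized inspection.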