Microsoft Is Suing People Who Did Bad Things With Its AI
Briefly

Microsoft has amended its lawsuit to name four developers from the Storm-2139 cybercrime network, who allegedly exploited the company's AI tools to create illegal deepfake content, including celebrity pornography. The network's members fall into three groups: creators who developed tools for abuse, providers who modified and distributed those tools, and users who generated the synthetic content. Initially filed in December against unnamed 'John Doe' defendants, the lawsuit now identifies specific individuals in light of new evidence, underscoring Microsoft's aim to deter future cybercrime linked to AI misuse.
Microsoft's updated lawsuit names four developers who allegedly misused its AI tools to create deepfake content, highlighting issues of cybersecurity and ethical AI use.
The individuals linked to the Storm-2139 cybercrime network exemplify a growing problem in tech: the malicious use of AI tools to generate harmful and unlawful content.
Read at Futurism