Why AI Companies Are Suddenly Worried About Theft

"Three Chinese AI firms had been waging "industrial-scale campaigns" to "illicitly extract" proprietary information from Anthropic model Claude "to improve their own models." Using 24,000 fraudulent customer accounts to generate more than 16 million exchanges, these firms appeared to be using a technique called "distillation" to seize "powerful capabilities" from the company's products "in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently.""
"Google shared a report on its own concerns about rising "model extraction attempts," or "distillation attacks," carried out by "private sector entities all over the world and researchers seeking to clone proprietary logic." Around the same time, OpenAI sent a letter to legislators providing an "updated assessment" on the issue, stating it had observed activity "indicative of ongoing attempts by DeepSeek to distill frontier models of OpenAI and other US frontier labs, including through new, obfuscated methods.""
"Anthropic suggests that model scraping could be used to build powerful tools stripped of safeguards meant to "prevent state and non-state actors from using AI to, for example, develop bioweapons or carry out malicious cyber activities" and that "unprotected capabilities" could be used by "authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance.""
Anthropic, Google, and OpenAI have reported coordinated efforts by foreign entities and researchers to extract proprietary information from their AI models through distillation attacks. Chinese AI firms created over 24,000 fraudulent accounts, generating more than 16 million exchanges, to illicitly obtain Claude's capabilities. Google documented rising model extraction attempts by private-sector entities and researchers worldwide. OpenAI identified ongoing distillation efforts by DeepSeek using obfuscated methods. These attacks let competitors replicate powerful AI capabilities in a fraction of the development time and cost. The companies warn that scraped models stripped of safety safeguards could enable state and non-state actors to develop bioweapons, conduct offensive cyber operations, and deploy disinformation campaigns, and they call for rapid, coordinated action by industry and policymakers.
Read at Intelligencer