
"The Defense Department designated the American AI startup as a supply chain risk after the AI company refused to give it unrestricted access to its tech for applications the company said its AI could not safely support, such as mass surveillance and fully autonomous weapons."
"Our lawyers have studied the designation and have concluded that Anthropic products, including Claude, can remain available to our customers - other than the Department of War - through platforms such as M365, GitHub, and Microsoft's AI Foundry, and that we can continue to work with Anthropic on non-defense related projects."
"The supply-chain risk designation is typically reserved for foreign adversaries. For Anthropic, the designation means that the Pentagon can't use the company's products - and also requires any company or agency that works with the Pentagon to certify that they don't use Anthropic's models, either."
The Defense Department designated Anthropic as a supply chain risk after the AI company refused to provide unrestricted access to its technology for applications it said its AI could not safely support, such as mass surveillance and fully autonomous weapons. The designation, typically reserved for foreign adversaries, bars the Pentagon from using Anthropic's products and requires any company or agency working with the Pentagon to certify that it does not use Anthropic's models either. Microsoft, which serves many federal agencies including the Defense Department, said its lawyers concluded the designation does not prohibit it from continuing to offer Anthropic's Claude models to customers, other than the Defense Department itself, through platforms such as M365, GitHub, and Microsoft's AI Foundry, or from working with Anthropic on non-defense projects.
Read at TechCrunch