US Defense Department takes issue with Anthropic over ethical stance
The US Department of Defense is on a collision course with Anthropic, which may prove bad news for the AI company. According to the political news site The Hill, the DoD is reviewing the terms of its relationship with Anthropic because the company restricts use of its Claude model for certain military and surveillance purposes: Anthropic refuses to allow its tools to be used to develop weaponry that fires without human input or to enable mass surveillance of Americans. The Pentagon regards those restrictions unfavorably and is reassessing partnerships to ensure contractors assist warfighters. A Pentagon spokesperson confirmed that "the Department of War's relationship with Anthropic is being reviewed," framing the review as necessary for troop effectiveness and public safety. The disagreement centers on the ethical limits imposed by Anthropic versus the DoD's operational requirements.
Read at Computerworld