The Pentagon Is Pushing Anthropic to Make the Most Evil A.I. Possible. Will It?
Briefly
"Anthropic, the maker of Claude, wants to be seen as the major A.I. company most focused on safety. The company has spent a lot of time telling reporters about its commitment to developing A.I. to be as ethical and helpful as possible. Scenarios in which Claude destroys things have seemingly been top of mind for Anthropic's researchers."
"The federal government wants Anthropic to hand unrestricted access to its tools to the Department of Defense. Anthropic has tried to condition its services in two ways: One, it can't be used to build autonomous weapons that could fire without human oversight; and two, it can't be used for mass surveillance of American citizens."
"U.S. Secretary of Defense Pete Hegseth has threatened to categorize Anthropic as a 'supply chain risk,' a move that could blacklist the company from the government and its contractors. At the same time, the government has reportedly considered invoking the Defense Production Act in an effort to force Anthropic to hand over what Hegseth wants."
Anthropic, positioned as the A.I. company most focused on safety, faces a critical test of its principles. The Department of Defense demands unrestricted access to Claude, while Anthropic has conditioned its services with two restrictions: no autonomous weapons that fire without human oversight, and no mass surveillance of American citizens. The Defense Department has not explained why these safeguards are unacceptable. Secretary of Defense Pete Hegseth has threatened to classify Anthropic as a "supply chain risk," a move that could blacklist the company from the government and its contractors, and the government has reportedly considered invoking the Defense Production Act to force compliance. Anthropic's leadership must decide by Friday whether to maintain its safety commitments or yield to government pressure.
Read at Slate Magazine