Can the Military Prevent AI From Going Full Terminator?
Briefly

"There are laws on the books that deal with mass surveillance and population surveillance and autonomous weapons, but there's some nuance there that is still unaddressed. There is a narrow but important gap between the Department's 'all lawful use' stipulation in contracts and Anthropic's preferred 'no autonomous weapons.' On the one hand, you could interpret these two positions as being essentially aligned. But autonomous weapons are not covered in U.S. law in the way you may think."
The Pentagon terminated its relationship with Anthropic after the company refused to allow Claude to be used for autonomous warfare or mass surveillance. Hours later, OpenAI secured a Pentagon deal while asserting it maintained the same ethical restrictions. Meanwhile, U.S. military strikes against Iran reportedly made use of Claude during combat operations. A Georgetown security researcher explains that while existing laws address mass surveillance and autonomous weapons, a significant gap remains between the Pentagon's "all lawful use" contract language and Anthropic's "no autonomous weapons" position. Current U.S. policy requires human judgment in the use of autonomous weapons, and Congress must be notified only if that policy changes.
Read at Intelligencer