
"OpenAI is grappling with its own contract with the U.S. Department of War, amending it to say its systems 'shall not be intentionally used for domestic surveillance of U.S. persons and nationals'. The turmoil follows Anthropic being declared by government to be a supply chain risk, as the firm states that it resisted attempts to allow its services to be used for 'surveillance of Americans and fully autonomous weapons'."
"Now, it is trying to come to a compromise, a bizarre turn of events as the U.S. Central Command apparently used Anthropic's Claude AI tool as part of a new offensive. It has been speculated that because Anthropic restricts military contractors from using the technology, then Palantir will also have to stop using the firm's technology."
OpenAI is amending its Department of War contract to prohibit intentional domestic surveillance of U.S. persons. Anthropic was designated a supply chain risk after resisting pressure to enable surveillance and autonomous-weapons capabilities. Despite these restrictions, U.S. Central Command reportedly used Anthropic's Claude AI in offensive operations. Anthropic's usage restrictions may also force military contractors such as Palantir to stop using its systems. These developments highlight the tension between major technology companies' stated ethical commitments and their involvement in military applications, raising questions about the implications of Big Tech's military engagement.
#ai-military-contracts #domestic-surveillance-restrictions #autonomous-weapons #big-tech-defense-partnerships #supply-chain-risk
Read at Privacy International