An Anthropic spokesperson remained tight-lipped on whether "Claude, or any other AI model, was used for any specific operation, classified or otherwise" in a statement to the WSJ, but noted that "any use of Claude - whether in the private sector or across government - is required to comply with our Usage Policies, which govern how Claude can be deployed."
Axios could not confirm the precise role Claude played in the operation to capture Maduro. The military has used Claude in the past to analyze satellite imagery and intelligence, but the sources said Claude was used during the active operation, not just in preparations for it. No Americans were killed in the raid; Cuba and Venezuela both said dozens of their soldiers and security personnel were killed.
At the start of 2024, Anthropic, Google, Meta, and OpenAI were united against military use of their AI tools. But over the next 12 months, something changed. In January, OpenAI quietly rescinded its ban on using AI for "military and warfare" purposes, and soon after it was reported to be working on "a number of projects" with the Pentagon.
The California-based Trevor Project, which runs a 24-hour hotline and digital crisis services for LGBTQ+ young people, just received a surprise $45 million gift from Jeff Bezos's ex-wife, MacKenzie Scott. The Trevor Project drew an outcry last summer, joined by many celebrities, after it lost federal funding for the LGBTQ+-focused hotline it administered under the national 988 suicide-prevention hotline. [Chronicle]
Secretary of Defense Pete Hegseth (who has dubbed himself Secretary of War, though the name has not been legally changed by Congress) promised that the platform "puts the worlds [sic] most powerful frontier AI models directly into the hands of every American warrior" and will "make our fighting force more lethal than ever before." In a video, Hegseth says that "the future of American warfare is here, and it's spelled A-I."
The military is going to use artificial intelligence. But while planners in the government may have an idea of the best way forward, can they truly lead, or will industry set the direction? In a new Breaking Defense video on the future of military AI, Breaking Defense Editor-in-Chief Aaron Mehta and our in-house AI expert Sydney Freedberg are joined by Joshua Wallin of the Center for a New American Security to tackle that very question.
Lilt, an AI translation company, contracts with the US military to analyze foreign intelligence. Because the company's software handles sensitive information, it must be installed on government servers and work without an internet connection, a practice known as air-gapping. Lilt previously developed its own AI models or used open-source options such as Meta's Llama and Google's Gemma. But OpenAI's tools were off the table because they were closed source and could only be accessed online.
"I think the secularists in Silicon Valley are filling the God-shaped hole in their heart with AGI," Palantir Chief Technology Officer Shyam Sankar said in an interview with the New York Times's Ross Douthat. "It's like, OK, the models get better. Why do you think that this cliff is going to happen where they somehow turn us into house cats?"
From urban air taxis to hybrid combat drones, each concept seemed to inch closer to what once felt like science fiction. But every now and then a creation comes along that feels straight out of Gotham: a machine so darkly sophisticated it blurs the line between military technology and cinematic imagination. Shield AI's X-BAT is that machine: a stealthy, autonomous VTOL jet that looks and behaves like Batman's next aircraft, only it's very much real.
It's not just civilian corporate executives and white-collar workers who are leaning into the generative AI boom at work; military leaders are diving in too. The top US Army commander in South Korea said he is experimenting with generative AI chatbots to sharpen his decision-making, not in the field but in his command duties and daily work. He said "Chat and I" have become "really close lately."
The platform ingests data from multiple sensors, including air-, land-, sea-, and space-based imagery and signals, to detect battlefield threats such as drones, enemy positions, or other targets. FPS does all of that in a no-code, hardware-agnostic environment that lets the average soldier in the field "build, retrain, and deploy custom machine learning models at the edge without coding," according to the company. Most critically, FPS is designed to operate without a connection to the internet or cloud services.
"I can say that the demand for data is incredibly high, but at the moment, we are forming policy on how to organize this process correctly," said Mykhailo Fedorov, Ukraine's digital minister, in an interview with Reuters published on Wednesday. But Fedorov's comments indicate Ukraine won't freely give out this data, which he called "priceless." Kyiv is "very carefully" considering how to share its records and footage with its allies, the minister said.