A Wartime Budget Without an Innovation Strategy
from The Cipher Brief
Collaboration between the NSF and defense sectors is essential for national security and innovation, despite proposed budget cuts to NSF funding.
Seth Moulton described Polymarket's acceptance of bets on the downed pilots' fate as "disgusting," emphasizing that their safety was still unknown while people were betting on their rescue.
CEO Chris Calio emphasized the urgency of delivering critical products for national security, stating, 'We understand that our products are critical to national security. And I can tell you across the organization, we absolutely feel the responsibility and urgency to deliver more and to deliver it faster.'
When civilian banks, logistics platforms, and payment processors share physical data center infrastructure with military AI systems, those facilities become legitimate military targets under international humanitarian law, and the civilian services housed inside lose their legal protection.
The conduct of War is, therefore, the formation and conduct of the fighting. If this fighting was a single act, there would be no necessity for any further subdivision, but the fight is composed of a greater or less number of single acts, complete in themselves, which we call combats, as we have shown in the first chapter of the first book, and which form new units.
From what I have seen in open-source intelligence, the Israelis had essentially developed a capability to tap existing public CCTV networks in Tehran and then layered on top of that a suite of data-integration software that enabled them to build targeting packages on senior leaders. My sense is that there was a US-sourced piece of HUMINT that was then fed into that model.
I have been working in Ukraine since 2019, first as an active Green Beret advising in an official capacity, then after leaving that service, directing special operations on the ground and more recently carrying hard-won lessons back to NATO before they are forgotten or overtaken by the next news cycle.
When the user asks "What enemy military unit is in the region?" the AIP Assistant guesses that it's "likely an armor attack battalion based on the pattern of the equipment." This prompts the analyst to request an MQ-9 Reaper drone to survey the scene. They then ask the AIP Assistant to "generate 3 courses of action to target this enemy equipment," and within moments, the assistant suggests attacking the unit with either an "air asset," a "long range artillery," or a "tactical team."
For years, Anthropic has distinguished itself from peers by embracing a safety-first stance. Its flagship model, Claude, was designed with guardrails that explicitly prohibit use in fully autonomous lethal weapons or domestic surveillance. Those restrictions have been central to the company's identity and its appeal to customers wary of unfettered AI.
So we've gone from identifying the target, to coming up with a course of action, to actioning that target, all from one system. This is revolutionary. Previously this was done across about eight or nine systems, where humans were literally moving detections left and right in order to reach our desired end state, in this case closing a kill chain.
On paper, many of the world's most famous weapons looked like reliable successes. In practice, desert sand, jungle humidity, and arctic cold often had other ideas. Systems that performed well in testing or early combat sometimes broke down once environmental stress became unavoidable. Here, 24/7 Wall St. is taking a closer look at how the environment, not enemy fire, can quietly expose limits that designers never fully anticipated.