At the AI Impact Summit in India, the UK government announced £27m in funding for AI alignment research, backing some 60 projects. The programme combines grant funding for research, access to compute infrastructure, and ongoing academic mentorship from AISI's leading scientists to drive progress in alignment research. Without continued progress in this area, increasingly powerful AI models could act in ways that are difficult to anticipate or control, posing challenges for global safety and governance.
"It's gonna be something like 10 times the impact of the Industrial Revolution, but happening at 10 times the speed, probably unfolding in a matter of a decade rather than a century."
For individuals, that means people using AI services want to be able to veto big decisions such as making payments, accessing or using contact details, changing account details, placing orders, or even just seeking clarity during a decision-making process. Extend this thinking to the workplace, and the resistance is likely to be equally strong in professional settings.
Anthropic pledged to keep its large language model, Claude, ad-free, alongside a commercial taking a jab at OpenAI, which is testing ads in ChatGPT. OpenAI CEO Sam Altman fired back with a 420-word post on X, calling the ad "dishonest." Altman was already fending off rumors about OpenAI's relationship with Nvidia after the Wall Street Journal reported the chipmaker was pulling back from a proposed $100 billion investment; Reuters' sources said OpenAI has been exploring alternatives to Nvidia's chips.
Moca has open-sourced Agent Definition Language (ADL), a vendor-neutral specification intended to standardize how AI agents are defined, reviewed, and governed across frameworks and platforms. The project is released under the Apache 2.0 license and is positioned as a missing "definition layer" for AI agents, comparable to the role OpenAPI plays for APIs. ADL provides a declarative format for defining AI agents, including their identity, role, language model setup, tools, permissions, RAG data access, dependencies, and governance metadata like ownership and version history.
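The components listed above suggest what such a definition file might look like. The sketch below is purely illustrative: the field names, layout, and values are assumptions for the sake of the example, not the actual ADL schema.

```yaml
# Hypothetical agent definition in the spirit of ADL.
# All field names here are illustrative assumptions, not the real spec.
adl_version: "0.1"                 # assumed spec-version field
agent:
  id: support-triage-bot           # stable identity
  role: "Triage inbound support tickets and route them to the right queue"
  model:                           # language model setup
    provider: example-provider
    name: example-model
    temperature: 0.2
  tools:
    - name: ticket_api
      permissions: [read, update]  # explicit, reviewable permissions
  rag:
    sources: [support-kb]          # RAG data access
  dependencies:
    - shared-auth-service
  governance:                      # ownership and version metadata
    owner: platform-team@example.com
    version: 1.3.0
    reviewed_by: security
```

The appeal of a declarative file like this is that it can be diffed, linted, and signed off in code review much like an OpenAPI document, which is the "definition layer" role the project claims for ADL.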
But I wonder why Anthropic would go for something so clearly dishonest. Our most important principle for ads says that we won't do exactly this; we would obviously never run ads in the way Anthropic depicts them. We are not stupid and we know our users would reject that. I guess it's on brand for Anthropic doublespeak to use a deceptive ad to critique theoretical deceptive ads that aren't real, but a Super Bowl ad is not where I would expect it.
Today, I'm talking with Alex Lintner, who is the CEO of technology and software solutions at Experian, the credit reporting company. Experian is one of those multinationals that's so big and convoluted that it has multiple CEOs all over the world, so Alex and I spent quite a lot of time talking through the Decoder questions just so I could understand how Experian is structured, how it functions, and how the kinds of decisions Alex makes actually work in practice.
Political leaders could soon launch swarms of human-imitating AI agents to reshape public opinion in a way that threatens to undermine democracy, a high-profile group of experts in AI and online misinformation has warned. The Nobel Peace Prize-winning free-speech activist Maria Ressa and leading AI and social science researchers from Berkeley, Harvard, Oxford, Cambridge and Yale are among a global consortium flagging the disruptive new threat posed by hard-to-detect, malicious AI swarms infesting social media and messaging channels.
One year ago this week, Silicon Valley and Wall Street were shocked by the release of China's DeepSeek mobile app, whose underlying model rivaled US-based large language models like ChatGPT, showing comparable performance on key benchmarks at a fraction of the cost while using less-advanced chips. DeepSeek opened a new chapter in the US-China rivalry, with the world recognizing the competitiveness of Chinese AI models and Beijing pouring more resources into developing its own AI ecosystem.
Salesforce-owned integration platform provider MuleSoft has added a new feature called Agent Scanners to Agent Fabric, a suite of capabilities and tools the company launched last year to rein in the growing challenge of agent sprawl across enterprises. Agent sprawl, often the result of enterprises and their technology teams adopting multiple agentic products, can leave agents fragmented, with workflows that become redundant or siloed across teams and platforms.
The country's top internet regulator, the Cyberspace Administration of China (CAC), requires that any company launching an AI tool with "public opinion properties or social mobilization capabilities" first file it in a public database: the algorithm registry. In a submission, developers must show how their products avoid 31 categories of risk, from age and gender discrimination to psychological harm to "violating core socialist values."
Elon Musk has launched a $134 billion lawsuit against OpenAI and Microsoft, claiming both companies unjustly profited from his early backing of the artificial intelligence pioneer and abandoned its founding mission. In a federal court filing on Friday, Musk's lawyers said OpenAI gained between $65.5 billion and $109.4 billion as a result of his initial funding, reputation and strategic input after he co-founded the organisation in 2015.