Our measure, 'observed exposure,' compares the tasks LLMs are theoretically capable of performing with the tasks people actually use Claude for at work. We find that actual usage falls well short of theoretical capability.
Nvidia CEO Jensen Huang forecast that capital expenditure (CapEx) on datacentres would increase from the $300-400bn mark today to $3-4tn by 2030, a roughly tenfold rise in datacentre spending over the period.
Meta was recently granted a patent in December 2025 that would essentially allow the social media platform to post on a dormant user's behalf, whether they have taken a break from social media or have long since passed away. The patent, first filed in 2023, describes a large language model that "simulates" a user's social media activity, using a user's comments, likes, or content to respond to other users, and also references technology that would simulate video or audio calls with users.
After years of computer saying no, and giving us all migraines and premature grey hair, I'm starting to worry that computer, or rather AI large language models like ChatGPT and Gemini, are taking too much of a fancy to playing nice and saying yes. I confess to using both of these programs, but I've noticed that, well, it's as if they're trying to please, with statements like "You're absolutely right, Jeff" and "That's pretty much right."
When you walk into a doctor's office, you assume something so basic that it barely needs articulation: your doctor has touched a body before. They have studied anatomy, seen organs and learned the difference between pain that radiates and pain that pulses. They have developed this knowledge, you assume, not only through reading but through years of hands-on experience and training. Now imagine discovering that this doctor has never encountered a body at all.
Yaghi describes AI not as a silver bullet, but as an advanced form of statistical pattern recognition: tools that can identify trends in data that may be difficult or time-consuming for people to uncover on their own. The real opportunity, he says, depends heavily on what farms are already doing. Operations that are consistently collecting and digitizing high-quality data are better positioned to benefit, whether the goal is lowering per-cow costs in a dairy, improving financial analysis, or identifying operational efficiencies.
For the first time, speech has been decoupled from consequence. We now live alongside AI systems that converse knowledgeably and persuasively, deploying claims about the world, explanations, advice, encouragement, apologies, and promises, while bearing no vulnerability for what they say. Millions of people already rely on chatbots powered by large language models, and have integrated these synthetic interlocutors into their personal and professional lives. An LLM's words shape our beliefs, decisions, and actions, yet no speaker stands behind them.
Fifty-four seconds. That's how long it took Raphael Wimmer to write up an experiment that he did not actually perform, using a new artificial-intelligence tool called Prism, released by OpenAI last month. "Writing a paper has never been easier. Clogging the scientific publishing pipeline has never been easier," wrote Wimmer, a researcher in human-computer interaction at the University of Regensburg in Germany, on Bluesky. Large language models (LLMs) can suggest hypotheses, write code and draft papers, and AI agents are automating parts of the research process.
Scientists are increasingly turning to artificial-intelligence systems for help drafting the grant proposals that fund their careers, but preliminary data indicate that these tools might be pulling the focus of research towards safe, less-innovative ideas. These data provide evidence that AI-assisted proposals submitted to the US National Institutes of Health (NIH) are consistently more similar to previous research than proposals written without the use of AI, and are also slightly more likely to be funded.
Drawing on more than 22,000 LLM prompts designed to reflect the kind of questions people would ask artificial intelligence (AI)-powered chatbots, such as "How do I apply for universal credit?", the research raises concerns about whether chatbots can be trusted to give accurate information about government services. The publication of the research follows the UK government's announcement of partnerships with Meta and Anthropic at the end of January 2026 to develop AI-powered assistants for navigating public services.
Vibe coding is a relatively new programming paradigm that emerged with the rise of AI-powered development tools. The term was coined by Andrej Karpathy, a prominent AI researcher and former Director of AI at Tesla, to describe an intuitive way of coding where developers interact with AI models using natural language commands rather than traditional coding syntax. Instead of meticulously writing every line of code, developers simply "vibe" with the AI, describing what they want, and letting the AI generate the necessary code.
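To make the paradigm concrete, here is a purely illustrative sketch (not drawn from any particular tool): a natural-language request a developer might give, followed by the kind of Python an assistant could plausibly generate. The prompt wording, function name, and CSV columns are all hypothetical.

```python
# Hypothetical vibe-coding prompt (illustrative only):
# "Give me a function that reads a CSV of sales and returns total revenue per region."

import csv
from collections import defaultdict

def revenue_by_region(path: str) -> dict[str, float]:
    """Sum the 'revenue' column per 'region', the kind of code an
    assistant might generate from the natural-language request above."""
    totals: defaultdict[str, float] = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["region"]] += float(row["revenue"])
    return dict(totals)
```

The point of the paradigm is that the developer reviews and iterates on output like this conversationally, rather than typing it line by line.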
ElevenLabs co-founder and CEO Mati Staniszewski says voice is becoming the next major interface for AI: the way people will increasingly interact with machines as models move beyond text and screens. Speaking at Web Summit in Doha, Staniszewski told TechCrunch that voice models like those developed by ElevenLabs have recently moved beyond simply mimicking human speech, including emotion and intonation, to working in tandem with the reasoning capabilities of large language models.
In essence, Lotus is building an AI doctor that functions like a real medical practice, equipped with a license to operate in all 50 states, malpractice insurance, HIPAA-compliant systems, and full access to patient records. The key difference is that the majority of the work is done by AI, which is trained to ask the same questions a doctor would.
This process, becoming aware of something not working and then changing what you're doing, is the essence of metacognition, or thinking about thinking. It's your brain monitoring its own thinking, recognizing a problem, and controlling or adjusting your approach. In fact, metacognition is fundamental to human intelligence and, until recently, has been understudied in artificial intelligence systems. My colleagues Charles Courchaine, Hefei Qiu, Joshua Iacoboni, and I are working to change that.
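As a minimal sketch of that monitor-and-adjust loop, consider the toy Python below; the task, the confidence scores, and the strategy names are hypothetical stand-ins for illustration, not the researchers' actual system.

```python
# Toy metacognitive loop: attempt a task, monitor how well the attempt
# went, and switch strategy when the current approach isn't working.

def solve(task: str, strategy: str) -> tuple[str, float]:
    """Pretend object-level solver: returns an answer plus a
    self-assessed confidence (hypothetical fixed values)."""
    confidence = {"fast-guess": 0.3, "step-by-step": 0.9}[strategy]
    return f"{strategy} answer for {task!r}", confidence

def metacognitive_solve(task: str, threshold: float = 0.8) -> str:
    strategies = ["fast-guess", "step-by-step"]    # cheapest approach first
    for strategy in strategies:                    # control: choose an approach
        answer, confidence = solve(task, strategy) # object-level attempt
        if confidence >= threshold:                # monitor: is it working?
            return answer                          # good enough, stop adjusting
    return answer                                  # best effort after all strategies

print(metacognitive_solve("17 * 24"))
```

The monitoring step (checking confidence) and the control step (moving on to another strategy) are the two halves of metacognition the paragraph describes.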
United States Immigration and Customs Enforcement is leveraging Palantir's generative artificial intelligence tools to sort and summarize immigration enforcement tips from its public submission form, according to an inventory released Wednesday of all use cases the Department of Homeland Security had for AI in 2025. The "AI Enhanced ICE Tip Processing" service is intended to help ICE investigators "to more quickly identify and action tips" for urgent cases, as well as translate submissions not made in English, according to the inventory.
"The future will be, for sure, that you are not typing any data information into an SAP system. You can instead ask certain analytical questions with your voice. You can trigger operational task workflows. You can also make entries in the system with your voice-performance feedback, pipeline entries, etc. The technological capabilities are there, it really is now about the execution."
It's no different with machine learning and large language models. If anything, the open source ecosystem has grown richer and more complex, because now there are open source models to complement the open source code. For this article, we've pulled together some of the most intriguing and useful projects for AI and machine learning. Many of these are foundation projects, nurturing their own niche ecology of open source plugins and extensions.
Anti-intelligence is not stupidity or some sort of cognitive failure. It's the performance of knowing without understanding. It's language severed from memory, context, and even intention. It's what large language models (LLMs) do so well. They produce coherent outputs through pattern-matching rather than comprehension. Where human cognition builds meaning through the struggle of thought, anti-intelligence arrives fully formed.
Generalist models "fail miserably" at the benchmarks used to measure how AI performs scientific tasks, Alex Zhavoronkov, Insilico's founder and CEO, told Fortune. "You test it five times at the same task, and you can see that it's so far from state of the art ... It's basically worse than random. It's complete garbage." Far better are specialist AI models that are trained directly on chemistry or biology data.
To work around those rules, the Humanizer skill tells Claude to replace inflated language with plain facts and offers this example transformation:

Before: "The Statistical Institute of Catalonia was officially established in 1989, marking a pivotal moment in the evolution of regional statistics in Spain."

After: "The Statistical Institute of Catalonia was established in 1989 to collect and publish regional statistics."

Claude will read that and do its best, as a pattern-matching machine, to create an output that matches the context of the conversation or task at hand.