The incessant AI predictions are frightening and incite panic like an ongoing tornado siren from the edge of town. The idea that humans willingly replaced themselves with their technology might give future generations pause. Or maybe not---if those future generations are AI.
Consistent with the general trend of incorporating artificial intelligence into nearly every field, researchers and politicians are increasingly using AI models trained on scientific data to infer answers to scientific questions. But can AI ultimately replace scientists? On Nov. 24, 2025, President Donald Trump signed an executive order announcing the Genesis Mission, an initiative to build and train a series of AI agents on federal scientific datasets "to test new hypotheses, automate research workflows, and accelerate scientific breakthroughs."
I am a worrier, and have been for most of my life. At some point, someone dear and smart teased me that I worry about the wrong things. The things that hit me, she noted, were never the things I worried about. For a while that left me feeling like an incompetent worrier, until my research caught up. I realized that the things I worry about often don't end up hurting me precisely because worrying helps me defuse them ahead of time.
Autonomous agents take the first part of their names very seriously and don't necessarily do what their humans tell them to do, or not to do. But the situation is more complicated than that. Generative AI (genAI) and agentic systems operate quite differently from other systems, including older AI systems, and from humans. That means that how tech users and decision-makers phrase instructions, and where those instructions are placed, can make a major difference in outcomes.
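To see what placement means in practice, here is a minimal sketch using the OpenAI Python client; the model name and the guardrail wording are illustrative assumptions, not details from the article. The same constraint can be buried at the end of a user's request or set in the system message, and chat models typically treat the system slot as the more binding of the two.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The same instruction, placed deliberately: putting the guardrail in the
# system message, rather than appending it to the user's request, typically
# makes the model honor it more reliably.
response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; substitute whatever you use
    messages=[
        {"role": "system",
         "content": "You are a file-management agent. Never delete files; "
                    "ask for confirmation before any destructive action."},
        {"role": "user", "content": "Tidy up my project directory."},
    ],
)
print(response.choices[0].message.content)
```

The point is not the specific API but the hierarchy: an instruction in the system slot frames everything that follows, while the same words inside a user message compete with the rest of the request.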
Ever since our ancestors first stood upright and squinted at the horizon, we've been wired to notice patterns. A rustle in the grass might have meant a stalking predator. Dark clouds often meant rain. Those who made these connections and guessed that one thing caused another tended to survive. Over time, this ability to link events became one of our most significant evolutionary advantages. It's how we built tools, tamed fire, and eventually invented Wi-Fi.
For the past three years, the conversation around artificial intelligence has been dominated by a single, anxious question: What will be left for us to do? As large language models began writing code, drafting legal briefs, and composing poetry, the prevailing assumption was that human cognitive labor was being commoditized. We braced for a world where thinking was outsourced to the cloud, rendering our hard-won mental skills (writing, logic, and structural reasoning) relics of a pre-automated past.
Each of these achievements would have been a remarkable breakthrough on its own. Solving them all with a single technique is like discovering a master key that unlocks every door at once. Why now? Three pieces converged: algorithms, computing power, and massive amounts of data. We can even put faces to them, because behind each element is a person who took a gamble.
This process, becoming aware of something not working and then changing what you're doing, is the essence of metacognition, or thinking about thinking. It's your brain monitoring its own thinking, recognizing a problem, and controlling or adjusting your approach. In fact, metacognition is fundamental to human intelligence and, until recently, has been understudied in artificial intelligence systems. My colleagues Charles Courchaine, Hefei Qiu, Joshua Iacoboni, and I are working to change that.
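To make that loop concrete, here is a minimal, self-contained sketch of a monitor-and-control cycle; the toy root-finding problem, the two strategies, and the tolerance are illustrative choices, not the authors' system. The solver tries one approach, monitors whether it is working, and switches strategies when it is not.

```python
def bisection(f, lo=0.0, hi=10.0, steps=20):
    """Strategy 1: bisection; only works if f changes sign on [lo, hi]."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def grid_search(f, lo=0.0, hi=10.0, steps=1000):
    """Strategy 2: brute-force scan for the x that minimizes |f(x)|."""
    points = (lo + i * (hi - lo) / steps for i in range(steps + 1))
    return min(points, key=lambda x: abs(f(x)))

def metacognitive_solve(f, strategies, tolerance=1e-3):
    """Think, monitor the result, and switch approaches when stuck."""
    answer = None
    for strategy in strategies:
        answer = strategy(f)                # object-level thinking
        if abs(f(answer)) < tolerance:      # monitoring: is this working?
            return answer, strategy.__name__
        # control: recognize the failure and try a different approach
    return answer, "none succeeded"

# f never changes sign on [0, 10], so bisection quietly fails;
# the monitor notices and falls back to grid search.
f = lambda x: (x - 2) ** 2 - 1e-4   # roots at x = 1.99 and 2.01
root, used = metacognitive_solve(f, [bisection, grid_search])
print(f"{used} found x = {root:.3f}")
```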
Last year, a talented programmer friend of mine decided to give vibe coding a try. Vibe coding is the practice of describing to an AI chatbot what kind of program you want and letting the AI write it for you. In a matter of minutes you can have new software in front of you and just start using it. At least, in theory. This is what LLMs (large language models) are supposed to be best at: generating usable software for professional developers.
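For readers who haven't tried it, the whole workflow fits in a few lines. This is a rough sketch, again using the OpenAI Python client; the model name and the example prompt are placeholders, and any chat-capable model would do.

```python
from openai import OpenAI

client = OpenAI()

def vibe_code(description: str) -> str:
    """Ask the model for a program matching a plain-English description."""
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        messages=[{
            "role": "user",
            "content": "Write a complete, runnable Python program. "
                       "Return only code, no explanation.\n\n" + description,
        }],
    )
    return reply.choices[0].message.content

program = vibe_code("a script that renames all .JPG files in the current "
                    "directory to lowercase .jpg")
print(program)  # minutes later, software is in front of you; whether it
                # works as intended is another question
```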