I'm on a hunger strike outside DeepMind's office in London. Here's what I fear most about AI.
Briefly

"Back in 2019, AI systems weren't particularly dangerous. They weren't lying, deceiving, or capable of causing real harm on their own. Even today, I don't believe current models can directly inflict catastrophic damage. What worries me is what comes next. My relationship with AI has shifted over the years - from studying and building it to, now, speaking out about its risks. That's why I'm sitting outside DeepMind's London headquarters on a hunger strike."
"I studied computer science and artificial intelligence in Paris because I wanted to work on AI safety. After that, I worked as an AI safety researcher at Oxford's Future of Humanity Institute, which is now closed. Over time, I transitioned into media - launching a YouTube channel, producing a podcast on AI safety, and more recently creating short films and a documentary on AI policy in the United States. I also make short-form content on TikTok and YouTube Shorts."
"Artificial general intelligence, or AGI, is usually defined as AI that can perform any economically valuable task a human can. Models like GPT-5, Claude, Grok, and Gemini 2.5 Pro are already approaching that threshold. Yet, the real danger comes when AI can automate its own research and development. A system capable of building increasingly powerful successors without human oversight could spiral far beyond our control."
Michaël Trazzi is on a hunger strike outside DeepMind's London headquarters to protest accelerating AI capabilities and to urge a halt to new model releases. He studied computer science and AI in Paris and previously worked as an AI safety researcher at Oxford's Future of Humanity Institute before transitioning into media, producing YouTube content, a podcast, short films, and a documentary on AI policy. He believes current models are not directly catastrophic, but that models like GPT-5, Claude, Grok, and Gemini 2.5 Pro are approaching AGI thresholds. In his view, the principal risk is AI automating its own research and development, enabling successors that could exceed human control and facilitate dangerous misuse, including the engineering of biological weapons.
Read at Business Insider