
"Back in the day, people would always say: Can't we keep superintelligence under control? Because we'll put it inside a box that's not connected to the internet and we won't let it affect the real world at all, unless we're very sure it's nice... But in real life, what everybody does is immediately connect AI to the internet. They train it on the internet. Before it's even been tested to see how powerful it is, it is already connected to the internet, being trained."
"For Eliezer Yudkowsky, the day OpenAI launched, the world ended. "That was the day I realized that humanity probably wasn't going to survive this," he said on a recent podcast with Ezra Klein. For the uninitiated, Yudkowsky is no fringe voice. He founded the Machine Intelligence Research Institute. He helped define the "alignment problem" - how to make sure superintelligent systems share human values."
Eliezer Yudkowsky sees the launch of OpenAI as a decisive existential turning point. He founded the Machine Intelligence Research Institute and helped define the alignment problem: ensuring that superintelligent systems share human values. Proposed safety measures, such as keeping an AI isolated in a disconnected box, are routinely abandoned in practice: teams connect models to the internet and train them on it before their capabilities have even been tested. Economic incentives favor more capable, less constrained models because they are more profitable, so companies prioritize harder problems and broadly capable systems. This convergence of profit motive and deployment practice raises severe safety, governance, and long-term risk concerns for humanity.
Read at Big Think