'The biggest decision yet': Jared Kaplan on allowing AI to train itself
"Humanity will have to decide by 2030 whether to take the ultimate risk of letting artificial intelligence systems train themselves to become more powerful, one of the world's leading AI scientists has said. Jared Kaplan, the chief scientist and co-owner of the $180bn (135bn) US startup Anthropic, said a choice was looming about how much autonomy the systems should be given to evolve. The move could trigger a beneficial intelligence explosion or be the moment humans end up losing control."
"Kaplan said that while efforts to align the rapidly advancing technology to human interests had to date been successful, freeing it to recursively self-improve is in some ways the ultimate risk, because it's kind of like letting AI kind of go. The decision could come between 2027 and 2030, he said. If you imagine you create this process where you have an AI that is smarter than you, or about as smart as you, it's [then] making an AI that's much smarter."
A pivotal choice about allowing AI systems to autonomously train and recursively self-improve is likely to arrive between 2027 and 2030, with potential outcomes ranging from a beneficial intelligence explosion to a catastrophic loss of human control. Anthropic is one of several frontier AI firms, alongside OpenAI, Google DeepMind, xAI, Meta and Chinese rivals led by DeepSeek, competing to develop artificial general intelligence; its widely adopted assistant Claude exemplifies the accelerating pace of commercial deployment. Aligning rapidly advancing AI remains challenging, and unrestricted recursive self-improvement poses an unprecedented risk.
Read at www.theguardian.com