Former OpenAI researcher warns 'AI is not loyal to us'
Briefly

Daniel Kokotajlo is the founder of the AI Futures Project and a former OpenAI researcher who worked on forecasting, AI governance, and safety. In this full interview with Business Insider, he explains what AGI and superintelligence mean, why AI agents could be the turning point, and what could happen if the AI race continues without strong safeguards. He also lays out what he thinks governments and companies can do now to reduce the risk of losing control.
Kokotajlo defines AGI and superintelligence and connects them to his work on forecasting, AI governance, and safety. He argues that AI agents could be a pivotal capability, changing both how systems operate and how quickly their capabilities scale. If the AI race continues without strong safeguards, he warns, it could end with humans losing control. He outlines governance and safety measures that governments and companies can implement now to keep development and deployment on a safer trajectory.
Read at Business Insider