A new lab and a new paper reignite an old AI debate: The departure of Ilya Sutskever from OpenAI and the launch of Safe Superintelligence (SSI) signal a focus on building superhuman AI.
OpenAI's former chief scientist just raised $1bn for a new firm aimed at developing responsible AI: Ilya Sutskever raises $1 billion to establish Safe Superintelligence, focusing on the development of safe AI systems following his exit from OpenAI.
Ilya Sutskever's SSI startup raises $1B: SSI raised over $1 billion in funding to pursue the development of safe superintelligence, underlining ongoing strong interest in AI despite speculation about an AI bubble bursting.
Exclusive: OpenAI co-founder Sutskever's new safety-focused AI startup SSI raises $1 billion: SSI has raised $1 billion to develop safe superintelligent AI, focusing on talent acquisition and computing power.
AI-powered martech news and releases: June 20 | MarTech: Safe Superintelligence aims to create AI that won't harm humanity, facing skepticism from experts like Chris Penn.
Ilya Sutskever, OpenAI's former chief scientist, launches new AI company | TechCrunch: Safe Superintelligence Inc. (SSI) is established to develop safe superintelligent AI systems, co-founded by former OpenAI chief scientist Ilya Sutskever.