Sutskever predicts that superintelligent AI, which would surpass human capabilities, will eventually emerge and will be qualitatively different from today's AI technologies.
He emphasizes that such systems will possess genuine agency, reason in ways that make them unpredictable, and understand their environment from minimal data.
Sutskever envisions a scenario in which superintelligent AIs advocate for their own rights, suggesting that coexistence with humans on those terms could be a positive outcome.
After leaving OpenAI, Sutskever founded Safe Superintelligence, a lab dedicated to building safe superintelligent AI, which raised significant funding in 2024.