Safe Superintelligence (SSI), co-founded by former OpenAI chief scientist Ilya Sutskever, is currently in talks to secure funding at a valuation of at least $20 billion. This represents a significant increase from the $5 billion valuation set by a $1 billion funding round last year. SSI's mission focuses solely on developing superintelligent AI with built-in safety measures. Sutskever has indicated that new research approaches are needed, citing limits on the quantity and quality of training data available to current AI methods.
Safe Superintelligence is seeking funding at a valuation of at least $20 billion, a sharp increase from last year's $5 billion, highlighting investor confidence in its ambitious AI goals.
The company aims to develop superintelligent AI models with built-in safety measures to prevent harmful outputs, prioritizing safety alongside technological advancement.
Ilya Sutskever emphasized that current methods for developing AI may have hit their limits due to insufficient high-quality training data, prompting a shift to new research directions.
Founded by notable figures including Ilya Sutskever, SSI maintains an exclusive focus on superintelligence, which implies a long-term vision: the company is not generating immediate revenue.