Google DeepMind releases its plan to keep AGI from running wild
Briefly

The article discusses the potential development of artificial general intelligence (AGI) and the risks it poses to humanity. Researchers at DeepMind predict AGI could be realized by 2030. They outline four major categories of risk: misuse, misalignment, mistakes, and structural risks. The paper advocates rigorous testing and stronger safety measures, proposing 'unlearning' as a method to suppress harmful capabilities. It emphasizes the need for proactive strategies amid growing AI hype, noting that no real-world ethical framework comparable to Asimov's fictional guidelines yet exists.
The DeepMind team identified four types of AGI risk: misuse, misalignment, mistakes, and structural risks, emphasizing a structured approach to mitigating potential harms.
DeepMind's researchers project that artificial general intelligence could be achievable by 2030 and stress the need for comprehensive safety protocols to prevent AGI-related harms.
Read at Ars Technica