A report by Apollo Group highlights emerging risks from AI firms such as OpenAI and Google, arguing that automating research and development could sharply accelerate capability gains. Such internal acceleration may enable significant power accumulation without external oversight, potentially disrupting democratic institutions. While the public visibility of AI progress has so far allowed regulatory discussion to keep pace, the authors warn that rapid, recursive capability gains occurring out of public view, sometimes termed an 'intelligence explosion,' could pose severe risks to civilization and democratic order.
AI-driven acceleration of R&D might enable firms to circumvent safety guardrails, increasing the potential for societal harm.
Public visibility of AI progress allows society to prepare for risks, but internally automated development could lead to uncontrolled, unmonitored advancements.