Max Tegmark, the MIT physicist and AI safety advocate, argues that AI companies should replicate the safety calculations carried out before the Trinity nuclear test before developing advanced systems. His own estimate puts the probability that a highly advanced AI poses an existential threat at around 90%. Tegmark calls for rigorous assessment of what he terms the Compton constant, the probability of losing control of an Artificial Super Intelligence, so that agreement on the figure across companies can establish consensus for global safety protocols. The call echoes the scientific rigor of historical risk assessment and urges AI developers to take serious responsibility for potentially catastrophic outcomes.
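The summary does not reproduce the calculation itself; as a minimal sketch, assuming the Compton constant C is simply the conditional probability of a loss-of-control event given deployment of an Artificial Super Intelligence (ASI), it could be written in LaTeX as:

% Hedged sketch: the exact definition in Tegmark's paper is not given in
% this summary; C is assumed here to be a conditional probability, with
% the 90% figure being the estimate attributed to Tegmark above.
C = P(\text{loss of control} \mid \text{ASI deployed}), \qquad C \approx 0.9

By analogy, Arthur Compton's pre-Trinity calculation reportedly bounded the odds of a runaway atmospheric reaction at roughly one in three million; a consensus estimate of C across AI companies would play the same role as a shared quantitative basis for safety protocols.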
"AI companies should replicate the safety calculations from the Trinity nuclear test to evaluate the risks of advanced AI systems, particularly the Compton constant for control loss."
"Max Tegmark emphasizes that companies must rigorously calculate the probabilities of losing control over Artificial Super Intelligence to ensure comprehensive safety before deployment."