Loss of human control over, or malicious use of, these AI systems could lead to catastrophic outcomes for all of humanity. Given the rapid pace of AI development, such catastrophic outcomes could arrive any day.
Governments, the scientists argue, need to collaborate on AI safety precautions. Their proposals include encouraging countries to establish dedicated AI authorities that respond to AI incidents and risks within their borders.
Such a body would ensure that states adopt and implement a minimal set of effective safety-preparedness measures, including model registration, disclosure, and tripwires.
Developers would vow not to create AI "that can autonomously replicate, improve, seek power or deceive their creators, or those that enable building weapons of mass destruction and conducting cyberattacks."