
"A statement published Wednesday by the Future of Life Institute (FLI), a nonprofit organization focused on existential AI risk, argues that the development of "superintelligence" -- an AI industry buzzword that usually refers to a hypothetical machine intelligence that can outperform humans on any cognitive task -- presents an existential risk and should therefore be halted until a safe pathway forward can be established."
"The unregulated competition among leading AI labs to build superintelligence could result in "human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction," the authors of the statement wrote. They go on to argue that a prohibition on the development of superintelligent machines could be enacted until there is (1) "broad scientific consensus that it will be done safely and controllably," as well as (2) "strong public buy-in.""
ChatGPT's surprise release nearly three years ago triggered a rapidly accelerating AI race. The Future of Life Institute, a nonprofit focused on existential AI risk, warns that development of superintelligence poses existential danger and should be halted until a safe pathway exists. Superintelligence is described as a hypothetical machine intelligence that can outperform humans on any cognitive task. Unregulated competition among AI labs could cause economic obsolescence, loss of freedoms, erosion of civil liberties, national security threats, and potential human extinction. The group calls for a moratorium until broad scientific consensus and strong public buy-in are achieved. The petition has garnered more than 1,300 signatures, including those of Geoffrey Hinton and Yoshua Bengio.
Read at ZDNET