Geoffrey Hinton, Richard Branson, and Prince Harry join call for AI labs to halt their pursuit of superintelligence | Fortune
Briefly

"A new open letter, signed by a range of AI scientists, celebrities, policymakers, and faith leaders, calls for a ban on the development of 'superintelligence'-a hypothetical AI technology that could exceed the intelligence of all of humanity-until the technology is reliably safe and controllable. The letter's more notable signatories include AI pioneer and Nobel laureate Geoffrey Hinton, other AI luminaries such as Yoshua Bengio and Stuart Russell, as well as business leaders such as Virgin founder Richard Branson and Apple co-founder Steve Wozniak."
"New polling conducted alongside the open letter, which was written and circulated by the non-profit Future of Life Institute, found that the public generally agreed with the call for a moratorium on the development of superpowerful AI technology. In the U.S., the polling found that only 5% of adults support the current status quo of unregulated development of advanced AI, while 64% agreed superintelligence shouldn't be developed until it's provably safe and controllable. The poll found that 73% want robust regulation on advanced AI."
""95% of Americans don't want a race to superintelligence, and experts want to ban it," Future of Life President Max Tegmark said in the statement. Superintelligence is broadly defined as a type of artificial intelligence capable of outperforming the entirety of humanity at most cognitive tasks. There is currently no consensus on when or if superintelligence will be achieved, and timelines suggested by experts are speculative."
More than 1,000 signatories — including AI pioneers Geoffrey Hinton, Yoshua Bengio, and Stuart Russell; business leaders Richard Branson and Steve Wozniak; celebrities Joseph Gordon-Levitt, will.i.am, Prince Harry, and Meghan; and public figures such as Steve Bannon and Mike Mullen — call for a moratorium on the development of superintelligence until the technology is provably safe and controllable. Polling found that 64% of U.S. adults agree superintelligence shouldn't be developed until it's provably safe, 73% want robust regulation, and only 5% support unregulated development. Superintelligence is broadly defined as AI capable of outperforming humanity at most cognitive tasks; timelines for achieving it remain speculative.
Read at Fortune