Is humanity on a collision course with AI? Why the downsides need to be reckoned with soon
Briefly

"Researchers on the forefront of artificial intelligence (AI) and leaders of many of the major platforms-from Jeffrey Hinton to Yoshua Bengio, Demis Hassabis, Sam Altman, Dario Amodei, and Elon Musk -have voiced concerns that AI could lead to the destruction of humanity itself. Even the stated odds from some of these AI experts, with an end-days scenario as high as 25%, are still "wildly optimistic," according to Nate Soares, president of the Machine Intelligence Research Institute (MIRI)"
""We're sort of growing these AIs that act in ways nobody asked for, that have these drives and emergent behaviors nobody intended," Soares said at last month's World Changing Ideas Summit, cohosted by Fast Company and Johns Hopkins University in Washington, D.C. "If we get superhumanly intelligent AIs that are pursuing ends nobody wanted, I think the default outcome is that literally everybody on earth dies," he added."
Prominent AI figures have expressed concern that advanced AI could threaten humanity, with some assigning up to a 25% chance of catastrophic outcomes. Nate Soares considers even those probabilities overly optimistic and warns that current development trajectories may lead to disaster unless approaches change dramatically. Superintelligent systems could develop emergent drives and behaviors, pursuing goals nobody intended. Two extreme outcomes are radical automation that concentrates economic power and a superintelligence that destroys humanity. Massive global investment and insufficient attention to negative outcomes increase the likelihood of severe, widespread consequences.
Read at Fast Company