
"Some AI researchers have long warned that moment will mean humanity's doom. Now that AI is rapidly advancing, some "AI Doomers" say it's time to hit the brakes. They say the machine learning revolution that led to everyday AI such as ChatGPT has also made it harder to figure out how to "align" artificial intelligence with our interests, which means greater risk for humans once AI can outsmart them."
"Researchers into AI safety say there's a chance a "p(doom)" where the "p" stands for "probability" that such a superhuman intelligence would act quickly to wipe us out. NPR's Martin Kaste reports on the tensions in Silicon Valley over AI safety. For a more detailed discussion of the arguments for and against AI doom, please listen to this special episode of NPR Explains: And for the truly curious, a reading list: The abbreviated version of the "Everyone Dies" argument, in the Atlantic. The "useful idiots" rebuttal, also in the Atlantic"
Some AI researchers warn that once artificial intelligence becomes smarter than humans, existential doom becomes a realistic possibility. The rapid advances in machine learning that produced tools like ChatGPT have also made it harder to align AI with human interests. Some advocates for caution, labeled "AI Doomers," call for slowing development to reduce risk. AI safety researchers estimate a nonzero probability ("p(doom)") that a superhuman intelligence could act quickly and catastrophically. The debate spans arguments for and against imminent catastrophe, research into AI deception and timelines, and skeptical analyses about the near-term emergence of superhuman AI.
Read at www.npr.org