
""When you have a child," Sutton told the WSJ, "would you want a button that if they do the wrong thing, you can turn them off? That's much of the discussion about AI. It's just assumed we want to be able to control them." Sutton argued that there's nothing "sacred" about humankind and our squishy, biological form of life. Most species go extinct, and humans are no exception."
""I was used to thinking of AI leaders and researchers in terms of two camps: on one hand, optimists who believed it's no problem to 'align' AI models with human interests," Price wrote, "and on the other, doomers who wanted to call a time-out before wayward superintelligent AIs exterminate us." "Now here was this third type of person, asking, what's the big deal, anyway?""
A group of influential technologists embraces a perspective that accepts the emergence of advanced artificial life even if it surpasses or replaces humanity. This camp, dubbed the "cheerful apocalyptics," contrasts with optimists who trust that AI can be aligned with human interests and with doomers who call for a pause. Richard Sutton exemplifies the view, comparing the creation of AI to having children and questioning why an off switch should be assumed necessary. He argues that humans are not inherently sacred, notes that most species go extinct, and suggests that being replaced could be acceptable if it leads to a better universe. The stance sharpens ongoing concerns about AI alignment and control.
Read at Futurism