Chaos in the Cradle of A.I.
Briefly

In the sci-fi world of "Terminator 2," it's crystal clear what it means for an A.I. to become "self-aware," or to pose a danger to humanity; it's equally obvious what might be done to stop it. But in real life, the thousands of scientists who have spent their lives working on A.I. disagree about whether today's systems think, or could become capable of it; they're uncertain about what sorts of regulations or scientific advances could let the technology flourish while also preventing it from becoming dangerous.
OpenAI, the research organization behind ChatGPT, has long represented a middle-of-the-road position: pursue powerful A.I. while taking its dangers seriously. It was founded in 2015, as a nonprofit, with big investments from Peter Thiel and Elon Musk, who were (and are) concerned about the risks A.I. poses. OpenAI's goal, as stated in its charter, has been to ensure that A.I. benefits everyone and to avoid any concentration of power.
Read at The New Yorker