Exclusive: Some AI dangers are already real, DeepMind's Hassabis says
Briefly

"The big picture: Hassabis in May had predicted AI that meets or exceeds human capabilities - artificial general intelligence, or AGI - could come by 2030. What they're saying: In an interview with Axios' Mike Allen, Hassabis assessed the risk from a number of "catastrophic outcomes" of AI misuse as the technology develops, particularly "energy or water cyberterror." "That's probably almost already happening now, I would say, maybe not with very sophisticated AI yet, but I think that's the most obvious vulnerable vector," he said."
""That's probably almost already happening now, I would say, maybe not with very sophisticated AI yet, but I think that's the most obvious vulnerable vector," he said. That's one reason, Hassabis added, that Google is so heavily focused on cybersecurity, to defend against such threats. The intrigue: AI experts talk often of a concept known as "p(doom)," or the probability of catastrophe happening due to AI. Hassabis said his own assessment of p(doom) was "non-zero." "It's worth very seriously considering and mitigating against," he said."
AGI could arrive by 2030. AI misuse carries the risk of catastrophic outcomes, with cyberterror against energy or water infrastructure the most obviously vulnerable vector. Such attacks may already be occurring, even without highly sophisticated AI, which is one reason Google invests so heavily in cybersecurity. Experts quantify catastrophic risk with "p(doom)," the probability that AI causes a catastrophe; Hassabis puts his own p(doom) at non-zero and says the risk warrants serious consideration and active mitigation.
Read at Axios