AI is vulnerable to attack. Can it ever be used safely?
Briefly

Ian Goodfellow and his team showed how adding imperceptible noise to an image of a panda made a neural network misclassify it as a gibbon.
While fears of adversarial attacks on road signs proved unfounded, such attacks highlight the differences in how AI algorithms function compared with human cognition.
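The perturbation described above is usually attributed to the fast gradient sign method from Goodfellow's work, though the summary does not name it. Below is a minimal sketch, assuming a PyTorch classifier `model`, a batched input tensor `image`, and its true class `label`; the helper name `fgsm_perturb` and the `epsilon` value are illustrative, not from the source.

```python
import torch

def fgsm_perturb(model, image, label, epsilon=0.007):
    """Return an adversarially perturbed copy of `image` (hypothetical helper)."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel in the direction that most increases the loss,
    # scaled by epsilon so the change stays visually imperceptible.
    return (image + epsilon * image.grad.sign()).detach()
```

The perturbed image typically looks identical to a human viewer, yet the classifier's prediction can flip entirely, which is the panda-to-gibbon effect mentioned above.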
Read at Nature