AI networks are more vulnerable to malicious attacks than previously thought
Briefly

"What's more, we found that attackers can take advantage of these vulnerabilities to force the AI to interpret the data to be whatever they want," Wu says. "Using the stop sign example, you could make the AI system think the stop sign is a mailbox, or a speed limit sign, or a green light, and so on, simply by using slightly different stickers -- or whatever the vulnerability is.
Read at ScienceDaily