Detecting Adversarial Attacks with Subset Scanning
Briefly

Deep neural networks are susceptible to adversarial perturbations of their input data that can cause a sample to be incorrectly classified. These perturbations consist of small variations in pixel space that are imperceptible to a human but can change the output of a classifier. Because neural networks are vulnerable to such adversarial examples, reliably detecting attacks in a given set of inputs is of high practical relevance.
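As a minimal illustration of such a perturbation (not the subset-scanning detector itself), the sketch below crafts an adversarial example with the Fast Gradient Sign Method in PyTorch; the `model`, `x`, `y`, and `epsilon` names are assumptions for the example.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Illustrative sketch: perturb input x so the classifier's loss on label y increases."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step each pixel by epsilon in the direction that increases the loss,
    # then clamp back to the valid image range [0, 1].
    x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1)
    return x_adv.detach()
```

With a small epsilon the perturbed image typically looks unchanged to a human observer, yet the classifier's prediction can flip, which is the setting the detection problem addresses.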
Read at Medium