Neural networks are vulnerable to adversarial perturbations added to their inputs. Highly sparse adversarial perturbations are difficult to detect, which makes them especially dangerous to network security. Previous research has shown that the ℓ0-norm induces good sparsity but is challenging to optimize directly. We use the ℓq-norm to approximate the ℓ0-norm and propose a new white-box algorithm that generates adversarial examples by minimizing the ℓq distance from the original image. We also extend the adversarial attack to the facial anti-spoofing task in face recognition security, which enables us to generate sparse and imperceptible facial attack perturbations. To increase the diversity of the data, we construct a new dataset of real and fake facial images containing images produced by various recent spoofing methods. Extensive experiments show that the proposed method effectively generates sparse perturbations and successfully misleads classifiers in both multi-classification and facial anti-spoofing tasks.
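To make the idea concrete, the sketch below shows one plausible way to generate a sparse white-box adversarial example by penalizing the ℓq-norm (q < 1) of the perturbation as a smooth surrogate for the ℓ0-norm. This is a minimal illustration, not the paper's exact algorithm: the model interface, the specific penalty form, the hyperparameters (q, lam, lr, steps), and the box projection to [0, 1] are all assumptions.

```python
# Hypothetical sketch of an lq-penalized white-box attack (assumed details, not the paper's method).
import torch
import torch.nn.functional as F

def lq_sparse_attack(model, x, y_true, q=0.5, lam=1e-2, steps=200, lr=1e-2, eps=1e-8):
    """Untargeted attack: mislead `model` on image batch `x` while keeping the perturbation sparse."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model(x + delta)
        # Maximize the loss on the true label (untargeted attack) ...
        adv_loss = -F.cross_entropy(logits, y_true)
        # ... while an lq penalty (q -> 0 approaches the l0-norm) encourages a sparse perturbation.
        lq_penalty = (delta.abs() + eps).pow(q).sum()
        loss = adv_loss + lam * lq_penalty
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            # Keep the adversarial image a valid image in [0, 1].
            delta.data = torch.clamp(x + delta, 0.0, 1.0) - x
    return (x + delta).detach()
```

In practice, larger values of lam trade attack strength for sparsity; the paper's method may schedule or project these quantities differently.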
Keywords: Neural networks, Computer security, Facial recognition systems, Detection and tracking algorithms, RGB color model, Data modeling, Eye