Recent studies have demonstrated that deep neural networks can be fooled by adding small pixel-level perturbations to their input. Such perturbations are generally imperceptible to the human eye, yet they can completely subvert the output of a deep neural network classifier, enabling untargeted or targeted attacks. The common practice is to generate a perturbation for the target network and then superimpose it on the original image. In this paper, we instead attack deep neural networks by using a GAN to generate the adversarial target images directly. This method performs well in the black-box setting and is compatible with the preconditions of most neural network attacks. Using it, we achieved an 82% success rate on black-box targeted attacks on the CIFAR-10 and MNIST datasets, while ensuring that the generated images remain comparable to the originals.
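The abstract does not spell out the GAN architecture or the training objective, so the following PyTorch sketch is only one plausible instantiation of "generating the adversarial image directly" rather than an additive perturbation, not the authors' implementation. The `Generator`, `Discriminator`, the surrogate classifier `surrogate`, and the weight `sim_weight` are all hypothetical names and parameters: the generator is trained to (a) drive the surrogate toward the target class, (b) fool the discriminator so the output looks like a natural image, and (c) stay close to the original input.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps an input image directly to an adversarial image of the same shape."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1), nn.Tanh(),  # assumes pixels normalized to [-1, 1]
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores whether an image looks like a real dataset sample."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
        )

    def forward(self, x):
        return self.net(x)

def generator_step(gen, disc, surrogate, x, target_class, opt_g, sim_weight=10.0):
    """One hypothetical generator update for a targeted black-box attack:
    the surrogate stands in for the unseen target model."""
    opt_g.zero_grad()
    x_adv = gen(x)
    tgt = torch.full((x.size(0),), target_class, dtype=torch.long, device=x.device)
    # (a) targeted misclassification on the surrogate classifier
    attack_loss = F.cross_entropy(surrogate(x_adv), tgt)
    # (b) adversarial (GAN) loss: discriminator should rate x_adv as "real"
    gan_loss = F.binary_cross_entropy_with_logits(
        disc(x_adv), torch.ones(x.size(0), 1, device=x.device))
    # (c) similarity loss: keep the generated image close to the original
    sim_loss = F.mse_loss(x_adv, x)
    loss = attack_loss + gan_loss + sim_weight * sim_loss
    loss.backward()
    opt_g.step()
    return loss.item()
```

In this setup the discriminator would be trained in alternation on real images versus generator outputs, as in a standard GAN; transferability to the true black-box model then rests on the surrogate, a common assumption in black-box attack pipelines.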