Deep Convolutional Neural Networks (DCNN) have proven to be an exceptional tool for object recognition in various computer vision applications. However, recent findings have shown that such state-of-the-art models can be easily deceived by inserting slight, imperceptible perturbations at key pixels in the input image. In this paper, we focus on deceiving Automatic Target Recognition (ATR) classifiers. These classifiers are built to recognize specified targets in a scene and simultaneously identify their class types. In our work, we explore the vulnerabilities of DCNN-based target classifiers. We demonstrate significant progress in developing infrared adversarial targets by adding small perturbations to the input image such that the perturbation cannot be easily detected. The algorithm adapts to both targeted and non-targeted adversarial attacks. Our findings reveal promising results that reflect serious implications of adversarial attacks.
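The abstract describes adding small, hard-to-detect perturbations to an input image and supporting both targeted and non-targeted attacks. As an illustration only, and not the authors' algorithm, the sketch below shows a generic FGSM-style perturbation in PyTorch; the model, epsilon value, and cross-entropy loss are assumptions made for the example.

```python
# Minimal sketch of a generic FGSM-style adversarial perturbation (illustrative only;
# not the paper's method). The model, epsilon, and loss choice are assumptions.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01, targeted=False, target_label=None):
    """Add a small sign-gradient perturbation to `image`.

    targeted=False: push the prediction away from the true `label` (non-targeted).
    targeted=True : pull the prediction toward `target_label` (targeted).
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)

    if targeted:
        # Targeted attack: decrease the loss with respect to the desired target class.
        loss = F.cross_entropy(logits, target_label)
        direction = -1.0
    else:
        # Non-targeted attack: increase the loss with respect to the correct class.
        loss = F.cross_entropy(logits, label)
        direction = 1.0

    loss.backward()
    # Small, hard-to-detect perturbation: epsilon-scaled sign of the input gradient.
    adv_image = image + direction * epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()
```

In this hypothetical example, a single gradient step bounds the per-pixel change by epsilon, which keeps the perturbation visually subtle while shifting the classifier's decision.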
Uche M. Osahor, Nasser M. Nasrabadi, "Design of adversarial targets: fooling deep ATR systems," Proc. SPIE 10988, Automatic Target Recognition XXIX, 109880F (14 May 2019); https://doi.org/10.1117/12.2518945