Poster + Paper
Practical considerations for adversarial training
3 July 2024
Roger Sengphanith, Diego Marez, Shibin Parameswaran
Conference Poster
Abstract
Susceptibility to adversarial attacks is an issue that plagues many deep neural networks. One method to protect against these attacks is adversarial training (AT), which injects adversarially modified examples into the training data in order to achieve adversarial robustness. By exposing the model to malignant data during training, the model learns not to be fooled by such examples at inference time. Although AT is accepted as the de facto defense against adversarial attacks, questions still remain when using it in practical applications. In this work, we address some of these questions: What ratio of original-to-adversarial examples in the training set is needed for AT to be effective? Does model robustness gained from one type of AT generalize to another attack? Do the AT data ratio and generalization vary with model complexity? We attempt to answer these questions with carefully crafted experiments using the CIFAR10 dataset and ResNet models of varying complexity.
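To illustrate the idea of mixing original and adversarially modified examples at a chosen ratio, below is a minimal sketch of one adversarial-training step in PyTorch. It is not the authors' exact setup: the FGSM perturbation, the `adv_ratio` and `epsilon` parameters, and the function names are illustrative assumptions.

```python
# Minimal adversarial-training sketch (illustrative only, not the paper's exact recipe).
# Assumptions: a classification model, CIFAR10-style batches in [0, 1], FGSM perturbations,
# and an `adv_ratio` controlling the fraction of each batch replaced by adversarial examples.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=8 / 255):
    """Generate FGSM adversarial examples for a batch (x, y)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + epsilon * grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, adv_ratio=0.5, epsilon=8 / 255):
    """One training step mixing clean and adversarial examples at `adv_ratio`."""
    n_adv = int(adv_ratio * x.size(0))
    x = x.clone()
    if n_adv > 0:
        x[:n_adv] = fgsm_perturb(model, x[:n_adv], y[:n_adv], epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, sweeping `adv_ratio` from 0 to 1 corresponds to the original-to-adversarial training-set ratios the abstract asks about, and the perturbation function could be swapped for a different attack (e.g., PGD) to probe cross-attack generalization.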
© (2024) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Roger Sengphanith, Diego Marez, and Shibin Parameswaran "Practical considerations for adversarial training", Proc. SPIE 13054, Assurance and Security for AI-enabled Systems, 130540S (3 July 2024); https://doi.org/10.1117/12.3023037
KEYWORDS
Data modeling, Tumor growth modeling, Education and training, Adversarial training, Performance modeling, Visual process modeling, Defense and security
