Presentation + Paper
Detection of adversarial attacks on machine learning systems
12 June 2023
Matthew Judah, Jen Sierchio, Michael Planer
Abstract
One of the major issues limiting the adoption of machine learning (ML) in applications where accuracy is critical is the failure of an otherwise accurate system. Recent work has developed tools to independently measure the competency of machine learning models and the conditions that drive differences in competency. The purpose of this paper is to explore ways to detect and mitigate adversarial attacks by using these same tools to assess competency. We introduce BAE Systems' MindfuL software, which assesses ML competency under varying environmental conditions. We then consider several types of adversarial attacks and describe detection experiments. We examine the predicted performance and strategy of the model and use that information to detect adversarial attacks. We present the results of these experiments and discuss the implications of this work and potential future directions for this research.
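The detection idea described in the abstract, flagging inputs whose reported confidence diverges from an independently estimated competency, can be illustrated with a minimal sketch. The function name, the gap threshold, and the use of a single scalar competency score per sample are all assumptions made for illustration; they are not taken from the MindfuL software or the authors' experiments.

```python
import numpy as np

def flag_adversarial(confidences, competency_estimates, gap_threshold=0.3):
    """Flag samples whose classifier confidence exceeds the independently
    estimated competency by more than gap_threshold.

    confidences: per-sample softmax confidence of the deployed classifier.
    competency_estimates: per-sample competency predicted by an external
        competency model (a hypothetical stand-in for a tool such as MindfuL).
    Returns a boolean array: True where the sample looks suspicious.
    """
    confidences = np.asarray(confidences, dtype=float)
    competency_estimates = np.asarray(competency_estimates, dtype=float)
    gap = confidences - competency_estimates
    return gap > gap_threshold

if __name__ == "__main__":
    # Toy example: the third sample reports high confidence while the
    # competency model expects poor performance under the observed
    # conditions, so it is flagged as a possible adversarial input.
    conf = [0.62, 0.55, 0.97]
    comp = [0.60, 0.50, 0.40]
    print(flag_adversarial(conf, comp))  # [False False  True]
```

In this sketch a large positive gap between confidence and estimated competency serves as the detection signal; the actual experiments in the paper may use a different statistic or decision rule.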
Conference Presentation
© (2023) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Matthew Judah, Jen Sierchio, and Michael Planer "Detection of adversarial attacks on machine learning systems", Proc. SPIE 12538, Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications V, 125380D (12 June 2023); https://doi.org/10.1117/12.2664015
KEYWORDS
Data modeling
Machine learning
Image classification
Neural networks
Adversarial training
Computer simulations
