One of the major issues limiting the adoption of machine learning (ML) in applications where accuracy is critical is the failure of an otherwise accurate system. Recent work has developed tools to independently measure the competency of machine learning models and to identify the conditions that drive differences in competency. The purpose of this paper is to explore ways to detect and mitigate adversarial attacks by using these same tools to assess competency. We introduce BAE Systems' MindfuL software, which assesses ML competency under varying environmental conditions. We then consider several types of adversarial attacks and describe detection experiments in which we examine the model's predicted performance and strategy and use that information to detect attacks. We present the results of these experiments and discuss the implications of this work and potential future directions for this research.
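The core idea — flagging inputs where a model's estimated competency drops — can be illustrated with a minimal sketch. The MindfuL interface is not public, so the `competency_score` function below is a hypothetical stand-in that uses maximum softmax probability as a toy competency proxy; a real competency estimator would account for environmental conditions as described in the paper.

```python
import numpy as np

def competency_score(logits: np.ndarray) -> float:
    """Toy competency proxy: maximum softmax probability.

    This is a hypothetical stand-in for a real competency estimator
    (e.g., MindfuL, whose actual interface is not public).
    """
    z = logits - logits.max()          # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()    # softmax
    return float(p.max())

def flag_adversarial(logits: np.ndarray, threshold: float = 0.7) -> bool:
    """Flag an input as potentially adversarial when the competency
    estimate falls below a threshold."""
    return competency_score(logits) < threshold

# A confident prediction is not flagged; a diffuse one is.
print(flag_adversarial(np.array([8.0, 0.5, 0.2])))  # False
print(flag_adversarial(np.array([1.0, 0.9, 0.8])))  # True
```

In practice the threshold would be calibrated on clean data, and the competency signal would combine more than output confidence, since many adversarial examples are misclassified with high confidence.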
Matthew Judah, Jen Sierchio, and Michael Planer
"Detection of adversarial attacks on machine learning systems", Proc. SPIE 12538, Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications V, 125380D (12 June 2023); https://doi.org/10.1117/12.2664015