AI red teaming is far more than prompt injection and LLM jailbreaks. This talk applies a systematic lifecycle perspective to frame adversarial testing requirements and tactics for artificial intelligence (AI) systems. We'll explore why red teaming AI-enabled systems involves nuances that traditional test and evaluation may not cover, discuss how to build AI red teaming capabilities and measure their performance and effectiveness, and look ahead to the future of adversarial testing.
Joe Lucas
"AI red teaming", Proc. SPIE 13054, Assurance and Security for AI-enabled Systems, 130540H (10 June 2024); https://doi.org/10.1117/12.3029883