Learning behavior of offline reinforcement learning agents
7 June 2024
Abstract
Reinforcement learning (RL) agents offer significant value for military applications by effectively navigating the complex, dynamic environments typical of mission engineering and operational analysis. Once trained, these agents can inform mission planners about optimal strategies, tactics, or even innovative uses of different military platforms within a given scenario. In recent years, RL has become a major research area for automation and for solving complex sequential decision-making problems. However, a notable challenge lies in the inherent black-box nature of RL models and their inability to explain their decisions and actions. This limitation is a major barrier to adoption, especially in defense. This paper aims to study EXplainable RL (XRL) within an operational context. XRL is a distinct branch of Explainable Artificial Intelligence (XAI) that provides the transparency needed to address this challenge. This research is an effort to gain insight into the behavior of RL agents in an operational environment and to discuss explainability and interpretability through the lens of different roles within the decision-making pipeline.
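As a minimal sketch of what "offline" means in this setting, the example below fits a Q-function to a fixed log of transitions with no further environment interaction (fitted Q-iteration), then reads off a simple inspectable quantity of the kind XRL analyses build on. The toy dataset, dimensions, and all names here are illustrative assumptions, not the paper's actual agents or experimental setup.

```python
# Offline RL sketch: fitted Q-iteration on a fixed log of transitions.
# Everything here (toy problem, sizes, reward) is a hypothetical example.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical logged experience: (state, action, reward, next_state)
# for a toy problem with 5 discrete states and 2 discrete actions.
N_STATES, N_ACTIONS, GAMMA = 5, 2, 0.9
states = rng.integers(0, N_STATES, size=1000)
actions = rng.integers(0, N_ACTIONS, size=1000)
rewards = (states == 4).astype(float)  # reward only in state 4
# Action 1 steps right, action 0 steps left, clipped to the state range.
next_states = np.clip(states + np.where(actions == 1, 1, -1), 0, N_STATES - 1)

Q = np.zeros((N_STATES, N_ACTIONS))
for _ in range(100):  # fitted Q-iteration sweeps over the static log
    targets = rewards + GAMMA * Q[next_states].max(axis=1)
    # Regress Q(s, a) toward the bootstrapped targets, batch-averaged.
    for s in range(N_STATES):
        for a in range(N_ACTIONS):
            mask = (states == s) & (actions == a)
            if mask.any():
                Q[s, a] = targets[mask].mean()

# Simple inspectable artifacts: the greedy policy implied by the data,
# and the per-state action gap (a crude measure of decision confidence).
print("greedy action per state:", Q.argmax(axis=1))
print("action gap per state:", Q.max(axis=1) - Q.min(axis=1))
```

The key property illustrated is that the agent never queries an environment during learning; all value estimates come from the fixed log, which is what makes post-hoc explanation of the learned behavior both necessary and possible from the data alone.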
Conference Presentation
(2024) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Indu Shukla, Haley R. Dozier, and Althea C. Henslee "Learning behavior of offline reinforcement learning agents", Proc. SPIE 13051, Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications VI, 130510N (7 June 2024); https://doi.org/10.1117/12.3014099
KEYWORDS
Machine learning, Decision making, Data modeling, Artificial intelligence, Control systems, Defense and security, Simulations