Presentation + Paper
Dynamic reinforcement learning for network defense: botnet detection and eradication
7 June 2024
Robert M. Schabinger, Caleb Carlin, Jonathan Mullin, David A. Bierbrauer, Emily A. Nack, John A. Pavlik, Alexander V. Wei, Nathaniel D. Bastian, Metin B. Ahiskali
Abstract
In this work, we demonstrate the potential of dynamic reinforcement learning (RL) methods to revolutionize cybersecurity. The RL framework we develop is shown to be capable of shutting down an aggressive botnet, which initially uses spear phishing to establish itself in a Department of Defense (DoD) network. To ensure a suitable real-time response, we employ CP, a transformer model trained for network anomaly detection, to factorize the state space accessible to our RL agent. Because the fidelity of our cyber scenario is of the utmost importance for meaningful RL training, we leverage the CyberVAN emulation environment to model an appropriate DoD enterprise network to attack and defend. Our work represents an important step towards harnessing the power of RL to automate general and fully realistic Defensive Cyber Operations (DCOs).
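The abstract does not give implementation details, but the core idea of using a pretrained anomaly-detection transformer to factorize the RL agent's state space can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the class name FlowEncoder, the feature dimensions, the per-host pooling, and the "isolate the most anomalous host" action are hypothetical stand-ins, not the authors' CP model or their defensive action space.

```python
# Hypothetical sketch: compress raw network telemetry into a compact RL
# observation using a pretrained transformer anomaly detector, so the
# defensive agent acts on per-host anomaly scores rather than raw packets.
import torch
import torch.nn as nn


class FlowEncoder(nn.Module):
    """Stand-in for a pretrained anomaly-detection transformer (assumed, not CP itself)."""

    def __init__(self, n_features: int = 16, d_model: int = 32):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.anomaly_head = nn.Linear(d_model, 1)  # per-flow anomaly logit

    def forward(self, flows: torch.Tensor) -> torch.Tensor:
        # flows: (n_hosts, n_flows, n_features) -> one anomaly score per host
        h = self.encoder(self.embed(flows))
        return torch.sigmoid(self.anomaly_head(h)).mean(dim=(1, 2))


def build_observation(encoder: FlowEncoder, flows_per_host: torch.Tensor) -> torch.Tensor:
    """Factorize the state: an n_hosts-dimensional observation instead of raw telemetry."""
    with torch.no_grad():
        return encoder(flows_per_host)


if __name__ == "__main__":
    encoder = FlowEncoder()
    # Fake telemetry: 8 hosts, 20 flows each, 16 features per flow.
    telemetry = torch.randn(8, 20, 16)
    obs = build_observation(encoder, telemetry)
    # A toy "policy": isolate the host with the highest anomaly score.
    action = int(torch.argmax(obs))
    print(f"observation = {obs.tolist()}")
    print(f"defensive action: isolate host {action}")
```

In this sketch the RL policy never sees raw packet data; it only receives the low-dimensional anomaly scores, which is one plausible reading of "factorizing the state space" for real-time response.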
Conference Presentation
Robert M. Schabinger, Caleb Carlin, Jonathan Mullin, David A. Bierbrauer, Emily A. Nack, John A. Pavlik, Alexander V. Wei, Nathaniel D. Bastian, and Metin B. Ahiskali "Dynamic reinforcement learning for network defense: botnet detection and eradication", Proc. SPIE 13051, Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications VI, 130511D (7 June 2024); https://doi.org/10.1117/12.3012783
KEYWORDS
Network security, Computer networks, Defense and security, Machine learning, Inspection, Monte Carlo methods, Data modeling
