Sensors used in intelligence, surveillance and reconnaissance (ISR) operations generate vast amounts of data. High-volume analytical capabilities are needed to process data from multi-modal sensors in order to develop and test complex computational and deep learning models in support of U.S. Army Multi-Domain Operations (MDO). The Army Research Laboratory designs, develops and tests Artificial Intelligence and Machine Learning (AI/ML) algorithms using large repositories of in-house data. To efficiently process the data as well as design, build, train and deploy models, parallel and distributed algorithms are needed. Deep learning frameworks provide language-specific, container-based building blocks for constructing neural networks tailored to specific target applications. This paper discusses applications of AI/ML deep learning frameworks and Software Development Kits (SDKs) and demonstrates and compares specific multi-core processor and NVIDIA Graphics Processing Unit (GPU) implementations for desktop and cloud environments. The frameworks selected for this research are PyTorch and MATLAB. Amazon Web Services (AWS) SageMaker was used to launch machine learning instances ranging from general-purpose computing to GPU instances. Detailed processes, example code, performance enhancements, best practices and lessons learned are included for publicly available acoustic and image datasets. Research results indicate that parallel implementations of data preprocessing steps saved significant time, but more expensive GPUs did not provide processing-time advantages for the machine learning algorithms tested.
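As an illustration of the kind of comparison described above, the sketch below (not the paper's code; the synthetic dataset, small model, and worker counts are placeholders) uses PyTorch's DataLoader num_workers setting to parallelize per-sample preprocessing and times one training pass on CPU or GPU.

```python
# Minimal sketch (assumptions only, not the paper's implementation): compare
# serial vs. parallel data preprocessing via DataLoader workers, on CPU or GPU.
import time
import torch
from torch import nn
from torch.utils.data import Dataset, DataLoader


class SyntheticImages(Dataset):
    """Stand-in for an image dataset; per-sample preprocessing is simulated."""

    def __init__(self, n_items=2048, shape=(3, 64, 64)):
        self.n_items, self.shape = n_items, shape

    def __len__(self):
        return self.n_items

    def __getitem__(self, idx):
        x = torch.randn(self.shape)
        # Simulated per-sample preprocessing (e.g., normalization).
        x = (x - x.mean()) / (x.std() + 1e-8)
        return x, idx % 10


def time_epoch(num_workers, device):
    """Time one pass over the data for a given worker count and device."""
    loader = DataLoader(SyntheticImages(), batch_size=64,
                        num_workers=num_workers,
                        pin_memory=(device.type == "cuda"))
    model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                          nn.Linear(8, 10)).to(device)
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    start = time.perf_counter()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return time.perf_counter() - start


if __name__ == "__main__":
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    for workers in (0, 4):  # 0 = load in the main process; 4 = parallel workers
        print(f"workers={workers} device={device}: "
              f"{time_epoch(workers, device):.2f}s")
```

The same loop could be run on a CPU-only and a GPU-backed SageMaker instance to contrast preprocessing parallelism against raw accelerator speed, which is the style of comparison the paper reports.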