Deep learning has revolutionized the performance of many computer vision systems in recent years. In particular, deep convolutional neural networks have demonstrated ground-breaking performance in object classification from imagery. However, these techniques typically require sizeable volumes of training data to estimate the large number of parameters they must learn. In many situations, sufficient volumes of imagery for the object types of interest are unavailable. One solution is to use an initial training set with properties similar to those of the objects of interest but for which a large number of labelled examples are available. A network trained on this set can then be tuned using a smaller sample of the actual target objects. This approach, known as transfer learning, has shown considerable success in conventional imaging domains. Unfortunately, for Synthetic Aperture Radar (SAR) sensors, large volumes of labelled training samples of any kind are scarce, and the challenge is exacerbated by variations in imaging geometry and sensor configuration. This paper examines the use of simulated SAR imagery to pre-train a deep neural network. The simulated imagery is generated by a straightforward process capable of producing sufficient volumes of training exemplars in a modest amount of time. These samples are used to train a deep neural network, which is then retrained using a comparatively small volume of MSTAR SAR imagery. The value of this pre-training process is assessed using visualization techniques that explain model behaviour, and the assessment highlights some interesting aspects of bias in the MSTAR SAR image set.
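The following sketch illustrates the two-stage workflow outlined above: pre-training a CNN on simulated SAR exemplars, then retraining it on a small measured set. It assumes PyTorch, and the architecture, data shapes, and hyperparameters are illustrative placeholders rather than the configuration used in the paper.

```python
# Minimal pre-train/fine-tune sketch. Random tensors stand in for the
# simulated and MSTAR chips; all names and settings here are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

NUM_CLASSES = 10  # e.g. the ten MSTAR target classes

class SmallCNN(nn.Module):
    """Illustrative CNN for single-channel (SAR magnitude) image chips."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Two 2x poolings reduce a 128x128 input to 32x32 with 32 channels.
        self.classifier = nn.Linear(32 * 32 * 32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def train(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()

# Stage 1: pre-train on a large volume of simulated SAR chips.
sim_x = torch.randn(1000, 1, 128, 128)            # stand-in for simulated chips
sim_y = torch.randint(0, NUM_CLASSES, (1000,))
model = SmallCNN()
train(model, DataLoader(TensorDataset(sim_x, sim_y), batch_size=32, shuffle=True),
      epochs=10, lr=1e-3)

# Stage 2: retrain on a comparatively small set of measured MSTAR chips,
# typically at a lower learning rate so the pre-trained weights are refined
# rather than overwritten.
real_x = torch.randn(100, 1, 128, 128)            # stand-in for MSTAR chips
real_y = torch.randint(0, NUM_CLASSES, (100,))
train(model, DataLoader(TensorDataset(real_x, real_y), batch_size=16, shuffle=True),
      epochs=5, lr=1e-4)
```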