Convolutional Neural Networks (CNNs) have been established as effective deep learning models for hyperspectral image classification because they exploit both spectral and spatial information. In this study, the performance of a two-dimensional (2D) CNN architecture is evaluated at hyperspectral and multispectral resolution. Two types of multispectral data are analyzed: original and transformed multispectral data. Hyperspectral bands are transformed to the spectral resolution of multispectral bands by averaging the reflectances of the hyperspectral narrow bands that fall within the spectral range of each multispectral band. The well-known Pavia University dataset and a new Pear orchard dataset are investigated. For the Pear orchard dataset, classification is performed with both types of multispectral data. All experiments are carried out with the same 2D CNN architecture. On the Pavia University dataset, hyperspectral and transformed multispectral data achieve overall accuracies (OA, %) of 94.29±1.28 and 94.27±2.01, respectively, with 20% of the samples used for training. On the Pear orchard dataset, hyperspectral, multispectral, and transformed multispectral data achieve OA (%) of 91.59±0.89, 88.65±1.35, and 93.24±0.16, respectively, with 20% of the samples used for training. It is evident that transformed multispectral data, which retains inherent hyperspectral information, provides similar or better performance than hyperspectral data. Furthermore, with a 3D CNN architecture, classification performance improves on the Pavia University dataset, whereas it remains statistically similar on the Pear orchard dataset. These promising results illustrate that the performance of CNNs, even on a small dataset, is comparable to several published state-of-the-art results on the same datasets.
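The band-transformation step described above, averaging the hyperspectral narrow bands whose center wavelengths fall within each multispectral band's spectral range, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name, wavelength arrays, and band ranges are assumptions.

```python
import numpy as np

def transform_to_multispectral(hsi_cube, band_centers, ms_ranges):
    """Average hyperspectral bands whose center wavelengths fall
    within each multispectral band's spectral range.

    hsi_cube     : array of shape (H, W, B) with B hyperspectral bands
    band_centers : array of B center wavelengths (e.g. in nm)
    ms_ranges    : list of (lo, hi) wavelength ranges, one per
                   multispectral band (hypothetical values)
    """
    out = []
    for lo, hi in ms_ranges:
        # select the narrow bands falling inside this broad band
        mask = (band_centers >= lo) & (band_centers <= hi)
        # average their reflectances to emulate the broad band
        out.append(hsi_cube[..., mask].mean(axis=-1))
    return np.stack(out, axis=-1)  # shape (H, W, len(ms_ranges))
```

The transformed cube has the spectral resolution of the multispectral sensor while each broad band still aggregates the underlying narrow-band reflectances, which is why it can retain hyperspectral information.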
Water region estimation is one of the fundamental classification tasks in remote sensing. Several previous works relied on traditional practices such as spectral analysis and statistical approaches for water region estimation. However, producing consistent global-scale water estimation results remains a relatively challenging task. In computer vision applications, on the other hand, the Convolutional Neural Network (CNN) has emerged as a powerful tool for classification tasks, and the Recurrent Convolutional Neural Network (R-CNN) was recently proposed to improve classification results. Inspired by the R-CNN, this research proposes a recurrent-feedback encoder-decoder without max-pooling for global-scale water region estimation using temporal Landsat-8 images. The proposed R-CNN takes three Landsat-8 images: the current observation (t0), for which the water region is predicted, and two previous observations of the same location (t−1, t−2). These three temporal observations of the same location are used for training with ground-truth labels (water/non-water) from the current observation. The proposed R-CNN model consumes temporal input data and produces multi-temporal output for water region estimation. Experiments show promising results, especially when concatenated recurrent feedback features are used. The model significantly outperforms a baseline model and U-Net (without the recurrent feedback structure). A detailed comparison on temporal Landsat-8 images strongly affected by sun glint, clouds, and other atmospheric conditions shows that the proposed model has the potential to produce reliable water region estimates where U-Net, the baseline model, and the single R-CNN model fail.
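The recurrent feedback idea, in which each step's decoder output is concatenated with the next temporal observation before encoding, can be sketched as below. The `encode` and `decode` callables stand in for the actual encoder-decoder network, and all names and shapes are illustrative assumptions rather than the paper's architecture.

```python
import numpy as np

def recurrent_feedback_pass(observations, encode, decode, feat_channels):
    """Process temporal observations oldest-to-newest (t-2, t-1, t0).

    At each step, the decoder output from the previous step is
    concatenated channel-wise with the current observation — the
    recurrent feedback — so the prediction for t0 is conditioned on
    the two earlier observations of the same location.
    """
    h, w = observations[0].shape[:2]
    # no feedback is available at the first step, so start from zeros
    feedback = np.zeros((h, w, feat_channels))
    outputs = []
    for obs in observations:
        x = np.concatenate([obs, feedback], axis=-1)
        feats = encode(x)         # encoder features (placeholder)
        feedback = decode(feats)  # decoder output, fed back next step
        outputs.append(feedback)
    return outputs  # one output map per timestep (multi-temporal output)
```

Because every timestep produces an output map, the model yields a multi-temporal water-region estimate rather than a single prediction, matching the behavior described above.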