As an important tropical fruit, dragon fruit plays a key role in stabilizing the fruit market through its yield and quality. Remote sensing technology can quickly and efficiently extract dragon fruit information over a wide area, providing important data support for orchard management, disease prevention and control, and ecological environment monitoring, and thus contributing to the sustainable development of agriculture. However, the small spectral difference between dragon fruit and other fruit trees in remote sensing images makes dragon fruit extraction a challenge. In this study, feature combination and the Swin Transformer deep learning model are selected as techniques for extracting dragon fruit in the study area. Bands of the original Sentinel-2A multispectral image are carefully selected, and a feature combination scheme is generated by adding the vegetation indices NDVI, EVI, RVI and DVI, yielding a dataset of 14 features for the Swin Transformer model. Dragon fruit is then extracted via semantic segmentation with the Swin Transformer model, and the result is compared with those of the classical deep learning models FCN, UNet and DeepLabV3. The results show that the Swin Transformer model achieves the highest extraction accuracy for dragon fruit among the compared models.
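The four vegetation indices added to the band stack have standard textbook definitions. A minimal sketch of computing them from Sentinel-2A surface-reflectance bands B8 (NIR), B4 (red) and B2 (blue), assuming reflectance values scaled to [0, 1] (the exact band preprocessing in the paper may differ):

```python
import numpy as np

def vegetation_indices(nir, red, blue):
    """Compute NDVI, EVI, RVI and DVI from Sentinel-2A reflectance bands
    (B8 = NIR, B4 = red, B2 = blue), given as float arrays in [0, 1].
    EVI uses the standard coefficients G=2.5, C1=6, C2=7.5, L=1."""
    nir, red, blue = (np.asarray(a, dtype=float) for a in (nir, red, blue))
    ndvi = (nir - red) / (nir + red)                              # normalized difference
    evi = 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)  # enhanced vegetation index
    rvi = nir / red                                               # ratio vegetation index
    dvi = nir - red                                               # difference vegetation index
    return ndvi, evi, rvi, dvi
```

Stacking these four index rasters onto the selected spectral bands gives the kind of multi-channel input described above; band names and scaling here are assumptions, not taken from the paper.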
To address the challenges of effective feature construction and model selection that rice extraction from remote sensing images still faces, feature optimization and a combined deep learning model are considered. Taking a Sentinel-2A image as the data source, a multi-dimensional feature dataset including spectral features, red-edge features, vegetation indices, water indices and texture features is constructed. The ReliefF-RFE algorithm is used to optimize the features of the dataset for rice extraction, and the combined UPerNet-Swin Transformer model is used to extract rice in the study area based on the optimized features. Comparison with other feature combination schemes and deep learning models demonstrates that: (1) the features optimized by the ReliefF-RFE algorithm give the best segmentation results for rice extraction, with precision, recall, F1 score and IoU reaching 92.77%, 92.28%, 92.52% and 86.09%, respectively; and (2) compared with the PSPNet, UNet, DeepLabv3+ and original UPerNet models under the same optimal feature combination scheme, the combined UPerNet-Swin Transformer model produces fewer misclassifications and omissions, with the F1 score and IoU improved by up to 11.12% and 17.46%, respectively.
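The ReliefF-RFE step combines two well-known ideas: ReliefF scores each feature by how well it separates nearest neighbors of different classes, and recursive feature elimination repeatedly drops the weakest feature. A minimal sketch of that combination for binary labels, using one nearest hit/miss per sample (the paper's exact scoring and stopping rule are not given here, so this is an illustrative approximation):

```python
import numpy as np

def relieff_scores(X, y, rng=None):
    """ReliefF-style feature weights for binary labels, one nearest
    hit and one nearest miss per sample (a simplified Relief variant)."""
    rng = np.random.default_rng(rng)
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    n, d = X.shape
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0                      # avoid division by zero
    w = np.zeros(d)
    for i in rng.permutation(n):
        diffs = np.abs(X - X[i]) / span        # per-feature normalized distances
        dist = diffs.sum(axis=1)
        dist[i] = np.inf                       # exclude the sample itself
        same = y == y[i]
        same[i] = False
        hit = np.argmin(np.where(same, dist, np.inf))    # nearest same-class sample
        miss = np.argmin(np.where(~same, dist, np.inf))  # nearest other-class sample
        w += diffs[miss] - diffs[hit]          # reward separating, penalize scattering
    return w / n

def relieff_rfe(X, y, n_keep):
    """Recursive feature elimination driven by ReliefF weights:
    drop the lowest-scoring feature until n_keep features remain."""
    X = np.asarray(X, dtype=float)
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        w = relieff_scores(X[:, keep], y)
        keep.pop(int(np.argmin(w)))
    return keep
```

In the paper's setting, `X` would hold the pixel-level values of the full multi-dimensional feature set (spectral, red-edge, index and texture channels) and `keep` the indices of the optimized feature subset; the function and variable names here are hypothetical.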