We present MangoGAN, a generative adversarial network (GAN)-based deep learning semantic segmentation model for detecting mango tree crowns in remotely sensed aerial images. The images were acquired by low-altitude remote sensing with a quadrotor unmanned aerial vehicle flown over a mango orchard, carrying a visible-spectrum (RGB) optical sensor as the payload. MangoGAN was trained on 1430 image patches of size 240 × 240 pixels, and testing was carried out on 160 images. Results are analyzed using precision, recall, and F1-score metrics derived from the confusion matrix, and by visualization using the Grad-CAM method. The performance of MangoGAN is compared with peer architectures trained on the same data; MangoGAN outperforms its peers.
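As a minimal sketch (not from the paper), the precision, recall, and F1-score metrics mentioned above can be computed from confusion-matrix pixel counts as follows; the counts used in the example are illustrative placeholders, not the paper's results.

```python
def segmentation_metrics(tp: int, fp: int, fn: int):
    """Precision, recall, and F1-score from true-positive, false-positive,
    and false-negative pixel counts of a binary segmentation mask."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    denom = precision + recall
    f1 = (2 * precision * recall / denom) if denom else 0.0
    return precision, recall, f1

# Illustrative counts only (hypothetical, not MangoGAN's reported numbers).
p, r, f1 = segmentation_metrics(tp=900, fp=100, fn=50)
print(round(p, 3), round(r, 3), round(f1, 3))  # → 0.9 0.947 0.923
```

The F1-score is the harmonic mean of precision and recall, so it penalizes a model that trades one heavily for the other, which is why it is a common summary metric for segmentation.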