Montana and similar regions contain numerous rivers and lakes that are too small to be spatially resolved by the satellites that provide water quality estimates. Unoccupied Aerial Vehicles (UAVs) can obtain such data with much higher spatial and temporal resolution. Water properties are traditionally retrieved from passively measured spectral radiance, but polarization has been shown to improve retrievals of the attenuation-to-absorption ratio, enabling calculation of the scattering coefficient for in-water particulate matter. This in turn improves retrievals of other parameters, such as the bulk refractive index and the particle size distribution. This presentation describes experiments conducted to develop a data set for water remote sensing using combined UAV-based hyperspectral and polarization cameras, supplemented with in-situ sampling, at Flathead Lake in northwestern Montana, along with the results of preliminary data analysis. A symbolic regression model was used to derive two equations: one relating the degree of linear polarization (DoLP), the angle of polarization (AoP), and the linear Stokes parameters at wavelengths of 440 nm, 550 nm, and 660 nm to chlorophyll-a content, and one relating the same data to the attenuation-to-absorption ratio at 440 nm, 550 nm, and 660 nm. Symbolic regression is a machine learning method whose inputs are vectors and whose output is an analytic expression, typically found by a genetic algorithm. An advantage of this approach is that it combines the explainability of a simple equation with the accuracy of less explainable models, such as the genetic algorithm that selects it.
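The abstract does not give the polarization formulas it uses, but DoLP and AoP are conventionally computed from the linear Stokes parameters (I, Q, U) as DoLP = sqrt(Q² + U²)/I and AoP = ½·arctan2(U, Q). A minimal sketch of these standard definitions, not the authors' retrieval model:

```python
import numpy as np

def dolp_aop(I, Q, U):
    """Degree of linear polarization (DoLP) and angle of polarization (AoP)
    from the linear Stokes parameters, using the standard definitions."""
    dolp = np.sqrt(Q**2 + U**2) / I
    aop = 0.5 * np.arctan2(U, Q)  # radians
    return dolp, aop

# Fully linearly polarized light along +Q: DoLP = 1, AoP = 0
dolp, aop = dolp_aop(1.0, 1.0, 0.0)
```

These per-pixel quantities, evaluated at each of the three wavelengths, would form the input vector handed to the symbolic regression search.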
This project aims to develop a hyperspectral remote sensing approach to detect potato virus Y (PVY, a pathogenic virus of the family Potyviridae) from an Unpiloted Aerial Vehicle (UAV). The hyperspectral camera is mounted on the UAV to capture per-pixel leaf reflectance and identify subtle color changes as an indicator of PVY. PVY-infected plants tend to show visible mosaic patterns on their leaves, providing a potential signal for optical detection. Managing PVY is one of the priorities for the Montana Seed Potato Growers, necessitating the development of a rapid detection system for PVY. We aim to evaluate whether PVY can be detected from a UAV using a radiometrically calibrated hyperspectral sensor to measure upwelling radiance and a calibrated spectrometer to measure downwelling irradiance. We initially planned to use publicly available data from Wageningen University, Netherlands, to build a baseline for our model under controlled lighting. However, we encountered difficulty working with these data and hope to revisit this portion of the effort in the future.
Unmanned aerial vehicles (UAVs) have enjoyed a meteoric rise in both capability and accessibility, a trend that shows no signs of slowing, which has led to a growing need for detect-and-avoid technologies. Increasingly commonplace UAV encounters have driven the development of a number of UAV detection methods, most based on radar, acoustic, visual, passive radio-frequency, or lidar detection technology. On the software side, many of these UAV detection systems have begun to incorporate machine learning (ML) to improve detection and classification capabilities. In this work, we detail a new lidar- and ML-based propeller rotation analysis and classification method using a wingbeat-modulation lidar system. This system has the potential to sense characteristics, such as propeller speed and pitch, that other systems struggle to detect. This paper explores the preliminary development of our method and its potential capabilities and limitations. Using this method, propeller speed was detected with a worst-case percent error of approximately 3.7% and an average percent error of approximately 2% when the beam was positioned on the propeller. Furthermore, wide neural networks were able to accurately detect and characterize propeller signals when trained to determine either beam position or propeller orientation.
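The core idea behind recovering propeller speed from a wingbeat-modulation lidar return can be illustrated with a spectral-peak estimate. The sketch below is purely illustrative (synthetic signal, assumed two-blade propeller, assumed sample rate), not the authors' processing chain; note that the blade-pass frequency is the blade count times the rotation rate:

```python
import numpy as np

fs = 10_000.0     # sample rate in Hz (illustrative)
rot_hz = 50.0     # "true" rotation rate: 50 rev/s = 3000 RPM
n_blades = 2      # blade-pass frequency = n_blades * rot_hz
t = np.arange(0, 1.0, 1 / fs)

# Simulated lidar return: steady reflection plus blade-pass modulation and noise
rng = np.random.default_rng(0)
signal = (1.0
          + 0.3 * np.sin(2 * np.pi * n_blades * rot_hz * t)
          + 0.05 * rng.standard_normal(t.size))

# Estimate the modulation frequency from the strongest FFT peak (DC removed)
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak_hz = freqs[np.argmax(spectrum)]
est_rot_hz = peak_hz / n_blades
```

With a 1 s window the frequency resolution is 1 Hz, so the estimate lands on the 100 Hz blade-pass peak and recovers the 50 Hz rotation rate.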
Computer vision algorithms can quickly analyze numerous images and identify useful information with high accuracy. Recently, computer vision has been used to identify 2D materials in microscope images. 2D materials have important fundamental properties that make them attractive for many potential applications, including several in quantum information science and engineering. To use these materials for research and product development, single-layer 2D crystallites must be prepared through an exfoliation procedure and then identified using reflected-light optical microscopy. Performing these searches manually is a time-consuming and tedious task. Deploying deep learning-based computer vision algorithms for 2D material search can automate flake detection with minimal need for human intervention. In this work, we have implemented a new deep learning pipeline to classify crystallites of 2D materials by coarse thickness class in reflected-light optical micrographs. We used DetectoRS as the object detector and trained it on 177 images containing hexagonal boron nitride (hBN) flakes of varying thickness. The trained model achieved high detection accuracy for the rare category of thin flakes (< 50 atomic layers thick).
In recent years, lidar-based remote sensing has been used to detect and classify flying insects, exploiting the fact that oscillating wings produce a modulated return signal; oscillations from other objects, such as helicopters or drones, might be detected in a similar manner. Several groups have successfully used machine learning to classify insects in laboratory settings, but data processing in field studies is still performed manually. Compared to laboratory studies, field studies pose additional challenges, such as non-stationary background clutter and high class imbalance. The models we used for detection and classification were the common boosting algorithm AdaBoost, the hybrid sampling/boosting algorithm RUSBoost, and a neural network with a single hidden layer. Previously, we found that the best performance came from the neural network and AdaBoost. In this paper, we test machine learning models trained on field data collected at Hyalite Creek against other, unlabeled field data; in doing so, we demonstrate each model's ability to detect insects in data from new, unseen environments. We used labels created by a domain expert to manually check how many of the predicted images actually contained insects.
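A minimal sketch of the kind of imbalanced detection problem described above, using scikit-learn's AdaBoost on synthetic stand-in data (the features, class ratio, and dataset here are invented for illustration and are not the Hyalite Creek data):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a high-class-imbalance problem:
# ~5% positives ("insect") vs. ~95% negatives ("clutter").
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

# Under heavy imbalance, recall on the rare class is more informative
# than raw accuracy (a trivial all-negative model scores ~95% accuracy).
recall = (pred[y_te == 1] == 1).mean()
```

RUSBoost addresses the same imbalance by randomly undersampling the majority class inside each boosting round rather than reweighting alone.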
KEYWORDS: MATLAB, Field programmable gate arrays, Simulink, LIDAR, Feature extraction, Neural networks, Digital signal processing, Machine learning, Algorithm development
Real-time monitoring of insects has important applications in entomology, such as managing agricultural pests and monitoring species populations, which are rapidly declining. However, most monitoring methods are labor intensive, invasive, and not automated. Lidar-based methods are a promising, non-invasive alternative and have been used in recent years for various insect detection and classification studies. In a previous study, we used supervised machine learning to detect insects in lidar images collected near Hyalite Creek in Bozeman, Montana. Although the classifiers we tested successfully detected insects, the analysis was performed offline on a laptop computer. For the analysis to be useful in real-time settings, the computing system needs to be an embedded system capable of computing results in real time. In this paper, we present work in progress toward implementing our software routines in hardware on a field-programmable gate array.
Airborne lidar data for fishery surveys often do not contain physics-based features that can be used to identify fish; consequently, the fish must be manually identified, which is a time-consuming process. To reduce the time required to identify fish, supervised machine learning was successfully applied to lidar data from fishery surveys to automate the process of identifying regions with a high probability of containing fish. Using data from Yellowstone Lake and the Gulf of Mexico, multiple experiments were run to simulate real-world scenarios. Although the human cannot be fully removed from the loop, the amount of data that would require manual inspection was reduced by 61.14% and 26.8% in the Yellowstone Lake and Gulf of Mexico datasets, respectively.
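The triage idea in this abstract (automatically excluding low-probability regions so a human reviews only the rest) can be sketched with thresholded classifier scores. Everything below is synthetic and hypothetical, including the beta-distributed scores; it illustrates how a reduction-in-manual-review figure arises, not the authors' actual classifier or datasets:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-region classifier scores: most regions are empty water
# (scores skewed toward 0); a few contain fish (scores skewed toward 1).
empty_scores = rng.beta(1, 8, size=950)
fish_scores = rng.beta(8, 2, size=50)
scores = np.concatenate([empty_scores, fish_scores])
labels = np.concatenate([np.zeros(950), np.ones(50)])

# Flag only high-probability regions for manual inspection;
# everything else is excluded from the reviewer's workload.
threshold = 0.5
flagged = scores >= threshold
reduction = 1.0 - flagged.mean()      # fraction removed from manual review
recall = flagged[labels == 1].mean()  # fraction of true fish still flagged
```

The human stays in the loop to confirm flagged regions, but the fraction of data requiring inspection drops sharply, which is the effect the 61.14% and 26.8% figures quantify.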