Hyperspectral sensors measure the electromagnetic spectrum in hundreds of narrow and contiguous spectral bands. The recorded data exhibits characteristic features of materials and objects. For tasks within the security and defense domain, this valuable information can be gathered remotely using drones, airplanes or satellites. In 2021, we conducted an experiment in Ettlingen, Germany, using a drone-borne hyperspectral sensor to record data of various camouflage setups. The goal was to infer camouflage detection limits of typical hyperspectral data evaluation approaches for different scenarios. The experimental site is a natural strip of vegetation between two corn fields. Our main experiment was a camouflage garage that covered different target materials and objects. The distance between the targets and the roof of the camouflage garage was modified during the experiment. Together with the target variations, this was done to determine the material-dependent detection limits and the transparency of the camouflage garage. Another experiment was carried out using two different types of camouflage nets in various states of occlusion by freshly cut vegetation. This manuscript contains a detailed experiment description as well as the first results of the camouflage transparency and occlusion experiments. We show that it is possible to identify the target inside the camouflage garage and that vegetation cover does not provide suitable additional camouflage against hyperspectral sensors.
This paper presents three experiments from our HyperGreding’19 campaign that combine multi-temporal hyperspectral data to address several essential questions in target detection. The experiments were conducted over Greding, Germany, using a Headwall VNIR/SWIR co-aligned sensor mounted on a drone flown at an altitude of 80 m. Additionally, high-resolution aerial RGB data, GPS measurements, and reference data from a field spectrometer were recorded to support the hyperspectral data pre-processing and the evaluation of the individual experiments. The focus of the experiments is the detectability of camouflage materials and camouflaged objects. To transfer hyperspectral analysis to a practical setting, the analysis must be robust to realistic and changing conditions. The first experiment investigates the SAM and SAMZID change detection approaches to demonstrate their usefulness for detecting moving objects within the recorded scene. The goal is to eliminate unwanted changes such as shadow areas. The second experiment evaluates the detection of different camouflage net types over two days. Due to the varying solar elevation angle during the day, this includes camouflage nets that are shadowed during one flight and brightly illuminated during another. We demonstrate the performance of typical hyperspectral target detection and classification approaches for robust detection under these conditions. Finally, the third experiment aims to detect objects and materials behind the cover of camouflage nets by using a camouflage garage. We show that some materials can be detected using an unmixing approach.
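As an illustration of the spectral-angle measure underlying both the per-pixel target detection and the multi-temporal change detection mentioned above, the following is a minimal NumPy sketch of SAM applied to a hyperspectral cube and, in the two-acquisition case, to pixel pairs across time. Function names and the cube layout (rows, cols, bands) are assumptions made for this sketch; the SAMZID-specific processing of the campaign is not reproduced here.

```python
import numpy as np

def sam_map(cube: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Spectral angle (radians) between every pixel of a (rows, cols, bands)
    cube and a single reference spectrum; small angles mean high similarity."""
    rows, cols, bands = cube.shape
    flat = cube.reshape(-1, bands)
    cos = flat @ reference / (
        np.linalg.norm(flat, axis=1) * np.linalg.norm(reference) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0)).reshape(rows, cols)

def sam_change_map(cube_t0: np.ndarray, cube_t1: np.ndarray) -> np.ndarray:
    """Change detection in the SAM sense: angle between the spectra of the
    same pixel in two co-registered acquisitions of identical shape."""
    flat0 = cube_t0.reshape(-1, cube_t0.shape[-1])
    flat1 = cube_t1.reshape(-1, cube_t1.shape[-1])
    cos = np.einsum('ij,ij->i', flat0, flat1) / (
        np.linalg.norm(flat0, axis=1) * np.linalg.norm(flat1, axis=1) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0)).reshape(cube_t0.shape[:2])
```

Thresholding `sam_map` flags pixels that match a target spectrum, while thresholding `sam_change_map` flags pixels whose spectra changed between flights, e.g. because an object moved.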
In the last decades, the amount of data obtained from electro-optical sensor systems has been steadily increasing in remote sensing (RS). Manual analysis of remote sensing images is a time-consuming task. Therefore, machine learning methods for detection and classification have become an appealing field within RS. In particular, the family of region-based convolutional neural networks (R-CNN) shows considerable success in different RS tasks. Advanced R-CNN methods are multistage approaches, where objects are first detected and then classified, with an optional segmentation step. However, the detection performance of advanced R-CNN algorithms suffers in areas with noticeably varying object densities and scales. Advanced R-CNN architectures usually consist of a detector stage and multiple heads. In the detector stage, regions of interest (ROI) are proposed and filtered by a non-maximum suppression (NMS) layer. In an area with a high density of objects, a strictly adjusted NMS may lead to missed detections. In contrast, a loosely adjusted NMS can cause multiple overlapping detections for large objects. To address this challenge, we present an approach that improves the results of object detection methods in scenes with varying object densities. To this end, we add an encoder-decoder-based density estimation network to our detector network to obtain the locations of high-density areas. For these locations, an additional fine detection of objects is performed. To demonstrate the effectiveness of our approach, we evaluate our method on common crowd counting and object detection datasets.
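To make the NMS trade-off described above concrete, the following is a minimal greedy NMS sketch in NumPy. It is a generic textbook formulation, not the paper's detector or its density-estimation network, and the box format and function names are assumptions.

```python
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-12)

def greedy_nms(boxes: np.ndarray, scores: np.ndarray, iou_threshold: float) -> list:
    """Keep the highest-scoring box, suppress boxes overlapping it, repeat.

    A low iou_threshold suppresses aggressively (strict NMS: risk of missed
    detections in dense areas); a high iou_threshold keeps more overlapping
    boxes (loose NMS: risk of duplicate detections on large objects).
    """
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        remaining = order[1:]
        overlaps = np.array([iou(boxes[i], boxes[j]) for j in remaining])
        order = remaining[overlaps <= iou_threshold]
    return keep
```

Because a single global `iou_threshold` cannot suit both dense and sparse regions, a density map that localizes crowded areas and triggers an additional fine detection there is a natural complement to this filtering step.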
A near real-time airborne 3D scanning system has been successfully implemented at Fraunhofer IOSB. This remote sensing system consists of a small aircraft and a ground control station. The aircraft is equipped with the following components: a Digital Acquisition System (DAQ), Inertial Navigation and Global Positioning Systems (INS/GPS), an Airborne Laser Scanner (ALS), and four industrial cameras. Two of these cameras (one RGB and one near-infrared) are nadir-oriented, while the other two RGB cameras have an oblique orientation. The acquired LiDAR point clouds, images, and corresponding metadata are sent from the aircraft to the ground control station for further post-processing procedures, such as radiometric correction, boresight correction, and point cloud generation from images.
In this paper, the procedure for point cloud generation of urban scenes from images of the nadir RGB camera is described in detail. To produce dense point clouds, three main steps are necessary: generation of disparity maps, creation of depth maps, and calculation of world coordinates (X, Y, and Z).
To create disparity maps, two adjacent images (a stereo pair) were rectified. Afterwards, the PatchMatch Stereo (PMS) algorithm for 3D reconstruction was executed, since it is easy to implement and provides good results on the Middlebury Computer Vision dataset. Some steps were parallelized to optimize execution speed. Since depth is inversely proportional to disparity, depth maps were calculated from disparity maps. The height of scene elements, Z, was obtained by subtracting their depth from the camera height.
To calculate the remaining world coordinates X and Y, the back-projection equation and the camera intrinsic and extrinsic parameters were used. To validate the PMS algorithm, its resulting point cloud was compared with a LiDAR point cloud and a PhotoScan point cloud. The root mean square errors of both comparisons showed similar values.
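The depth-from-disparity relation and the back-projection step can be summarized in a short sketch. The following NumPy code assumes a rectified pinhole stereo setup with the focal length in pixels, the baseline in meters, and extrinsics in the convention X_cam = R·X_world + t; the PMS disparity computation itself and the exact parameterization used in the paper are not reproduced here.

```python
import numpy as np

def depth_from_disparity(disparity: np.ndarray, focal_px: float, baseline_m: float) -> np.ndarray:
    """Depth is inversely proportional to disparity: Z_cam = f * B / d."""
    depth = np.full(disparity.shape, np.nan, dtype=np.float64)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

def backproject(depth: np.ndarray, K: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Back-project every pixel into world coordinates using the intrinsic
    matrix K and the extrinsic pose (R, t); returns an (H, W, 3) array."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pixels = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N homogeneous pixels
    rays = np.linalg.inv(K) @ pixels                                      # camera-frame viewing rays
    points_cam = rays * depth.reshape(1, -1)                              # scale rays by per-pixel depth
    points_world = R.T @ (points_cam - t.reshape(3, 1))                   # camera frame -> world frame
    return points_world.T.reshape(h, w, 3)
```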
2.5D terrain model generation from a data stream provides high-quality data, which can be used for assisting situational awareness, conducting operations, and training in simulated environments. The objective of our research is to design and implement real-time texturing and visualization of a 2.5D terrain model from live LiDAR and RGB data streams in a high-performance remote sensing workflow.
To achieve real-time processing, the incoming data streams are evaluated in small patches. In addition, the calculation time per patch must be lower than the recording/sampling time to ensure real-time processing. Data meshing and the projection of images onto the mesh cannot be implemented in real time on an off-the-shelf CPU. However, most of these steps are highly vectorizable (e.g., the projection of each LiDAR point into the camera images), and modern graphics cards are highly specialized for such computations. Therefore, all computationally intensive steps were performed on the graphics card. Most of the steps for the terrain model generation have been implemented in both CUDA and OpenCL. We compare the two technologies regarding calculation times and memory management, and the faster technology was selected for each calculation step. Since model generation is faster than data acquisition, the implemented software runs in real time.
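As an example of the per-point projection step named above as highly vectorizable, the following NumPy sketch projects LiDAR points into a camera image. The actual CUDA/OpenCL kernels are not reproduced, and the function name and the extrinsics convention (X_cam = R·X_world + t) are assumptions; the point of the sketch is that each point is independent, which is why the computation maps naturally onto one GPU thread per point.

```python
import numpy as np

def project_points_to_image(points_world: np.ndarray, K: np.ndarray,
                            R: np.ndarray, t: np.ndarray,
                            image_shape: tuple) -> np.ndarray:
    """Project N x 3 LiDAR points into pixel coordinates of one camera image.

    Returns an N x 2 array of (u, v); points behind the camera or outside
    the image bounds are marked with NaN.
    """
    h, w = image_shape
    points_cam = R @ points_world.T + t.reshape(3, 1)   # world -> camera frame
    uvw = K @ points_cam                                 # pinhole projection (homogeneous)
    n = points_world.shape[0]
    uv = np.full((n, 2), np.nan)
    in_front = points_cam[2] > 1e-6                      # keep only points in front of the camera
    u = uvw[0, in_front] / uvw[2, in_front]
    v = uvw[1, in_front] / uvw[2, in_front]
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)     # keep only points inside the image
    idx = np.flatnonzero(in_front)[inside]
    uv[idx, 0] = u[inside]
    uv[idx, 1] = v[inside]
    return uv
```

The valid (u, v) coordinates can then be used to sample image colors for texturing the corresponding mesh vertices.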
Our approach has been embedded and tested in a real-time system consisting of a modern reconnaissance system connected to a ground control station via a radio link. During a flight, a human operator in the ground control station is able to observe the most recently generated textured terrain model and to zoom in on areas of interest.