This PDF file contains the front matter associated with SPIE Proceedings Volume 11393, including the Title Page, Copyright information, and Table of Contents.
Recent analyses have yielded non-parametric techniques that automatically compensate the defocusing effects in synthetic aperture radar (SAR) imagery that arise from atmospheric refraction of the radar waveforms. The temporal variation of the delay and bending arising from atmospheric refraction can be modeled using power-law spectra. The present study investigates the effects of applying different values of the perturbation amplitude of the refraction-induced fluctuations. This investigation reveals that these refraction-based autofocus techniques give sharp scene refocus when applied to measured Ku-band SAR image data for various values of the perturbation amplitude of the modeled power-law spectra.
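As an illustrative sketch (not taken from the paper), a refraction-induced phase-error history with a power-law spectrum of the kind the study sweeps can be generated by shaping white noise in the frequency domain. The spectral exponent (a Kolmogorov-like -8/3) and the amplitude values below are assumptions for demonstration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def power_law_phase(n_pulses, amplitude, exponent=-8.0 / 3.0):
    """Random phase-error history whose PSD follows f**exponent."""
    freqs = np.fft.rfftfreq(n_pulses, d=1.0)
    mag = np.zeros_like(freqs)
    mag[1:] = freqs[1:] ** (exponent / 2.0)            # amplitude spectrum
    spec = mag * np.exp(1j * 2 * np.pi * rng.random(freqs.size))
    phase = np.fft.irfft(spec, n=n_pulses)
    return amplitude * phase / np.std(phase)           # scale to requested rms

# Sweep the perturbation amplitude (radians rms), as the study does.
for amp in (0.1, 1.0, 5.0):
    err = power_law_phase(1024, amp)
    print(f"amplitude {amp}: rms phase error {np.std(err):.2f} rad")
```

An autofocus routine would then be applied to image data defocused by such a phase screen at each amplitude setting.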
In this paper we apply deep learning methods to improve image reconstruction from angularly sparse data in Computed Tomography (CT) and SAR imaging. In CT, image reconstruction from sparse views is desirable to reduce X-ray exposure for patients and to improve reconstruction time. It is likewise desirable to reduce the number of pulses used to reconstruct far-field objects in SAR imaging. Conventional algorithms must often incorporate a priori knowledge, while successful approaches such as total variation (TV) regularization are limited to signal-to-noise ratio ranges that cannot match the inconsistencies of practical application.[1] Instead, we propose to formulate the image reconstruction problem as an optimization problem. In this approach, a recurrent neural network (RNN) is used to unfold a given, fixed number of iterations of an iterative solver. We verify the performance of our method using numerical data and compare it with more traditional approaches.
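A minimal sketch of the "unrolled solver" idea, assuming a toy linear measurement model: a fixed number of gradient-descent iterations on the data-fit term is exactly the computation that an unrolled RNN would learn step sizes and regularizers for. The matrix, signal, and step size below are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, n_iters = 40, 20, 300              # "angularly sparse": few measurements

A = rng.standard_normal((m, n)) / np.sqrt(m)   # toy forward operator
x_true = rng.standard_normal(n)
y = A @ x_true                                 # noiseless measurements

step = 1.0 / np.linalg.norm(A, 2) ** 2         # safe fixed step size
x = np.zeros(n)
for _ in range(n_iters):                       # each pass = one "RNN cell"
    x = x - step * A.T @ (A @ x - y)           # gradient step on ||Ax - y||^2

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

In the learned version, the hand-set step size (and any added regularizer) becomes a trainable parameter of each unrolled iteration.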
The recent surge in the application of millimeter-wave sensing for public security has been accompanied by deployments of whole-body scanners at airports and stations. Some existing imaging apparatuses use a synthetic aperture, which requires a large number of measurements to meet the half-wavelength spacing requirement and thus puts a stringent requirement on hardware design. In this paper, we propose a novel sparse synthetic-aperture algorithm that applies a multistatic scheme to coprime measurements. It replaces every monostatic radar with a pair of separated transmitter and receiver along with phase corrections. Owing to the multiplexing of all transmitters and receivers, the multistatic scheme further reduces the number of measurements and the amount of data to about 0.3% of standard SAR. The efficacy of the proposed method is demonstrated using simulations and experiments.
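A sketch of the coprime-sampling idea behind the measurement reduction: two uniform subarrays with coprime spacings (M, N) jointly cover many difference lags of a dense half-wavelength grid using far fewer elements. The values of M, N, and the aperture size here are illustrative choices, not the paper's parameters:

```python
import numpy as np

M, N = 4, 5                                   # coprime integers
dense = np.arange(M * N)                      # dense lambda/2 grid positions
sub_a = np.arange(0, M * N, M)                # N elements at spacing M
sub_b = np.arange(0, M * N, N)                # M elements at spacing N
coprime = np.union1d(sub_a, sub_b)            # sparse coprime array

# Difference co-array: lags reachable by pairs of the sparse elements.
lags = np.unique(np.abs(coprime[:, None] - coprime[None, :]))

print(f"dense elements: {dense.size}, coprime elements: {coprime.size}")
print(f"distinct lags covered: {lags.size} of {M * N}")
```

The multistatic step then splits each remaining monostatic position into a transmitter-receiver pair with a phase correction, multiplexing elements across measurements for a further reduction.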
Synthetic Aperture Radar (SAR) images and volumes may be produced through multiple-antenna or multiple-pass scenarios. SAR imaging runs a gamut of techniques, from convolution of the received data with the modeled SAR aperture impulse response to correlation of the spherical wave function with the received wave. The former has the advantage that it produces an orthogonal point spread function in each dimension, allowing easy decomposition into an exponential-function model of the waves. The convolution, however, is only valid around the aim point for a spotlight image. Modifications presented here extend the applicable envelope of use. All potential procedures and their sequences of operations are presented and examined for valued aspects, including amenability to decomposition and its validity. The extension from 2D SAR to 3D SAR may also be afforded to the production of 2D SAR techniques ranging between the convolution and correlation and intervening techniques, with varying measures of success. As in other SAR data collections, the aperture may be subsampled, with imaging resolution and coverage implications. This range of use, including applicable aperture size, sampling rate, and squint, is explored for 2D and 3D scenarios. 2D and 3D impulse response functions, their accuracy, and its extent throughout the image or volume are calculated. Example images and SAR image volumes are presented.
3D scene reconstruction provides an improved representation from which features of critical objects or targets may be extracted. Both electro-optical (EO) and synthetic aperture radar (SAR) sensors have been exploited for this purpose, but each modality possesses issues resulting in different sources of reconstruction errors. Reconstruction from EO data is limited by frame rate and can be blurred by moving targets or optical distortions in the lens, which leads to errors in the 3D model. Meanwhile, SAR offers the opportunity to correct for some of these errors through its capacity for making range measurements, even under clouds or during nighttime, when EO data would not be available. Conversely, SAR imagery lacks the texture offered by optical images and is more sensitive to perspective, while moving targets can likewise result in reconstruction errors. This work aims at exploiting the strengths of both modalities to reconstruct 3D scenes from multi-sensor EO-SAR data. In particular, we consider the fusion of multi-pass Gotcha SAR data with modeled EO data for the particular scene. We propose a framework that fuses 2D image maps acquired from airborne EO data as well as airborne SAR, which leverages the range information of SAR and the object shape information of EO imagery. Starting from an initial 2D image of the scene, with each additional source of sensor data (EO or SAR), a 3D reconstruction is formed and iteratively improved. This approach allows for the potential to achieve robust and real-time 3D representations as a basis for 4D surveillance.
Over the past decades, there have been many approaches to synthetic aperture radar (SAR) automatic target recognition (ATR). ATR includes detection, classification, and identification of targets, scenes, and context. Recently, the explosion of deep learning methods has attracted numerous researchers to compare machine learning methods for SAR ATR. This paper reviews many approaches to SAR recognition and discerns the most promising ones. Using the Moving and Stationary Target Acquisition and Recognition (MSTAR) data set, comparative methods are available to evaluate the advances from the community. The paper reviews many of the recently published techniques to determine the state of the art in emerging concepts.
We present experiments to explore the use of deep neural network classification models for estimating the orientation of objects with linear structures from polarimetric radar data. We derive all radar data from two physical model aircraft and their corresponding computerized surface models. We make extensive use of synthetic prediction to help fully span the large parameter space, as is consistent with best practice. Synthetic predictions are based upon a linear quad-polarized (H: horizontal, V: vertical) Ka-band stepped-frequency measurement inverse synthetic aperture radar (ISAR) turntable system located inside the Air Force Research Laboratory (AFRL) Sensors Directorate's Indoor Range. The use of multiple polarimetric channels in a deep learning classification framework is shown to significantly help estimate orientation when the co-polarization channels differ significantly from each other. Future research directions are discussed.
Deep learning classifiers, particularly Convolutional Neural Networks (CNNs), have been demonstrated to be very effective in the area of SAR automatic target recognition (ATR). Despite this achievement, there is still a problem with proper classification of target objects from their speckled SAR imagery. In this paper, we address this technical challenge by implementing a two-step Hybrid Stacked Denoising Auto-Encoder (HSDAE) as an effective holistic denoiser and classifier model. Since there is no publicly available comprehensive real or synthetic SAR dataset of aerial vehicles, we primarily employed the IRIS electromagnetic modeling and simulation system to generate the required synthetic noisy SAR images from an array of test physics-based CAD models placed in different operating environments. Our generated test dataset contains synthetically generated SAR images of more than 300 aerial and ground vehicles. These images are systematically scanned from various azimuth and elevation angles as well as from different ranges and in different operating environments. They are regarded as the ground-truth object radiation backscattering reflectivity maps of the test objects. Furthermore, these images are modulated with appropriate additive and multiplicative noise to form speckled SAR images. Using a partial collection of ground-truth test vehicle images along with their corresponding speckled SAR images, we train a two-step concurrent denoising auto-encoder followed by a CNN model to classify vehicles. In the initial step, a denoising operation is performed and the test objects' features, such as shape, size, and orientation attributes, are recovered from any given input speckled SAR image. The output image from this denoising process is next passed as input to a CNN classifier for object recognition and classification. In this paper, we present the architecture of HSDAE and its variants and compare their performances. Our results indicate the proposed HSDAE achieves higher accuracy and repeatability for recognizing and classifying the target objects under different operating conditions.
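A sketch of the speckle corruption a denoiser of this kind is trained against: single-look intensity speckle is commonly modeled as unit-mean, exponentially distributed multiplicative noise over the clean reflectivity map, optionally with a small additive floor. The noise levels here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(2)

def speckle(clean, noise_floor=0.01):
    """Apply multiplicative speckle plus a small additive floor."""
    mult = rng.exponential(scale=1.0, size=clean.shape)   # unit-mean speckle
    return clean * mult + noise_floor * rng.standard_normal(clean.shape)

clean = np.ones((64, 64))            # stand-in reflectivity map
noisy = speckle(clean)
print("mean reflectivity preserved:", noisy.mean())   # ~1.0, speckle is unit-mean
```

Pairs of (clean, noisy) images produced this way form the training set for the denoising stage, whose output then feeds the CNN classifier.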
Machine learning techniques such as convolutional neural networks have progressed rapidly in the past few years, propelled by their rampant success in many areas. Convolutional networks work by transforming input images into compact representations that cluster well with the representations of related images. However, these representations are often not human-interpretable, which is unsatisfying. One field of research, image saliency, attempts to show where in an image a trained network is looking to obtain its information. With this method, well-trained networks will reveal a focus on the object matching the label and ignore the background or other objects. We train and test neural networks on synthetic SAR imagery and use image saliency techniques to investigate the areas of the image on which the network is focused. Doing so should reveal whether the network is using relevant information in the image, such as the shape of the target. We test various image saliency techniques and classification networks, then measure and comment on the resulting saliency results to gain insight into what the networks learn on simulated SAR data. This investigation is designed to serve as a tool for evaluating future SAR target recognition machine learning algorithms.
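One simple saliency technique in the family the study surveys is occlusion mapping: slide a masking patch over the image and record how much the target-class score drops; large drops mark regions the classifier relies on. The "network" below is a toy fixed linear scorer and the target layout is invented, purely to make the mechanics concrete:

```python
import numpy as np

rng = np.random.default_rng(5)
H, W, patch = 16, 16, 4
template = np.zeros((H, W))
template[6:10, 6:10] = 1.0                   # toy "target" region

def score(img):
    """Stand-in for a trained network's class logit."""
    return float((img * template).sum())

img = template + 0.1 * rng.standard_normal((H, W))
base = score(img)
sal = np.zeros((H, W))
for i in range(0, H, patch):
    for j in range(0, W, patch):
        occluded = img.copy()
        occluded[i:i + patch, j:j + patch] = 0.0   # mask one patch
        sal[i:i + patch, j:j + patch] = base - score(occluded)

print("saliency peak at:", np.unravel_index(sal.argmax(), sal.shape))
```

For a well-trained classifier, the resulting map should concentrate on the target shape and ignore background clutter, which is exactly the diagnostic the investigation applies to SAR recognition networks.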
Synthetic Aperture Radar (SAR) technology offers innovative remote sensing opportunities for surveillance applications. However, for Automatic Target Recognition (ATR) of aerial and ground vehicles from SAR data, there is a need for large-scale imagery of the target objects of interest (TOIs) from different perspective viewing angles, which is rarely available publicly. Such large datasets can be very instrumental for the initial training of deep learning classifiers as well as for the achievement of improved transfer learning. In this paper, we address this shortcoming by introducing the IRIS Electromagnetic (EM) modeling and simulation system for virtual staging and automatic generation of realistic synthetic (i.e., simulated) multi-perspective SAR imagery of test vehicles for the purpose of training ATR classifiers. Primarily, we prepared a collection of 250 physics-based CAD models containing different aerial and ground vehicle objects. A four-step process was implemented. In the first step, an optimized multi-path ray-tracing technique was developed for obtaining the synthetic EM radiation backscattering reflectivity patterns of the test objects. In the second step, we furnished the synthetically generated SAR images with different backgrounds (e.g., ground, grass, and asphalt) by employing appropriate noise modulation transfer functions. In the third step, we introduced a method for projecting directional test-object shadows from eight different perspective viewings. In the final step, the surface regions producing high-strength radiation backscattering were highlighted to further enhance the realism of the synthetically generated SAR images. To test and verify the validity and dependability of this proposed approach, we compared our simulated SAR imagery results against a number of comparable military and commercial vehicles from the MSTAR dataset.
Synthetic Aperture Radar (SAR) is a critical sensing technology that is notably independent of the sensor-to-target distance and has numerous cross-cutting applications, e.g., target recognition, mapping, surveillance, oceanography, geology, forestry (biomass, deforestation), disaster monitoring (volcano eruptions, oil spills, flooding), and infrastructure tracking (urban growth, structure mapping). SAR uses a high-power antenna to illuminate target locations with electromagnetic radiation, e.g., 10 GHz radio waves, and the illuminated-surface backscatter is sensed by the antenna and then used to generate images of structures. Real SAR data is difficult and costly to produce and, for research, lacks a reliable source of ground truth. Few SAR software simulators are available, and even fewer are open source and can be validated. This article proposes an open-source SAR simulator to compute phase histories for arbitrary 3D scenes using newly available ray-tracing hardware made available commercially through NVIDIA's RTX graphics card series. The OptiX GPU ray-tracing library for NVIDIA GPUs is used to calculate SAR phase histories at unprecedented computational speeds. The simulation results are validated against existing SAR simulation code for spotlight SAR illumination of point targets. The computational performance of this approach provides orders-of-magnitude speed increases over CPU simulation. An additional order of magnitude of acceleration is obtained when simulations are run on RTX GPUs, which include hardware specifically designed to accelerate OptiX ray tracing. The article describes the OptiX simulator structure, processing framework, and calculations that afford execution on a massively parallel GPU computation device. The shortcoming of the OptiX library's restriction to single-precision float representation is discussed, and modifications of sensitive calculations are proposed to reduce truncation error, thereby increasing the simulation accuracy under this constraint.
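A minimal CPU sketch of the point-target phase-history calculation that the simulator is validated against: for each pulse position and frequency sample, the ideal return from a point scatterer is exp(-j 4 pi f R / c) with R the antenna-to-target range. The geometry and band parameters below are illustrative, not the article's:

```python
import numpy as np

c = 299792458.0                                   # speed of light (m/s)
freqs = np.linspace(9.5e9, 10.5e9, 128)           # 1 GHz band around 10 GHz
angles = np.linspace(-0.02, 0.02, 64)             # spotlight aperture (rad)
R0 = 10000.0                                      # standoff range (m)
target = np.array([5.0, 3.0])                     # point target in the scene

phase_history = np.empty((angles.size, freqs.size), dtype=complex)
for i, th in enumerate(angles):
    antenna = R0 * np.array([np.cos(th), np.sin(th)])
    R = np.linalg.norm(antenna - target)          # one-way range per pulse
    phase_history[i] = np.exp(-4j * np.pi * freqs * R / c)

print("phase history shape:", phase_history.shape)
```

The GPU simulator replaces the analytic range with per-ray path lengths returned by OptiX traversal of the 3D scene, summing contributions over all ray hits.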
In support of airborne radar detection missions that rely on Synthetic Aperture Radar (SAR) imagery, there is a need for extensive sets of training data. Due to a paucity of measured data from some targets of interest, there is sometimes a need to train on only simulated SAR data and yet detect live targets with high confidence during testing. In support of this mission, many researchers have applied a variety of mathematical techniques to simulate data sets. These techniques range from template matching and simpler statistical methods to deep neural networks (DNNs). They demonstrate that with proper pre-processing, some of these methods can achieve target detection with apparently high confidence. However, none of these papers provides an exact measurement of the differences or similarities between the simulated and measured data that would serve as a good predictor of the margins between decision boundaries. Thus, this paper develops a combination of pre-processing methods and standard metrics that enable the assessment of simulated data quality independent of which target recognition algorithm will be utilized. The results show that for some pre-processing methods, the differences between simulated and measured data do not always lend themselves to the desired ability to train on simulated SAR imagery and test on measured SAR imagery.
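A sketch of one pre-processing-plus-standard-metric pairing of the kind the paper motivates (the specific choices here are illustrative assumptions, not the paper's): dB-scale and normalize each image chip, then score simulated against measured chips with a normalized cross-correlation. The chips below are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)

def preprocess(chip, floor_db=-40.0):
    """dB scaling with a noise floor, then zero-mean unit-variance."""
    db = 10.0 * np.log10(np.maximum(chip, 1e-12))
    db = np.clip(db, floor_db, None)
    return (db - db.mean()) / db.std()

def ncc(a, b):
    """Normalized cross-correlation between two pre-processed chips."""
    return float(np.mean(preprocess(a) * preprocess(b)))

measured = rng.exponential(size=(32, 32))               # stand-in measured chip
simulated = measured * rng.exponential(size=(32, 32))   # imperfect simulation
print("self NCC:", ncc(measured, measured))             # 1.0 by construction
print("sim  NCC:", ncc(measured, simulated))            # < 1.0: sim/meas gap
```

Tracking such scores across targets, independent of any particular ATR algorithm, is the kind of algorithm-agnostic quality assessment the abstract describes.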
This study applies a non-parametric estimate of the Bayes Error Rate (BER) to current Air Force problems related to target classification. Whether they be neural networks, autoencoders, or other architectures, classifiers are commonly assessed through confusion matrices and associated statistics, or through visualizations of feature spaces like t-SNE plots. However, these methods depend on the test data used to assess the performance of the network, not the robustness of the classifier itself. This research incorporates a different statistic that estimates the BER, the probability of misclassification given some data, to serve as an upper bound for potential classifier performance. This estimate leverages a Friedman-Rafsky test statistic: the number of cross-label edges in a minimum spanning tree (MST) through points in the feature space. The first part of this study examines the behavior of the BER estimate over a general learning process, such as different epochs of the training process in a neural network. The second part examines whether certain factors affect the separability of synthetic aperture radar (SAR) images of targets of interest. Because it is often difficult and expensive to survey real targets and generate SAR images, 3-D CAD models are frequently used to generate synthetic SAR images. Given that many resources are devoted to perfecting these models, this study applies the BER estimate to examine whether minute changes to CAD models affect separability in the image domain. The results seem to indicate that, if the topology of the target is maintained in the CAD domain, low-fidelity versions of targets (with 25% of the number of faces of highly accurate models) exhibit separability and classification accuracy identical to those of their high-fidelity counterparts. The BER estimate also shows promising applications in other domains, serving as a way to describe the underlying structure of feature spaces intuitively yet effectively.
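The Friedman-Rafsky statistic described above can be sketched directly: build a minimum spanning tree over the pooled feature vectors and count edges whose endpoints carry different class labels; few cross-label edges indicate well-separated classes and hence a low BER estimate. The toy two-class point cloud below is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

def mst_cross_edges(X, labels):
    """Prim's MST on Euclidean distances; count cross-label edges."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    nearest = d2[0].copy()              # best squared dist from tree to node
    parent = np.zeros(n, dtype=int)     # tree node achieving that distance
    cross = 0
    for _ in range(n - 1):
        j = int(np.argmin(np.where(in_tree, np.inf, nearest)))
        cross += int(labels[j] != labels[parent[j]])     # MST edge (parent[j], j)
        in_tree[j] = True
        closer = d2[j] < nearest
        nearest[closer] = d2[j][closer]
        parent[closer] = j
    return cross

# Two well-separated classes: expect very few cross-label edges.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(10, 1, (50, 2))])
y = np.repeat([0, 1], 50)
print("cross-label MST edges:", mst_cross_edges(X, y))
```

Applying the same count to feature vectors extracted from high- and low-fidelity CAD renderings is how separability can be compared across model fidelities.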
Usability of automatic target recognition (ATR) systems requires performance evaluation with measures of performance (MOPs) and measures of effectiveness (MOEs). MOEs support an external proficiency review, while there is also a need for internal MOEs, which are a form of self-proficiency assessment. Self-proficiency of people is well known for job selection, performance, and productivity. Likewise, there is a need for self-proficiency determination of emerging data analytics techniques, such as those from artificial intelligence, machine learning (ML), signal processing, and data fusion. The coordination of machines with humans for autonomy requires both external and internal proficiency assessment. In this paper, a discussion of self-proficiency assessment for ATR analysis is provided to enhance human and machine awareness and performance assessment. An example comes from the Moving and Stationary Target Acquisition and Recognition (MSTAR) data set with human-machine proficiency analysis.
Challenge problems consideration for the 7 habits of highly effective ATRs