This PDF file contains the front matter associated with SPIE Proceedings Volume 10650, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
A mobile tracking system for the acquisition and tracking of small, long-range targets is developed. The system incorporates both visible and infrared imaging assets, along with the associated optics, video distribution, and recording elements. In order to accurately track long-range targets, a robust mislevel calibration is required to reduce pointing error, as part of the larger overall system error budget that includes the pedestal’s total angular error. A mislevel calibration process is presented, including a mechanically based coarse level and a software-based fine level. Determining pedestal orientation to true north is discussed, and the processing of coordinate data is reviewed for a representative tracking system. A method for test and verification of the system calibrations is described at both the subsystem and system levels.
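To make the correction concrete, the sketch below (a minimal Python illustration, not the authors' implementation) applies small pedestal-tilt angles, as would be estimated by a fine-level calibration, to a commanded azimuth/elevation by rotating the pointing vector. The tilt axes and sign conventions here are assumptions.

```python
import numpy as np

def azel_to_vec(az, el):
    """Unit pointing vector in local East-North-Up from azimuth/elevation (radians)."""
    return np.array([np.cos(el) * np.sin(az),
                     np.cos(el) * np.cos(az),
                     np.sin(el)])

def vec_to_azel(v):
    az = np.arctan2(v[0], v[1]) % (2 * np.pi)
    el = np.arcsin(np.clip(v[2], -1.0, 1.0))
    return az, el

def mislevel_correct(az_cmd, el_cmd, tilt_n, tilt_e):
    """Map a commanded az/el through an assumed pedestal-tilt model.

    tilt_n, tilt_e: small pedestal tilt angles (radians) about the local
    north and east axes, as estimated by the fine-level calibration.
    """
    # Rotation about the east axis, then rotation about the north axis.
    ce, se = np.cos(tilt_e), np.sin(tilt_e)
    cn, sn = np.cos(tilt_n), np.sin(tilt_n)
    R_e = np.array([[1, 0, 0], [0, ce, -se], [0, se, ce]])
    R_n = np.array([[cn, 0, sn], [0, 1, 0], [-sn, 0, cn]])
    v = azel_to_vec(az_cmd, el_cmd)
    return vec_to_azel(R_n @ R_e @ v)

# Example: 0.5 mrad of tilt about each axis shifts a low-elevation command.
az, el = mislevel_correct(np.radians(45.0), np.radians(2.0), 0.5e-3, 0.5e-3)
print(np.degrees(az), np.degrees(el))
```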
This paper compares efficiency measurements to predictions for a digital-holography system operating in the off-axis image plane recording geometry. We use a highly coherent 532 nm laser source, an extended Spectralon object, and an Si focal-plane array to perform digital-holographic detection, which provides access to an estimate of the complex-optical field and is of utility to long-range imaging applications. In the experiments, digital-holographic detection results from the interference of a signal beam with a reference beam. The signal beam was created from the active illumination of the extended Spectralon object, and the reference beam from a local oscillator split off from the master-oscillator 532 nm laser source. To compare efficiency measurements to predictions, an expression was developed for the signal-to-noise ratio, which contains many multiplicative terms with respect to total-system efficiency. In the best case, the measured total efficiency was 15.2% ± 5.8% as compared to the predicted 16.4%. The results show that the polarization and fringe-integration efficiency terms play the largest role in the total-system efficiency.
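The multiplicative structure of the total-system efficiency is easy to illustrate. The sketch below multiplies a set of per-term efficiencies and propagates their relative uncertainties in quadrature; the term names follow the abstract's description, but the numerical values are placeholders, not the paper's measurements.

```python
import numpy as np

# Illustrative multiplicative efficiency budget for off-axis image-plane
# digital holography. Values below are placeholders, not the paper's numbers.
efficiencies = {
    "quantum":            0.60,  # detector quantum efficiency
    "polarization":       0.50,  # single-polarization detection of speckle
    "fringe_integration": 0.65,  # fringe sampling across finite pixels
    "transmission":       0.90,  # optics transmission
    "mixing":             0.90,  # signal/reference overlap
}

total = np.prod(list(efficiencies.values()))
print(f"predicted total efficiency: {total:.1%}")

# Relative uncertainties combine in quadrature for a product of terms.
rel_unc = {"quantum": 0.05, "polarization": 0.10, "fringe_integration": 0.10,
           "transmission": 0.03, "mixing": 0.05}
total_rel = np.sqrt(sum(u**2 for u in rel_unc.values()))
print(f"propagated relative uncertainty: {total_rel:.1%}")
```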
Many atmospheric turbulence deblurring techniques estimate an inverse filter by making assumptions that constrain the mathematical spaces in which an unknown signal and convolving function must reside. Restoration of scene content after imaging through terrestrial imaging paths is an area of active experimentation and development for both real-time feature extraction and post-process data reduction. Static scenes present opportunities for algorithms that exploit the temporal diversity of the atmospheric path, since motion of scene content at the image plane over multiple frames may be attributed to a randomly varying blur kernel. This allows for the estimation of inverse filters that can be used to deblur the image. However, when objects in the scene move relative to one another across multiple image frames, they complicate an already computationally demanding process. Techniques to compensate for the motion of one or more features can be used, but if the image fidelity is insufficient to detect a moving feature in the first place, or the number of features (e.g., fragmentation from an impact or explosion) is very large, motion compensation techniques may break down or become impractical. In this paper we explore using multiple, synchronized optical systems with sufficient spatial separation to provide the optical-path turbulence diversity required by many deblurring algorithms. This reduces or eliminates many constraints on object motion when performing reconstructions. We present deblurred imagery examples from an experimental setup that leverages spatially diverse optical-path turbulence and compare the results with the traditional approach of utilizing single-path temporal diversity when performing image reconstructions. Our results demonstrate that: (1) useful deblurring is possible with a single “set” of images simultaneously collected through diverse optical paths, (2) a combination of temporal and spatial diversity of image collection can be a useful “hybrid” approach, and (3) opportunistic weighting of concurrent frames according to image quality can enhance the deblurring results.
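Item (3), opportunistic weighting by image quality, can be sketched as follows. The fragment uses mean gradient energy as a stand-in sharpness metric and forms a quality-weighted average of co-registered frames from the separate apertures; the actual metric and weighting in the paper may differ.

```python
import numpy as np

def sharpness(frame):
    """Stand-in image-quality metric: mean gradient energy of the frame."""
    gy, gx = np.gradient(frame.astype(float))
    return np.mean(gx**2 + gy**2)

def weighted_combine(frames):
    """Combine simultaneously collected frames from spatially separated
    apertures, weighting each by its measured sharpness."""
    w = np.array([sharpness(f) for f in frames])
    w /= w.sum()
    return np.tensordot(w, np.stack([f.astype(float) for f in frames]), axes=1)

# Usage: frames is a list of co-registered images of the same scene, one
# per optical path; the weighted result feeds the deblurring stage.
frames = [np.random.rand(64, 64) for _ in range(4)]  # placeholder data
combined = weighted_combine(frames)
```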
This paper investigates a new computational method for reconstruction and analysis of complex 3D scenes. In the presence of targets, Lidar waveforms usually consist of a series of peaks, whose positions and amplitudes depend on the distances of the targets and on their reflectivities, respectively. Inferring the number of surfaces or peaks, as well as their geometric and colorimetric properties, becomes extremely difficult when the number of detected photons is low (e.g., short acquisition time) and the ambient illumination is high. In this work, we adopt a Bayesian approach to account for the intrinsic spatial organization of natural scenes and regularise the 3D reconstruction problem. The proposed model is combined with an efficient Markov chain Monte Carlo (MCMC) method to reconstruct the 3D scene, while providing measures of uncertainty (e.g., about target range and reflectivity) which can be used for subsequent decision-making processes, such as object detection and recognition. Despite being an MCMC method, the proposed approach presents a competitive computational cost when compared to state-of-the-art optimization-based reconstruction methods, while being more robust to the lack of detected photons (empty or non-observed pixels). Moreover, it includes a multi-scale strategy that allows quick recovery of coarse approximations of the 3D structures, which is often sufficient for object detection/recognition. We assess the performance of our approach via extensive experiments conducted with real, long-range (hundreds of meters) single-photon Lidar data. The results clearly demonstrate its benefits for inferring complex scene content from extremely sparse photon counts.
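For contrast with the proposed Bayesian approach, a common baseline per-pixel estimator is sketched below: cross-correlate the sparse photon-count histogram with the instrumental response function and take the peak. This is a matched-filter stand-in, not the paper's MCMC method; the IRF shape and count levels are invented for the toy example.

```python
import numpy as np

def estimate_depth(hist, irf, bin_width_m):
    """Baseline per-pixel depth estimate from a sparse photon-count
    histogram: cross-correlate with the instrumental response function
    (IRF) and take the peak. The paper's Bayesian/MCMC method replaces
    this with a full posterior over surface positions and reflectivities."""
    corr = np.correlate(hist.astype(float), irf.astype(float), mode="full")
    lag = np.argmax(corr) - (len(irf) - 1)   # shift where the IRF best aligns
    return lag * bin_width_m

# Toy example: a return starting at bin 120 over Poisson background counts.
rng = np.random.default_rng(0)
irf = np.exp(-0.5 * ((np.arange(21) - 10) / 3.0) ** 2)   # Gaussian IRF
hist = rng.poisson(0.05, size=400).astype(float)         # ambient photons
hist[120:141] += rng.poisson(2.0 * irf)                  # signal photons
print(estimate_depth(hist, irf, bin_width_m=0.15))
```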
This paper uses wave-optics simulations to explore the validity of signal-to-noise models for digital-holographic detection. In practice, digital-holographic detection provides access to an estimate of the complex-optical field, which is of utility to long-range imaging applications. The analysis starts with an overview of the various recording geometries used within the open literature (i.e., the on-axis phase shifting recording geometry, the off-axis pupil plane recording geometry, and the off-axis image plane recording geometry). It then provides an overview of the closed-form expressions for the signal-to-noise ratios used for the various recording geometries of interest. This overview contains an explanation of the assumptions used to write the closed-form expressions in terms of the mean number of photoelectrons associated with both the signal and reference beams. Next, the analysis formulates an illustrative example with weak, moderately deep, and deep turbulence conditions. This illustrative example provides the grounds on which to rewrite the closed-form expressions in terms of the illuminator power. It also enables a validation study using wave-optics simulations. The results show that the signal-to-noise models are, in general, accurate with respect to the percentage error associated with a performance metric referred to as the field-estimated Strehl ratio.
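A hedged sketch of the kind of closed-form expression involved is given below: a generic shot-noise-limited SNR written in terms of mean photoelectron counts per pixel. The exact expressions in the paper differ by recording geometry; this functional form is an assumption chosen only to show the limiting behavior for a strong reference beam.

```python
def dh_snr(m_sig, m_ref, m_back=0.0, read_noise=0.0):
    """Generic shot-noise-limited SNR for digital-holographic detection,
    written in terms of mean photoelectron counts per pixel. This follows
    the general form found in the open literature; the paper's closed-form
    expressions for each recording geometry differ in detail.

    m_sig      : mean signal photoelectrons per pixel
    m_ref      : mean reference (local-oscillator) photoelectrons per pixel
    m_back     : mean background photoelectrons per pixel
    read_noise : rms read noise in electrons
    """
    # Reference shot noise dominates for a strong local oscillator, so the
    # SNR tends toward m_sig as m_ref grows.
    return m_sig / (1.0 + (m_sig + m_back + read_noise**2) / m_ref)

# A strong reference beam drives the SNR toward the signal count itself:
print(dh_snr(m_sig=10.0, m_ref=1e4, m_back=1.0, read_noise=100.0))
```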
Propagation of optical waves in the atmosphere is influenced by refractive-index spatial inhomogeneities resulting from the complicated dynamics of air masses. Both large-scale deviations of refractive index (refractivity) and small-scale random refractive-index inhomogeneities (turbulence) can significantly impact the performance of atmospheric remote sensing systems, including both imaging and laser-based electro-optics systems. Typically, analyses of atmospheric sensing systems account only for turbulence effects. This simplification is justified only for operation at relatively short distances and in the absence of strong refractivity layers. In this paper we discuss more general propagation scenarios for which atmospheric refraction can play an important role and could significantly alter the major laser beam and image characteristics. Atmospheric refractivity is described by a combination of the standard MUSA76 and inverse-temperature-layer models, and atmospheric turbulence effects are accounted for using the classical Kolmogorov turbulence framework with the HV57 model for the height profile of the refractive index structure parameter. The numerical analysis demonstrated that both refractivity and turbulence can significantly impact both laser beam propagation and image formation and lead to noticeable anisotropic effects.
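The HV57 profile mentioned above is a standard closed form, reproduced in the sketch below with the usual 5/7 parameter values (rms upper-atmosphere wind of 21 m/s and ground-level Cn² of 1.7e-14 m^(-2/3)).

```python
import numpy as np

def hv57_cn2(h, w=21.0, A=1.7e-14):
    """Hufnagel-Valley 5/7 refractive-index structure parameter profile.

    h : altitude above ground (m)
    w : rms upper-atmosphere wind speed (21 m/s for the 5/7 profile)
    A : ground-level Cn^2 (1.7e-14 m^(-2/3) for the 5/7 profile)
    """
    return (0.00594 * (w / 27.0) ** 2 * (1e-5 * h) ** 10 * np.exp(-h / 1000.0)
            + 2.7e-16 * np.exp(-h / 1500.0)
            + A * np.exp(-h / 100.0))

# Cn^2 near the ground vs. at 10 km altitude:
print(hv57_cn2(np.array([10.0, 10000.0])))
```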
In recent times, there has been growing interest in measuring atmospheric turbulence over long paths. Irradiance-based techniques, such as scintillometry, suffer from saturation, and hence commercial scintillometers have limited operational ranges. In the present work, a method is presented to estimate path-weighted Cn² from the turbulence-induced random differential motion of extended features in time-lapse imagery of a distant target. Since the method is phase based, it can be applied to longer paths. The method has the added advantage of remotely sensing turbulence without the need to deploy sensors at the target location. The imaging approach uses a derived set of path weighting functions that drop to zero at both ends of the imaging path, with the peak location depending on the size of the imaging aperture and on the relative sizes and separations of the features whose motions are being tracked. For sub-aperture-sized features and separations, the peaks of the weighting functions are closer to the target end of the path. For bigger features and separations, the peaks are closer to the camera end. Using different-sized features separated by different amounts, a rich set of weighting functions can be obtained. These weighting functions can be linearly combined to produce a desired weighting function, such as that of a scintillometer or that of r0. The time-lapse measurements can thus mimic the measurements of a scintillometer or any other instrument. The method is applied to both simulated and experimentally obtained imagery, and some validation results against a scintillometer are shown as well.
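The linear-combination step can be sketched directly. In the fragment below, the bank of weighting functions is illustrative (bump functions with shifted peaks standing in for the functions derived from the imaging geometry), the target is a scintillometer-like symmetric weighting, and a least-squares solve yields the combination coefficients.

```python
import numpy as np

# Path coordinate from camera (0) to target (1), and a bank of illustrative
# weighting functions whose peaks shift with feature size/separation (the
# actual functions are derived from the imaging geometry in the paper).
s = np.linspace(0.0, 1.0, 200)
peaks = np.linspace(0.2, 0.8, 8)
W = np.stack([np.sin(np.pi * s) * np.exp(-((s - p) / 0.15) ** 2) for p in peaks])

# Target: a scintillometer-like symmetric weighting ~ [s(1-s)]^(5/6).
target = (s * (1.0 - s)) ** (5.0 / 6.0)

# Least-squares coefficients so that coeffs @ W approximates the target.
coeffs, *_ = np.linalg.lstsq(W.T, target, rcond=None)
print("max fit error:", np.max(np.abs(coeffs @ W - target)))
```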
Recently developed coherent-imaging algorithms using Model-Based Iterative Reconstruction (MBIR) are robust to noise, speckle, and phase errors. These MBIR algorithms produce useful images with less signal, which allows imaging distances to be extended. So far, MBIR algorithms have only incorporated simple image models. Complex scenes, on the other hand, require more advanced image models. In this work, we develop an MBIR algorithm for image reconstruction in the presence of phase errors which incorporates advanced image models. The proposed algorithm enables optically coherent imaging of complex scenes at extended ranges.
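A generic MBIR loop has the structure sketched below: alternate a data-fit gradient step with a prior (denoising) step. This skeleton is an assumption for illustration only; the paper's algorithm additionally estimates the phase errors and substitutes an advanced image model for the simple smoothing prior used here.

```python
import numpy as np

def mbir(y, A, AT, denoise, n_iter=50, step=0.5):
    """Skeleton of a model-based iterative reconstruction loop: a gradient
    step on the data-fit term followed by a prior (denoising) step. In the
    paper, the forward model also includes the unknown phase errors and the
    prior is an advanced image model; this sketch omits both.

    y       : measured data
    A, AT   : forward operator and its adjoint (functions)
    denoise : prior step, e.g. a denoiser acting as a plug-and-play prior
    """
    x = AT(y)
    for _ in range(n_iter):
        x = x - step * AT(A(x) - y)   # data-fit gradient step
        x = denoise(x)                # enforce the image model
    return x

# Toy usage with an identity forward model and a smoothing prior.
smooth = lambda x: 0.25 * (np.roll(x, 1) + np.roll(x, -1)) + 0.5 * x
x_hat = mbir(np.random.rand(128), lambda x: x, lambda x: x, smooth)
```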
Long-range video surveillance is usually limited by the wavefront aberrations caused by atmospheric turbulence, rather than by the quality of the imaging optics or sensor. These aberrations can be mitigated optically by adaptive optics, or corrected post-detection by digital video processing. Video processing is preferred if the quality of the enhancement is acceptable, because the hardware is less expensive and has lower size, weight and power (SWaP). Several competing video processing solutions may be employed: speckle imaging with bispectrum processing, lucky imaging, geometric correction, and blind deconvolution. Speckle imaging was originally developed for astronomy. It has subsequently been adapted for the more challenging problem of low-altitude, slant-path imaging, where the atmosphere is denser and more turbulent. This paper considers a bispectrum-based video processing solution, called ATCOM, which was originally implemented on an i7 CPU and accelerated using a GPU by EM Photonics Ltd. The design has since been adapted in a joint venture with RFEL Ltd to produce a low-SWaP implementation based around Xilinx’s Zynq 7045 all-programmable system-on-a-chip (SoC). This system is called ATACAMA. Bispectrum processing is computationally expensive and, for both ATCOM and ATACAMA, a sub-region of the image must be processed to achieve operation at standard video frame rates. This paper considers how the design may be optimized to increase the size of this region, while maintaining high performance. Finally, use of Xilinx’s next-generation UltraScale+ multiprocessor SoC (MPSoC), which has an embedded Mali-400 GPU as well as an ARM CPU, is explored to further improve functionality.
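The core quantity in bispectrum processing is standard and easy to state in code. The sketch below computes the bispectrum of a 1-D signal, B(u, v) = X(u) X(v) X*(u + v), and verifies its translation invariance, which is the property that lets the method average object information over tilt-shifted turbulent frames. This illustrates the mathematics only, not the ATCOM/ATACAMA implementation.

```python
import numpy as np

def bispectrum_1d(x):
    """Bispectrum of a 1-D signal: B(u, v) = X(u) X(v) X*(u + v).
    It is insensitive to translation, which is what lets bispectrum
    (speckle) processing average object phase over many turbulent frames."""
    X = np.fft.fft(x)
    n = len(X)
    u = np.arange(n)
    # Outer product over (u, v), with the conjugate term at index (u+v) mod n.
    return X[u, None] * X[None, u] * np.conj(X[(u[:, None] + u[None, :]) % n])

# A shifted copy of a signal has the same bispectrum:
x = np.random.rand(32)
b1, b2 = bispectrum_1d(x), bispectrum_1d(np.roll(x, 5))
print(np.allclose(b1, b2))
```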
We report on a technique for reducing the image degradation introduced by viewing through deep turbulence. The approach uses a variable aperture that was designed to maintain the telescope’s theoretical resolving power. The technique combines the variable-aperture sensor with post-processing to form a turbulence-restored image. Local wavefront tilt is corrected using local image registration. Lucky-look processing performed in the frequency domain is used to combine the best aspects of each image in a sequence of frames to form the final image product. The approach was demonstrated on imagery of targets of opportunity on the Boston skyline observed through a 55-mile, nearly horizontal path from Pack Monadnock in southern New Hampshire. Quantitative assessment of image quality is based on the MTF, which is estimated from edges within the images. This is performed for imagery acquired with and without the variable aperture, and the effectiveness of the approach is evaluated by comparing the results. In most cases, the reduced aperture is found to improve performance significantly relative to the full aperture.
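One plausible reading of frequency-domain lucky-look processing is sketched below: weight each registered frame's spectrum at every spatial frequency by its relative magnitude, so that frames which best preserved a given frequency dominate there. The weighting rule and its exponent are assumptions, not necessarily the authors' scheme.

```python
import numpy as np

def lucky_fusion(frames, power=4):
    """Frequency-domain 'lucky look' fusion: after registration, weight each
    frame's spectrum at every spatial frequency by its relative magnitude.
    The weighting exponent is a tuning choice; the paper's scheme may differ."""
    specs = np.stack([np.fft.fft2(f.astype(float)) for f in frames])
    mag = np.abs(specs) ** power
    w = mag / (mag.sum(axis=0, keepdims=True) + 1e-12)
    fused = (w * specs).sum(axis=0)
    return np.real(np.fft.ifft2(fused))

# frames: a registered (tilt-corrected) image sequence of the same scene.
frames = [np.random.rand(64, 64) for _ in range(16)]  # placeholder data
result = lucky_fusion(frames)
```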
In recent work, mitigation of image distortion caused by modified von Karman-type (MVKS) phase turbulence has been investigated using chaos waves generated via acousto-optic feedback in a Bragg cell. The chaos wave transferred to an optical carrier is either transmitted over an image-bearing transparency (with the image reconstructed thereafter using appropriate lensing), or alternatively the image (both spatial and dynamic) is used to modulate a chaos wave which is then propagated over the turbulent layer. These investigations have shown that the inherent properties of chaos waves enable reduction or mitigation of the image distortion under various turbulent conditions. In the work presented here, mitigation of image distortion is explored using propagation through an atmospheric turbulent layer characterized by gamma-gamma type intensity fluctuations. The problem is analyzed under standard weak, moderate, and strong turbulence conditions on the basis of the corresponding structure parameters. The relevant probability density functions are generated using small- and large-scale eddies (the α and β numbers) incorporated into the turbulence model. Stationary images are transmitted under non-chaotic and chaotic conditions, and the corresponding distortions in the received images are measured using the conventional metric of bit error rates (BERs). The system performances under non-chaotic and chaotic transmissions are compared with the intent to establish that packaging a signal within a chaos wave offers a degree of distortion mitigation.
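The gamma-gamma irradiance model referenced above is standard: normalized irradiance is the product of two independent unit-mean gamma variates with shape parameters α (large-scale eddies) and β (small-scale eddies). The sketch below draws samples and checks the scintillation index 1/α + 1/β + 1/(αβ); the example α, β values are illustrative, not the paper's.

```python
import numpy as np

def gamma_gamma_samples(alpha, beta, n, rng=None):
    """Draw normalized irradiance samples from the gamma-gamma model:
    I = X * Y, with X and Y independent unit-mean gamma variates of shape
    alpha (large-scale eddies) and beta (small-scale eddies).
    Scintillation index: sigma_I^2 = 1/alpha + 1/beta + 1/(alpha*beta)."""
    rng = rng or np.random.default_rng()
    x = rng.gamma(shape=alpha, scale=1.0 / alpha, size=n)
    y = rng.gamma(shape=beta, scale=1.0 / beta, size=n)
    return x * y

# Moderate turbulence, e.g. alpha = 4, beta = 2:
I = gamma_gamma_samples(4.0, 2.0, 100000)
print(I.mean(), I.var())  # mean ~1, variance ~ 1/4 + 1/2 + 1/8
```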
Modeling, Metrics, and Tools: Joint Session with conferences 10625 and 10650
The design of imaging systems involves navigating a complex trade space. As a result, many imaging systems employ focal plane arrays with a detector pitch that is insufficient to meet the Nyquist sampling criterion under diffraction-limited imaging conditions. This undersampling may result in aliasing artifacts and prevent the imaging system from achieving the full resolution afforded by the optics. Another potential source of image degradation, especially for long-range imaging, is atmospheric optical turbulence. Optical turbulence gives rise to spatially and temporally varying image blur and warping from fluctuations in the index of refraction along the optical path. Under heavy turbulence, the blurring from the turbulence acts as an anti-aliasing filter, and undersampling does not generally occur. However, under light to moderate turbulence, many imaging systems will exhibit both aliasing artifacts and turbulence degradation. Few papers in the literature have analyzed or addressed both of these degradations together. In this paper, we provide a novel analysis of undersampling in the presence of optical turbulence. Specifically, we provide an optical transfer function analysis that illustrates regimes where aliasing and turbulence are both present, and where they are not. We also propose and evaluate a super-resolution (SR) method for combating aliasing that offers robustness to optical turbulence. The method has a tuning parameter that allows it to transition from traditional diffraction-limited SR to pure turbulence mitigation with no SR. The proposed method is based on Fusion of Interpolated Frames (FIF) SR, recently proposed by two of the current authors. We quantitatively evaluate the SR method with varying levels of optical turbulence using simulated sequences. We also present results using real infrared imagery.
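The flavor of the OTF analysis can be sketched as follows: form the product of the diffraction-limited OTF and the long-exposure atmospheric MTF exp[-3.44 (λf/r0)^(5/3)], and ask how much of it survives beyond the detector Nyquist frequency. All system parameters below are example values, not those used in the paper.

```python
import numpy as np

# Illustrative OTF analysis: when does turbulence act as an anti-aliasing
# filter? Parameters below are example values only.
D = 0.10          # aperture diameter (m)
lam = 4.0e-6      # wavelength (m)
fl = 0.30         # focal length (m)
pitch = 15e-6     # detector pitch (m)

f = np.linspace(1e-3, D / lam, 512)      # angular frequency (cycles/rad)
rho = f / (D / lam)

# Diffraction-limited OTF of a circular aperture.
otf_dif = (2 / np.pi) * (np.arccos(rho) - rho * np.sqrt(1 - rho**2))

# Long-exposure atmospheric MTF for a given coherence diameter r0.
for r0 in (0.02, 0.05):  # meters: heavier vs. lighter turbulence
    mtf_atm = np.exp(-3.44 * (lam * f / r0) ** (5 / 3))
    otf_total = otf_dif * mtf_atm
    f_nyq = fl / (2 * pitch)             # Nyquist in cycles/rad
    # Aliasing is a concern when significant OTF remains beyond Nyquist.
    frac = otf_total[f > f_nyq].sum() / otf_total.sum()
    print(f"r0 = {r0} m: fraction of OTF area beyond Nyquist = {frac:.3f}")
```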
Limiting resolution is a simple metric that describes the ability of any imaging system to distinguish small details of an object. Limiting resolution is normally calculated subjectively from the smallest resolvable group and element in a resolution target, such as the USAF 1951 target, or analytically from the modulation transfer function (MTF) of the system. Although limiting resolution has limitations, it provides a quick, low-complexity method to establish the performance of an imaging system. Various factors affect limiting resolution, such as the optical performance of the system and sensor noise, both temporal and spatial. Evaluating the resolution performance of full motion video (FMV) results in uncertainty in limiting resolution due to the temporal variation of the system. In high-performance FMV systems, where the modulation associated with the limiting resolution is small, the limiting resolution can vary greatly from frame to frame. This paper explores how limiting resolution is measured and the factors that affect its uncertainty in FMV systems, and provides real-world examples from airborne video.
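The mapping from a USAF 1951 group/element reading to limiting resolution is the standard closed form below; the example also shows why a one-element frame-to-frame variation corresponds to roughly a 12% change in the reported resolution.

```python
def usaf_resolution(group, element):
    """Spatial frequency (line pairs/mm) of a USAF 1951 group/element:
    R = 2 ** (group + (element - 1) / 6)."""
    return 2.0 ** (group + (element - 1) / 6.0)

# Example: one element within a group is a ~12% step in limiting resolution,
# which is why small frame-to-frame modulation changes matter in FMV.
print(usaf_resolution(2, 3))                          # ~5.04 lp/mm
print(usaf_resolution(2, 4) / usaf_resolution(2, 3))  # ~1.12
```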
The Python Based Sensor Model (pyBSM) provides open source functions for modeling electro-optical and infrared imaging systems. In this paper, we validate pyBSM predictions against laboratory measurements. Compared quantities include modulation transfer function, photoelectron count, and signal-to-noise ratio. Experiments are explained and code is provided with the details required to recreate this study for additional camera and lens combinations.
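Without reproducing the pyBSM API here, the shape of such a validation can be sketched: a generic diffraction-limited MTF prediction compared against a measured MTF curve at the measurement's frequency samples. The function and all parameter values below are illustrative stand-ins, not pyBSM calls or the paper's data.

```python
import numpy as np

def diffraction_mtf(f, D, lam, fl):
    """Diffraction-limited MTF of a circular aperture vs. spatial frequency
    f (cycles/mm) at the focal plane. A generic model prediction of the kind
    compared in such validations; this is not the pyBSM API."""
    fc = D / (lam * fl) * 1e-3          # cutoff frequency in cycles/mm
    rho = np.clip(f / fc, 0.0, 1.0)
    return (2 / np.pi) * (np.arccos(rho) - rho * np.sqrt(1 - rho**2))

# Compare the model prediction against a measured MTF curve (e.g., from a
# slanted-edge measurement) at the measurement's frequency samples.
f_meas = np.linspace(0, 60, 13)                                   # cycles/mm
mtf_meas = diffraction_mtf(f_meas, 0.025, 0.55e-6, 0.05) * 0.95   # placeholder
mtf_pred = diffraction_mtf(f_meas, 0.025, 0.55e-6, 0.05)
print(np.max(np.abs(mtf_pred - mtf_meas)))
```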
As full-motion video (FMV) systems achieve smaller instantaneous fields-of-view (IFOVs), the residual line-of-sight (LOS) motion becomes significantly more influential to the overall system resolving and task-performance capability. We augment the AFRL-derived, Python-based, open-source modeling code pyBSM to calculate distributions of motion-based modulation transfer function (MTF) based on true knowledge of line-of-sight motion. We provide a pyBSM-compatible class that can manipulate either existing or synthesized LOS motion data for frame-by-frame MTF and system performance analysis. The code is used to demonstrate the implementation using both simulated and measured LOS data and to highlight discrepancies between the traditional MTF models and LOS-based MTF analysis.
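A simplified version of the frame-by-frame motion-MTF idea is sketched below: segment the LOS record into integration windows, take the rms residual motion in each, and apply the standard Gaussian jitter MTF exp(-2π²σ²f²). This is a deliberate reduction of the class described (a full treatment keeps linear smear and jitter as separate terms), and the sample rates are invented.

```python
import numpy as np

def jitter_mtf(f, sigma):
    """Gaussian jitter MTF: exp(-2 pi^2 sigma^2 f^2), with sigma the rms
    residual LOS motion (radians) and f the angular frequency (cycles/rad)."""
    return np.exp(-2.0 * (np.pi * sigma * f) ** 2)

def per_frame_mtfs(los, samples_per_frame, f):
    """Frame-by-frame motion MTFs from a measured LOS record, using the rms
    residual within each integration window. A simplification: a full
    treatment separates linear smear and jitter."""
    n_frames = len(los) // samples_per_frame
    frames = los[:n_frames * samples_per_frame].reshape(n_frames, -1)
    sigmas = frames.std(axis=1)          # residual motion about the frame mean
    return np.array([jitter_mtf(f, s) for s in sigmas])

# Synthesized LOS data: 2 microradian rms jitter sampled at 10 kHz,
# with a 30 Hz frame rate (about 333 LOS samples per frame).
rng = np.random.default_rng(1)
los = rng.normal(0.0, 2e-6, size=10000)
mtfs = per_frame_mtfs(los, 333, f=np.linspace(0, 5e4, 64))
print(mtfs.shape)  # (frames, frequencies)
```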
A series of high-performance gimbal payloads has been developed. Both the pan and tilt motion axes are driven by compact, ultrasonic, direct-drive piezoelectric motors. The motors provide fast response, high motion stiffness and natural frequency, high precision, high mechanical power density, and high torque, and they consume little power. These features improve SWaP parameters and enable active optical image stabilization. The direct drive and high torque enable high angular acceleration, facilitating real-time panoramic imaging by fast move-and-settle of fields of view to create a mosaic of images. The mosaic is achieved with a high level of optical stabilization in the presence of vibrations. All the payloads achieve angular accelerations of 70-100 rad/s² and high angular velocities, which are enablers for mosaic generation.
In this paper we report on the structure, operation and imaging performance of three gimbal payloads. Options for attaining motion-supported super-resolution are discussed and demonstrated by fast, sub-pixel, angular scanning of the gimbal.
The combination of mosaic generation and super-resolution allows realizing an equivalent of optical zoom, or of large-FOV staring vision with outstanding resolution. The refresh rate of the mosaic or the super-resolution image is reduced accordingly: the stabilized mosaic is built at a rate of 10 fps, and the super-resolution image at an acquisition rate of > 30 fps.
The smallest payload has a diameter of 29 mm, carries either a 5 MP day camera or a small 120x160-pixel IR camera, weighs 26 grams, and attains 1 mrad angular stabilization.
The second payload has a diameter of 58 mm, carries two cameras (12 MP day and 480x640-pixel thermal), weighs 190 grams, has an angular resolution of 22 µrad, and attains angular stability of 70 µrad.
The biggest payload has a diameter of 114 mm, carries two cameras (12 MP day zoom and 480x640-pixel thermal with optional zoom), weighs 1.2 kg, has an angular resolution of 9 µrad, and attains stabilization of 50 µrad.
Performance of the three payloads will be demonstrated in stabilization, mosaic, and super-resolution modes.
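The motion-supported super-resolution mode can be sketched with a classic shift-and-add reconstruction, where the sub-pixel offsets are supplied by the gimbal's angular scan. This is a minimal illustration, not the payload's processing chain; real pipelines add registration refinement and deblurring of the detector/optics response.

```python
import numpy as np

def shift_and_add_sr(frames, offsets, scale):
    """Shift-and-add super-resolution for frames captured at known
    sub-pixel offsets (here supplied by the gimbal's angular scan).
    offsets are (dy, dx) in low-res pixels; scale is the upsampling factor."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, offsets):
        ys = int(round(dy * scale))
        xs = int(round(dx * scale))
        acc[ys::scale, xs::scale][:h, :w] += frame
        cnt[ys::scale, xs::scale][:h, :w] += 1.0
    return acc / np.maximum(cnt, 1.0)

# Four frames on a half-pixel grid give a 2x super-resolved image.
frames = [np.random.rand(32, 32) for _ in range(4)]          # placeholder
offsets = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]   # LR pixels
sr = shift_and_add_sr(frames, offsets, scale=2)
```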