An extension of the fusion of interpolated frames superresolution (FIF SR) method to perform SR in the presence of atmospheric optical turbulence is presented. The goal of such processing is to improve the performance of imaging systems impacted by turbulence. We provide an optical transfer function analysis that illustrates regimes where significant degradation from both aliasing and turbulence may be present in imaging systems. This analysis demonstrates the potential need for simultaneous SR and turbulence mitigation (TM). While the FIF SR method was not originally proposed to address this joint restoration problem, we believe it is well suited for this task. We propose a variation of the FIF SR method with a fusion parameter that allows it to transition from traditional diffraction-limited SR to pure TM with no SR, as well as operate anywhere on the continuum in between. This fusion parameter balances subpixel resolution, needed for SR, with the amount of temporal averaging, needed for TM and noise reduction. In addition, we develop a model of the interpolation blurring that results from the fusion process, as a function of this tuning parameter. The blurring model is then incorporated into the overall degradation model that is addressed in the restoration step of the FIF SR method. This innovation benefits the FIF SR method in all of its applications. We present a number of experimental results to demonstrate the efficacy of the FIF SR method in different levels of turbulence. Simulated imagery with known ground truth is used for a detailed quantitative analysis. Three real infrared image sequences are also used. Two of these include bar targets that allow for a quantitative resolution enhancement assessment.
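As a rough sketch of how a single fusion parameter can trade subpixel resolution against temporal averaging, consider the simplified per-frame weighting below. This is not the authors' spatially varying fusion; the function name, the parameter sigma_f, and the use of SciPy interpolation are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift, zoom

def fif_fuse(frames, shifts, upsample=2, sigma_f=0.5):
    """Illustrative FIF-style fusion (simplified sketch, not the authors' code).

    frames   : registered low-resolution frames
    shifts   : (dy, dx) subpixel shift of each frame, in LR pixels
    upsample : SR factor
    sigma_f  : fusion parameter; small values favor the frames whose samples
               fall nearest the HR grid (more SR), large values approach a
               uniform temporal average (more TM and noise reduction)
    """
    acc, wsum = None, None
    for frame, (dy, dx) in zip(frames, shifts):
        # Register and interpolate each frame onto the common HR grid.
        hr = zoom(subpixel_shift(frame.astype(float), (-dy, -dx), order=3),
                  upsample, order=3)
        # Residual distance of this frame's samples from the HR grid points,
        # measured in HR pixels (used here as a single per-frame weight).
        ry = (dy * upsample) % 1.0
        rx = (dx * upsample) % 1.0
        d2 = min(ry, 1 - ry) ** 2 + min(rx, 1 - rx) ** 2
        w = np.exp(-d2 / (2.0 * sigma_f ** 2))
        acc = w * hr if acc is None else acc + w * hr
        wsum = w if wsum is None else wsum + w
    return acc / wsum
```

In this toy version, letting sigma_f grow large makes every frame contribute equally (pure temporal averaging), while a small sigma_f concentrates weight on frames whose samples land closest to each HR grid point.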
The design of imaging systems involves navigating a complex trade space. As a result, many imaging systems employ focal plane arrays with a detector pitch that is insufficient to meet the Nyquist sampling criterion under diffraction-limited imaging conditions. This undersampling may result in aliasing artifacts and prevent the imaging system from achieving the full resolution afforded by the optics. Another potential source of image degradation, especially for long-range imaging, is atmospheric optical turbulence. Optical turbulence gives rise to spatially and temporally varying image blur and warping from fluctuations in the index of refraction along the optical path. Under heavy turbulence, the blurring from the turbulence acts as an anti-aliasing filter, and undersampling does not generally occur. However, under light to moderate turbulence, many imaging systems will exhibit both aliasing artifacts and turbulence degradation. Few papers in the literature have analyzed or addressed both of these degradations together. In this paper, we provide a novel analysis of undersampling in the presence of optical turbulence. Specifically, we provide an optical transfer function analysis that illustrates regimes where aliasing and turbulence are both present, and where they are not. We also propose and evaluate a super-resolution (SR) method for combating aliasing that offers robustness to optical turbulence. The method has a tuning parameter that allows it to transition from traditional diffraction-limited SR to pure turbulence mitigation with no SR. The proposed method is based on Fusion of Interpolated Frames (FIF) SR, recently proposed by two of the current authors. We quantitatively evaluate the SR method with varying levels of optical turbulence using simulated sequences. We also present results using real infrared imagery.
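To make the transfer-function argument concrete, the sketch below compares the detector Nyquist frequency with the circular-aperture diffraction OTF and a common approximation to the long-exposure turbulence OTF. All numerical parameters (wavelength, f-number, focal length, detector pitch, Fried parameter r0) are assumed values for illustration, not taken from the paper.

```python
import numpy as np

# --- illustrative parameters (assumed, not from the paper) ---
lam   = 4.0e-6    # wavelength [m] (MWIR)
fnum  = 4.0       # f-number
f     = 0.3       # focal length [m]
pitch = 15e-6     # detector pitch [m]
r0    = 0.05      # Fried parameter [m]

u_cut = 1.0 / (lam * fnum)      # diffraction cutoff in the focal plane [cyc/m]
u_nyq = 1.0 / (2.0 * pitch)     # detector Nyquist frequency [cyc/m]

u   = np.linspace(0.0, u_cut, 512)
rho = u / u_cut
# Diffraction-limited OTF of a circular aperture
H_dif = (2 / np.pi) * (np.arccos(rho) - rho * np.sqrt(1 - rho**2))
# Common long-exposure turbulence OTF approximation
H_atm = np.exp(-3.44 * (lam * f * u / r0) ** (5 / 3))
H_tot = H_dif * H_atm

# Aliasing is possible when appreciable OTF response remains beyond Nyquist.
mask = u > u_nyq
print(f"diffraction cutoff = {u_cut/1e3:.1f} cyc/mm, Nyquist = {u_nyq/1e3:.1f} cyc/mm")
print(f"max combined OTF beyond Nyquist: {H_tot[mask].max() if mask.any() else 0:.3f}")
```

With these assumed numbers the combined response beyond Nyquist is small but nonzero, i.e., the light-to-moderate turbulence regime in which both aliasing and turbulence degradation are present; increasing the turbulence (smaller r0) drives that residual response toward zero.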
We present a numerical wave propagation method for simulating imaging of an extended scene under anisoplanatic conditions. While isoplanatic simulation is relatively common, few tools are specifically designed for simulating the imaging of extended scenes under anisoplanatic conditions. We provide a complete description of the proposed simulation tool, including the wave propagation method used. Our approach computes an array of point spread functions (PSFs) for a two-dimensional grid on the object plane. The PSFs are then used in a spatially varying weighted sum operation, with an ideal image, to produce a simulated image with realistic optical turbulence degradation. The degradation includes spatially varying warping and blurring. To produce the PSF array, we generate a series of extended phase screens. Simulated point sources are numerically propagated from an array of positions on the object plane, through the phase screens, and ultimately to the focal plane of the simulated camera. Note that the optical path for each PSF is different and thus passes through a different portion of the extended phase screens. These different paths give rise to a spatially varying PSF that produces anisoplanatic effects. We use a method for defining the individual phase screen statistics that we have not seen used in previous anisoplanatic simulations. We also present a validation analysis. In particular, we compare simulated outputs with the theoretical anisoplanatic tilt correlation and a derived differential tilt variance statistic. This is in addition to comparing the long- and short-exposure PSFs and isoplanatic angle. We believe this analysis represents the most thorough validation of an anisoplanatic simulation to date. The current work is also unique in that we simulate and validate both constant and varying Cn2(z) profiles. Furthermore, we simulate sequences with both temporally independent and temporally correlated turbulence effects. Temporal correlation is introduced by generating even larger extended phase screens and translating this block of screens in front of the propagation area. Our validation analysis shows an excellent match between the simulation statistics and the theoretical predictions. Thus, we think this tool can be used effectively to study anisoplanatic optical turbulence and to aid in the development of image restoration methods.
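A minimal sketch of the spatially varying weighted-sum degradation step described above, bilinearly blending images blurred by a coarse grid of PSFs, might look like the following. The function name and the bilinear (tent) blending weights are illustrative assumptions, not the simulation tool's actual code.

```python
import numpy as np
from scipy.signal import fftconvolve

def apply_psf_grid(ideal, psf_grid, grid_rows, grid_cols):
    """Illustrative spatially varying blur (sketch, not the authors' code).

    ideal    : ideal (pristine) image, H x W
    psf_grid : PSF array of shape (grid_rows, grid_cols, k, k), one PSF per
               point on a coarse object-plane grid (grid_rows, grid_cols >= 2)
    Each output pixel is a bilinear blend of the images blurred by the PSFs
    nearest to it, which yields spatially varying blur and warp.
    """
    H, W = ideal.shape
    out = np.zeros((H, W), dtype=float)
    wsum = np.zeros((H, W), dtype=float)
    ys = np.linspace(0, H - 1, grid_rows)   # PSF grid-point centers (rows)
    xs = np.linspace(0, W - 1, grid_cols)   # PSF grid-point centers (cols)
    yy, xx = np.mgrid[0:H, 0:W]
    for i, yc in enumerate(ys):
        for j, xc in enumerate(xs):
            blurred = fftconvolve(ideal, psf_grid[i, j], mode='same')
            # Bilinear (tent) weight of this grid point at every pixel
            wy = np.clip(1 - np.abs(yy - yc) / (ys[1] - ys[0]), 0, None)
            wx = np.clip(1 - np.abs(xx - xc) / (xs[1] - xs[0]), 0, None)
            w = wy * wx
            out += w * blurred
            wsum += w
    return out / np.maximum(wsum, 1e-12)
```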
Imagery acquired with modern imaging systems is susceptible to a variety of degradations, including blur from the point spread function (PSF) of the imaging system, aliasing from undersampling, blur and warping from atmospheric turbulence, and noise. A variety of image restoration methods have been proposed that estimate an improved image by processing a sequence of these degraded images. In particular, multi-frame image restoration has proven to be a powerful tool for atmospheric turbulence mitigation (TM) and super-resolution (SR). However, these degradations are rarely addressed simultaneously using a common algorithm architecture, and few TM or SR solutions are capable of performing robustly in the presence of true scene motion, such as moving dismounts. Still fewer TM or SR algorithms have found their way into practical real-time implementations. In this paper, we describe a new L-3 joint TM and SR (TMSR) real-time processing solution and demonstrate its capabilities. The system employs a recently developed, versatile multi-frame joint TMSR algorithm that has been implemented on a real-time, low-power FPGA processor system. The L-3 TMSR solution can accommodate a wide spectrum of atmospheric conditions and can robustly handle moving vehicles and dismounts. This novel approach unites previous work in TM and SR and also incorporates robust moving object detection. To demonstrate the capabilities of the TMSR solution, we present results using field test data captured under a variety of turbulence levels, optical configurations, and applications. We also report the performance of the hardware implementation and identify specific insertion paths into tactical sensor systems.
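The abstract does not detail the moving object detection, but the general idea of gating temporal fusion with a per-pixel motion mask, so that true scene motion is passed through rather than averaged away, can be sketched as follows. All names, thresholds, and the running-average fusion are illustrative assumptions, not the implemented algorithm.

```python
import numpy as np

def fuse_with_motion_mask(frames, thresh=3.0, alpha=0.1):
    """Illustrative moving-object-aware temporal fusion (assumed, simplified).

    Static regions are temporally averaged (turbulence and noise reduction),
    while pixels flagged as true scene motion fall back to the current frame
    so moving objects are not smeared by the averaging.
    """
    avg = frames[0].astype(float)        # running temporal average
    out = []
    for f in frames:
        f = f.astype(float)
        resid = np.abs(f - avg)
        sigma = np.median(resid) / 0.6745 + 1e-6     # robust residual scale
        moving = resid > thresh * sigma              # per-pixel motion mask
        fused = np.where(moving, f, avg)             # pass motion, average rest
        out.append(fused)
        avg = (1 - alpha) * avg + alpha * f          # update running average
    return out
```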
The detectors within an infrared focal plane array (FPA) characteristically have responses that vary from detector to detector. It is desirable to remove this "nonuniformity" for improved image quality. Factory calibration is not sufficient since nonuniformity tends to drift over time. Field calibration can be performed using uniform temperature sources, but it requires briefly obscuring the field of view and adds system size and cost. Alternative "scene-based" approaches are able to use the normal scene data when performing non-uniformity correction (NUC) and therefore do not require the field of view to be obscured. These function well under proper conditions but can introduce image artifacts such as "ghosting" when scene conditions are not optimal for NUC. The scene-based approach presented in this paper estimates a correction term for each detector using spatial information. In parallel, motion estimation and texture features are used to identify frames, and regions within frames, that are suitable for NUC. This information is then employed to adaptively converge to the proper correction terms for each detector in the FPA.
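A minimal sketch of this style of scene-based NUC, assuming an offset-only correction driven by a local spatial estimate and gated by simple motion and texture tests, is shown below. The filter sizes, thresholds, and learning rate are illustrative assumptions rather than the actual algorithm.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def scene_based_nuc(frames, lr=0.05, motion_thresh=1.0, texture_thresh=10.0):
    """Illustrative scene-based offset NUC (assumed, simplified).

    The spatially smoothed image serves as the 'desired' response for each
    detector.  The per-detector offset is nudged toward the observed error,
    but only where the scene is moving enough and smooth enough that the
    error is attributable to fixed-pattern nonuniformity rather than scene
    content (reducing ghosting).
    """
    offset = np.zeros_like(frames[0], dtype=float)
    prev = None
    corrected = []
    for f in frames:
        y = f.astype(float) - offset                  # apply current correction
        desired = uniform_filter(y, size=5)           # local spatial estimate
        err = y - desired
        # Gate the update: require frame-to-frame motion and low local texture
        motion_ok = prev is None or np.mean(np.abs(y - prev)) > motion_thresh
        texture = uniform_filter(np.abs(err), size=9)
        mask = (texture < texture_thresh) & motion_ok
        offset += lr * err * mask                     # adaptive convergence
        prev = y
        corrected.append(y)
    return corrected, offset
```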
The presence of parasitic jitter in video sequences can degrade imaging system performance. Image stabilization systems correct for this jitter by estimating motion and then compensating for undesirable movements. These systems often require tradeoffs between stabilization performance and factors such as system size and computational complexity. This paper describes the theory and operation of an electronic image stabilization technique that provides sub-pixel accuracy while operating at real-time video frame rates. This technique performs an iterative search on the spatial intensity gradients of video frames to estimate and refine motion parameters. Then an intelligent segmentation approach separates desired motion from undesired motion and applies the appropriate compensation. This computationally efficient approach has been implemented in the existing hardware of compact infrared imagers. It is designed for use as both a standalone stabilization module and as a part of more complex electro-mechanical stabilization systems. For completeness, a detailed comparison of theoretical response characteristics with actual performance is also presented.
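A hedged sketch of the two stages described above, an iterative gradient-based global shift estimate followed by low-pass separation of intended motion from jitter, is given below. The pure-translation Gauss-Newton model and the recursive smoothing constant are simplifying assumptions, not the implemented algorithm.

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def estimate_translation(ref, cur, iters=5):
    """Illustrative gradient-based global shift estimate (assumed, simplified)."""
    ref = ref.astype(float)
    gy, gx = np.gradient(ref)
    p = np.zeros(2)                                   # (dy, dx)
    A = np.stack([gy.ravel(), gx.ravel()], axis=1)
    for _ in range(iters):
        warped = subpixel_shift(cur.astype(float), -p, order=1)
        err = (warped - ref).ravel()
        p += np.linalg.lstsq(A, -err, rcond=None)[0]  # Gauss-Newton refinement
    return p

def stabilize(frames, smooth=0.9):
    """Keep smoothed (intended) motion; compensate high-frequency jitter."""
    intended = np.zeros(2)
    cumulative = np.zeros(2)
    out = [frames[0]]
    for prev, cur in zip(frames[:-1], frames[1:]):
        cumulative += estimate_translation(prev, cur)
        intended = smooth * intended + (1 - smooth) * cumulative  # low-pass
        jitter = cumulative - intended
        out.append(subpixel_shift(cur.astype(float), -jitter, order=1))
    return out
```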
KEYWORDS: Image enhancement, Image processing, Algorithm development, Video processing, Color and brightness control algorithms, Infrared imaging, Detection and tracking algorithms, Electronics, Imaging systems, Video
The MWIR imaging systems developed by L-3 Communications Cincinnati Electronics (L-3 CE) include several video processing algorithms designed to provide enhanced imagery that meets a variety of military and other application requirements. When IR imaging systems are confronted with varying scene conditions, video processing algorithms are designed and selected to optimize human interpretation of specific scene details. The Visual Difference Predictor model has been used, and an Image Enhancement Score derived from it has been developed, to provide an objective metric for evaluating the effects of processing algorithms on imagery. Comparing the Image Enhancement Scores of the original and processed images gives an objective measure of the success of the video processing algorithm being evaluated. This paper describes selected algorithms in the L-3 CE Video Processing Suite, evaluates them against several test scenes, and presents the associated Image Enhancement Scores. These include a novel local contrast enhancement algorithm, general sharpening, and display mapping algorithms. Finally, the direction of ongoing and future efforts in Video Processing Suite development is discussed.
Common infrared video imagery can exhibit large variations in signal level across different portions of a scene. Global image processing techniques cannot render both these large variations and the detail within individual regions of interest on standard displays. For this reason, local image processing approaches have been developed to increase contrast in localized areas. These are typically high-latency, post-video techniques targeted at specific applications. We have developed a unique video processing approach that has near-zero latency and is not computationally intensive, so imagery can be processed and displayed for real-time human observation using minimal hardware. Local scaling factors are computed using a flexible distribution technique, allowing adjustable levels of sensitivity and local detail enhancement.
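The "flexible distribution technique" is not specified in the abstract, so the sketch below only illustrates the general idea of a per-pixel local gain derived from local statistics, with adjustable sensitivity and a gain limit. All parameter names and defaults are assumptions, not the described algorithm.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast_enhance(img, size=32, gain_limit=4.0, sensitivity=1.0):
    """Illustrative local contrast enhancement (assumed, simplified).

    Each pixel is rescaled about its local mean by a gain derived from the
    local standard deviation, so low-contrast regions are stretched more
    than busy ones.  'sensitivity' controls how aggressively detail is
    amplified, and 'gain_limit' caps the gain so noise does not dominate
    flat areas.
    """
    img = img.astype(float)
    mean = uniform_filter(img, size)
    var = uniform_filter(img**2, size) - mean**2
    std = np.sqrt(np.clip(var, 0, None))
    target = np.mean(std) * sensitivity               # desired local contrast
    gain = np.clip(target / (std + 1e-6), 1.0 / gain_limit, gain_limit)
    out = mean + gain * (img - mean)                  # amplify local detail
    return np.clip(out, img.min(), img.max())
```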