Special Section on Long-Range Imaging

Block matching and Wiener filtering approach to optical turbulence mitigation and its application to simulated and real imagery with quantitative error analysis

Author Affiliations
Russell C. Hardie

University of Dayton, Department of Electrical and Computer Engineering, 300 College Park, Dayton, Ohio 45469-0232, United States

Michael A. Rucci, Barry K. Karch

Air Force Research Laboratory, Building 620, 2241 Avionics Circle, Wright-Patterson Air Force Base, Ohio 45433, United States

Alexander J. Dapore

L-3 Communications Cincinnati Electronics, 7500 Innovation Way, Mason, Ohio 45040-9699, United States

Opt. Eng. 56(7), 071503 (Feb 06, 2017). doi:10.1117/1.OE.56.7.071503
History: Received October 7, 2016; Accepted December 29, 2016

Open Access

Abstract.  We present a block-matching and Wiener filtering approach to atmospheric turbulence mitigation for long-range imaging of extended scenes. We evaluate the proposed method, along with some benchmark methods, using simulated and real-image sequences. The simulated data are generated with a simulation tool developed by one of the authors. These data provide objective truth and allow for quantitative error analysis. The proposed turbulence mitigation method takes a sequence of short-exposure frames of a static scene and outputs a single restored image. A block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged, and the average image is processed with a Wiener filter to provide deconvolution. An important aspect of the proposed method lies in how we model the degradation point spread function (PSF) for the purposes of Wiener filtering. We use a parametric model that takes into account the level of geometric correction achieved during image registration. This is unlike any method we are aware of in the literature. By matching the PSF to the level of registration in this way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. We also describe a method for estimating the atmospheric coherence diameter (or Fried parameter) from the estimated motion vectors. We provide a detailed performance analysis that illustrates how the key tuning parameters impact system performance. The proposed method is relatively simple computationally, yet it has excellent performance in comparison with state-of-the-art benchmark methods in our study.


When imaging at long ranges through the atmosphere, the acquired images are highly susceptible to degradations from atmospheric turbulence.1–3 Fluctuations in the index of refraction along the optical path length, driven by temperature and pressure variations, give rise to spatially and temporally varying blur and warping. This is a well-researched area with respect to astronomical imaging.3 In astronomical imaging, with narrow fields of view, the degradation caused by the atmosphere can usually be modeled as isoplanatic. That is, the atmospheric effects are uniform across the image. This gives rise to warping that is a global image shift, and the blurring can be modeled with a spatially invariant point spread function (PSF). Wide field-of-view imaging, at long ranges through the atmosphere, generally leads to anisoplanatic imaging conditions. Here, the atmospheric PSF varies significantly across the field of view of the imaging sensor.

Adaptive optics has proven to be effective in treating the isoplanatic problem.4 However, for the anisoplanatic problem, mitigation methods are generally based on acquiring and digitally processing a sequence of short-exposure (SE) images.5 Short integration time means that warping during integration is minimized, reducing a major source of blurring. However, under anisoplanatic imaging conditions, there is still temporally and spatially varying warping and blurring to contend with. Note that with long-exposure (LE) images, turbulence-induced image warping gets temporally integrated. While this has the advantage of “averaging out” the geometric warping, to reveal the correct scene geometry, it also leads to high levels of blurring that may be difficult to treat effectively with image restoration.

One important class of turbulence mitigation algorithms is bispectral speckle imaging.6–16 This method seeks to recover the ideal image in the Fourier domain, by estimating the magnitude and phase spectrum separately. The magnitude spectrum is obtained with an inverse filter, or pseudoinverse filter, based on the LE optical transfer function (OTF). The phase is estimated using properties of the bispectrum.7,9,10,16

Another class of turbulence mitigation algorithms uses some form of dewarping, fusion, and then blind deconvolution.8,15–21 Other related methods can also be found in the literature.22–28 With most of these methods, a motion-compensated temporal average of video frames is computed first. The motion compensation, prior to temporal averaging, reduces the motion blurring that might otherwise be seen in LE imaging. In the case of a static scene, the true geometry can be revealed with a prototype image obtained with a sufficiently long temporal average (or LE). Input frames can be registered to the prototype to provide turbulent motion compensation. As we shall show, even global frame registration can be of benefit. Performing such registration with a dynamic scene, containing moving objects, presents additional challenges.24,26,29 The current paper limits its scope to static scenes. Fusion is often done next by simple temporal averaging. This reduces noise and averages the spatially and temporally varying speckle PSFs in the individual frames. The result is an image that appears to be blurred with a spatially invariant PSF (with less blurring than an LE PSF). A blind image restoration process is then used to jointly estimate the spatially invariant PSF and true image. Note that using blind deconvolution has its challenges. First, it can be very computationally demanding. Also, unless a significant amount of a priori knowledge is incorporated, the recovered PSF and image may not be accurate.30

Here, we present a block-matching and Wiener filtering (BMWF) approach to atmospheric turbulence mitigation for long-range imaging of extended scenes. We seek to leverage the rich theoretical work on atmospheric turbulence to aid in the design of a practical image restoration algorithm. We evaluate the proposed method, along with some benchmark methods, using simulated and real-image sequences. The simulated data are generated with a simulation tool developed by one of the current authors.31 These data provide objective truth and allow for a quantitative error analysis. The proposed turbulence mitigation method takes a sequence of SE frames of a static scene and outputs a single restored image. The images are globally registered to the temporal average and then reaveraged. This forms our prototype with the approximately correct geometry. A block-matching algorithm (BMA) is used to align the individual input frames to the prototype. We discuss how atmospheric statistics can help in setting the tuning parameters of the BMA. The BMA method here also uses a prefilter on the individual frames, so they better match the power spectrum of the prototype image for improved registration. The BMA registered frames are then averaged to generate a fused image. The final step is deconvolving the fused image using a Wiener filter.

An important aspect of the proposed method lies in how we model the degradation PSF. We use a parametric model that takes into account the level of geometric correction achieved during image registration. This is unlike any method we are aware of in the literature. By matching the PSF to the level of registration in this way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. We also describe a method for estimating the atmospheric coherence diameter (or Fried parameter) from the same estimated motion vectors used for restoration. We provide a detailed performance analysis that illustrates how the key tuning parameters impact the BMWF system performance. The proposed BMWF method is relatively simple computationally, yet it has excellent performance in comparison with state-of-the-art benchmark methods in our study.

The remainder of this paper is organized as follows. In Sec. 2, we present our observation model. This includes key statistics and the OTF models. The proposed BMWF turbulence mitigation approach is described in Sec. 3. The efficacy of the BMWF turbulence mitigation, in comparison with some benchmark methods, is demonstrated in Sec. 4. Finally, we offer conclusions in Sec. 5.

Atmospheric Turbulence Statistics

One of the most important statistics that can be derived from the widely used Kolmogorov turbulence model is the atmospheric coherence diameter (or Fried parameter).3,32 This is given as

$$r_0 = \left[ 0.423 \left( \frac{2\pi}{\lambda} \right)^2 \int_{z=0}^{z=L} C_n^2(z) \left( \frac{z}{L} \right)^{5/3} dz \right]^{-3/5}, \tag{1}$$

where λ is the wavelength and Cn^2(z) is the refractive-index structure parameter profile along the optical path. Note that this expression is for spherical wave propagation, and z is the distance from the source (i.e., z=0 at the source and z=L at the camera). As we will see, this parameter is central to the PSF model needed for deconvolution.

Another very salient statistic is the tilt variance for a point source. This is the angle-of-arrival variance of a point source due to turbulence. An expression for the one-axis tilt variance, for the spherical wave case, is given as33

$$\sigma_\varphi^2 = 2.91\, D^{-1/3} \int_{z=0}^{L} C_n^2(z) \left( \frac{z}{L} \right)^{5/3} dz, \tag{2}$$

where D is the aperture diameter and $\sigma_\varphi^2$ is measured in radians squared. Combining Eqs. (1) and (2) and converting the tilt variance into a spatial distance on the focal plane, we obtain the spatial-domain tilt standard deviation as

$$\sigma_r = 0.4175\, \lambda\, l\, r_0^{-5/6} D^{-1/6}, \tag{3}$$

where l is the focal length and $\sigma_r$ is measured in units of distance.
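As a concrete illustration, Eqs. (1) and (3) can be evaluated numerically. The sketch below (Python with NumPy) assumes a constant Cn^2 profile, for which the path integral reduces to a closed form. The path length and wavelength are illustrative values chosen to be consistent with the r0 figures quoted in Sec. 4; the aperture and focal length used for the tilt calculation are purely hypothetical, not values from Table 1.

```python
import numpy as np

def fried_parameter(cn2, path_len, wavelen):
    """Spherical-wave Fried parameter, Eq. (1), for a constant Cn^2 profile.

    For constant Cn^2, the weighted path integral reduces to cn2 * 3 * L / 8.
    """
    integral = cn2 * 3.0 * path_len / 8.0
    return (0.423 * (2.0 * np.pi / wavelen) ** 2 * integral) ** (-3.0 / 5.0)

def tilt_std(wavelen, focal_len, r0, aperture):
    """One-axis tilt standard deviation on the focal plane, Eq. (3)."""
    return 0.4175 * wavelen * focal_len * r0 ** (-5.0 / 6.0) * aperture ** (-1.0 / 6.0)

wavelen = 0.525e-6   # wavelength (m); illustrative assumption
path_len = 7000.0    # path length L (m); illustrative assumption
for cn2 in (0.25e-15, 1.00e-15):
    r0 = fried_parameter(cn2, path_len, wavelen)
    # hypothetical aperture D = 0.2 m and focal length l = 1.2 m
    sr = tilt_std(wavelen, 1.2, r0, 0.2)
    print(f"Cn2 = {cn2:.2e} m^-2/3 -> r0 = {r0:.4f} m, sigma_r = {sr * 1e6:.2f} um")
```

With these assumed values, the computed r0 matches the two turbulence levels quoted later (roughly 0.11 m and 0.048 m).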

Optical Transfer Functions

When imaging in atmospheric turbulence, the overall camera OTF can be modeled to include the atmospheric OTF and the diffraction OTF. This is given as

$$H_\alpha(\rho) = H_{\mathrm{atm},\alpha}(\rho) H_{\mathrm{dif}}(\rho), \tag{4}$$

where $\rho = \sqrt{u^2 + v^2}$ and u and v are the spatial frequencies in units of cycles per unit distance. The atmospheric OTF model typically used is given as3

$$H_{\mathrm{atm},\alpha}(\rho) = \exp\left\{ -3.44 \left( \frac{\lambda l \rho}{r_0} \right)^{5/3} \left[ 1 - \alpha \left( \frac{\lambda l \rho}{D} \right)^{1/3} \right] \right\}. \tag{5}$$
The parameter α relates to the level of motion blur from tilt variance. More will be said about this shortly. The diffraction-limited OTF for a circular exit pupil is given as34

$$H_{\mathrm{dif}}(\rho) = \begin{cases} \dfrac{2}{\pi}\left[ \cos^{-1}\!\left( \dfrac{\rho}{2\rho_c} \right) - \dfrac{\rho}{2\rho_c}\sqrt{1 - \left( \dfrac{\rho}{2\rho_c} \right)^2}\, \right] & \rho \le 2\rho_c \\ 0 & \text{otherwise,} \end{cases} \tag{6}$$

where $2\rho_c = 1/(\lambda f/\#)$ is the optical cutoff frequency and the f-number is $f/\# = l/D$. Let us define the LE transfer function, which includes diffraction, as

$$H_{\mathrm{LE}}(\rho) = H_0(\rho) = H_{\mathrm{atm},0}(\rho) H_{\mathrm{dif}}(\rho). \tag{7}$$

Similarly, the SE transfer function is given as

$$H_{\mathrm{SE}}(\rho) = H_1(\rho) = H_{\mathrm{atm},1}(\rho) H_{\mathrm{dif}}(\rho). \tag{8}$$

The above equation is the fully tilt-compensated and time-averaged transfer function.3 An alternative SE OTF is given by Charnotskii.35
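The OTF models above are straightforward to implement. The following sketch (NumPy; the optics values are illustrative assumptions, not the parameters of Table 1) evaluates Eqs. (5) and (6) on a grid of radial frequencies and compares the SE (α=1) and LE (α=0) cases. Here the cutoff is handled under the convention that the OTF support ends at 2ρc = 1/(λ f/#).

```python
import numpy as np

def H_atm(rho, alpha, wavelen, focal_len, r0, aperture):
    """Parametric atmospheric OTF of Eq. (5); rho in cycles per unit distance."""
    x = wavelen * focal_len * rho
    return np.exp(-3.44 * (x / r0) ** (5.0 / 3.0)
                  * (1.0 - alpha * (x / aperture) ** (1.0 / 3.0)))

def H_dif(rho, wavelen, fnum):
    """Diffraction-limited OTF for a circular exit pupil, Eq. (6)."""
    rho_c = 1.0 / (2.0 * wavelen * fnum)        # optical cutoff is 2 * rho_c
    u = np.minimum(rho / (2.0 * rho_c), 1.0)    # clip to avoid arccos(>1)
    H = (2.0 / np.pi) * (np.arccos(u) - u * np.sqrt(1.0 - u ** 2))
    return np.where(rho <= 2.0 * rho_c, H, 0.0)

# SE (alpha = 1) vs. LE (alpha = 0) with hypothetical optics and r0 = 0.05 m
wavelen, focal_len, aperture = 0.525e-6, 1.2, 0.2
fnum = focal_len / aperture
rho = np.linspace(0.0, 1.0 / (wavelen * fnum), 512)
H_se = H_atm(rho, 1.0, wavelen, focal_len, 0.05, aperture) * H_dif(rho, wavelen, fnum)
H_le = H_atm(rho, 0.0, wavelen, focal_len, 0.05, aperture) * H_dif(rho, wavelen, fnum)
```

As expected from the model, the SE curve lies above the LE curve at every frequency, since the α term removes part of the tilt-induced attenuation.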

With the two main transfer functions defined, we now highlight a very interesting and important relationship between them that comes from the original development of Eq. (5). That is, it can be shown that

$$H_{\mathrm{atm},\alpha}(\rho) = H_{\mathrm{atm},1}(\rho) G_\alpha(\rho), \tag{9}$$

where $G_\alpha(\rho)$ is a Gaussian, given as

$$G_\alpha(\rho) = \exp\left\{ -(1-\alpha)\, 3.44\, (\lambda l)^2\, r_0^{-5/3} D^{-1/3} \rho^2 \right\}. \tag{10}$$

This means that the atmospheric OTF from Eq. (5) can be expressed as the SE OTF, multiplied by the Gaussian, $G_\alpha(\rho)$, yielding

$$H_\alpha(\rho) = H_{\mathrm{atm},1}(\rho) G_\alpha(\rho) H_{\mathrm{dif}}(\rho) = H_{\mathrm{SE}}(\rho) G_\alpha(\rho). \tag{11}$$

In the spatial domain, the functions are also circularly symmetric, so we have

$$h_\alpha(r) = h_{\mathrm{SE}}(r) * g_\alpha(r), \tag{12}$$

where $r = \sqrt{x^2 + y^2}$ and * is the convolution operator.

Based on the Fourier transform properties of a Gaussian, the spatial-domain function resulting from the inverse Fourier transform $g_\alpha(r) = \mathrm{FT}^{-1}[G_\alpha(\rho)]$ is also Gaussian. This is given as

$$g_\alpha(r) = \frac{1}{2\pi\sigma_g^2(\alpha)} \exp\left\{ -\frac{r^2}{2\sigma_g^2(\alpha)} \right\}, \tag{13}$$

where

$$\sigma_g^2(\alpha) = (1-\alpha)\left( 0.4175\, \lambda\, l\, r_0^{-5/6} D^{-1/6} \right)^2. \tag{14}$$

Comparing the above equation with the theoretical tilt standard deviation in Eq. (3), we see that

$$\sigma_g^2(\alpha) = (1-\alpha)\,\sigma_r^2. \tag{15}$$

Alternatively, the standard deviations can be expressed as

$$\sigma_g(\alpha) = \sqrt{1-\alpha}\,\sigma_r = \beta\sigma_r, \tag{16}$$

where $\beta = \sqrt{1-\alpha}$, or equivalently, $\alpha = 1 - \beta^2$.

Thus, Eq. (12) shows that the parametric atmospheric PSF, $h_\alpha(r)$, is the SE PSF convolved with a Gaussian motion blur impulse response. When α=0 (or equivalently β=1), the Gaussian motion blur standard deviation is the theoretical tilt standard deviation in Eq. (3). That is, $\sigma_g(\alpha) = \sigma_r$, and we get the LE PSF

$$h_{\mathrm{LE}}(r) = h_{\mathrm{SE}}(r) * g_0(r). \tag{17}$$

In the frequency domain, we have

$$H_{\mathrm{LE}}(\rho) = H_{\mathrm{SE}}(\rho) G_0(\rho). \tag{18}$$

When the motion blur standard deviation is zero (i.e., α=1 or equivalently β=0), Eq. (12) gives the SE PSF. For 0<α<1, Eq. (12) gives us the SE PSF convolved with a Gaussian motion blur somewhere between full tilt compensation and no tilt compensation. In the literature, it is typically the LE OTF (α=0 or β=1) that is used for image restoration. However, if some level of registration is applied to the SE images (even if only global image registration) prior to fusion and deconvolution, we show that better results can be achieved by tuning α to the level of tilt motion compensation. This gives us a powerful way to match the deconvolution step with the tilt correction processing step.
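The decomposition in Eqs. (9)–(11) is an exact algebraic identity, which is easy to confirm numerically. A minimal check (NumPy; the optics values are illustrative, not those of Table 1):

```python
import numpy as np

wavelen, focal_len, r0, aperture = 0.525e-6, 1.2, 0.1, 0.2  # illustrative optics
rho = np.linspace(0.0, 3.0e5, 2000)   # radial spatial frequency (cycles/m)
x = wavelen * focal_len * rho

def H_atm(alpha):
    """Parametric atmospheric OTF, Eq. (5)."""
    return np.exp(-3.44 * (x / r0) ** (5.0 / 3.0)
                  * (1.0 - alpha * (x / aperture) ** (1.0 / 3.0)))

def G(alpha):
    """Gaussian factor of Eq. (10)."""
    return np.exp(-(1.0 - alpha) * 3.44 * (wavelen * focal_len) ** 2
                  * r0 ** (-5.0 / 3.0) * aperture ** (-1.0 / 3.0) * rho ** 2)

# Eq. (9): H_atm,alpha = H_atm,1 * G_alpha holds for any alpha in [0, 1]
for alpha in (0.0, 0.5, 0.75, 0.99):
    assert np.allclose(H_atm(alpha), H_atm(1.0) * G(alpha))

# Eq. (16): beta = sqrt(1 - alpha) links the two parameterizations,
# e.g., the residual tilt factor beta = 0.1 corresponds to alpha = 0.99
beta = 0.1
alpha = 1.0 - beta ** 2
```

The assertion holds because the α term in the exponent of Eq. (5) is exactly quadratic in ρ once expanded, which is what Eq. (10) captures.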

Examples of atmospheric PSFs, hα(r), from Eq. (12) are shown in Fig. 1. The optical system parameters corresponding to these plots are from the simulated data used in Sec. 4. The specific parameters are listed in Table 1. Figure 1(a) is for Cn^2 = 0.25×10^-15 m^-2/3 (r0 = 0.1097 m) and Fig. 1(b) is for Cn^2 = 1.00×10^-15 m^-2/3 (r0 = 0.0478 m). Note how the choice of α has a very significant impact on the width of the PSF for both levels of turbulence. As α is reduced, the Gaussian blurring component smooths out and widens the PSF. We seek to match the α (or equivalently β) used in our PSF model to the level of tilt correction provided by the registration stage of processing.

Fig. 1: Overall system PSFs with different α for the optical system parameters in Table 1 with (a) Cn^2 = 0.25×10^-15 m^-2/3 (r0 = 0.1097 m) and (b) Cn^2 = 1.00×10^-15 m^-2/3 (r0 = 0.0478 m). Note that as α increases, the PSF narrows.

Table 1: Optical parameters for simulated data.
Observation Model

Given the information in the preceding subsections, we are now ready to define the model we use to relate observed SE frames to a truth image. Note that the model is similar to that described earlier by Fraser et al.17 The observed frames are expressed in terms of a spatially varying blur operator and a spatially varying geometric warping operator. In particular, observed frame k is given as

$$f_k(x,y) = \tilde{s}_k(x,y)\left\{ \tilde{h}_k(x,y)\left[ z(x,y) \right] \right\} + \eta_k(x,y), \tag{19}$$

where x, y are spatial coordinates, k is the temporal frame index, z(x,y) is the ideal image, and $\eta_k(x,y)$ is an additive noise term. The geometric warping operator is defined such that

$$\left\langle \tilde{s}_k(x,y)\left[ z(x,y) \right] \right\rangle = g_0(x,y) * z(x,y), \tag{20}$$

where ⟨·⟩ represents a temporal ensemble mean operator. The blurring operator is defined such that

$$\left\langle \tilde{h}_k(x,y)\left[ z(x,y) \right] \right\rangle = h_{\mathrm{SE}}(x,y) * z(x,y). \tag{21}$$

Using this model, note that the ensemble mean of the observed frames is given by

$$\left\langle f_k(x,y) \right\rangle = g_0(x,y) * h_{\mathrm{SE}}(x,y) * z(x,y) = h_{\mathrm{LE}}(x,y) * z(x,y). \tag{22}$$

Now, consider the case where perfect tilt correction is applied to the SE frames. Let this tilt correction operator be expressed as $\tilde{s}_k^{-1}(x,y)[\cdot]$. Applying this to Eq. (19) and comparing the result to Eq. (21), we get

$$\left\langle \tilde{s}_k^{-1}(x,y)\left[ f_k(x,y) \right] \right\rangle = \left\langle \tilde{h}_k(x,y)\left[ z(x,y) \right] \right\rangle = h_{\mathrm{SE}}(x,y) * z(x,y). \tag{23}$$

However, in practice, ideal tilt correction may not be possible. One reason for this is that BMA registration requires a finite-size block for matching, while the actual tilt warping varies continuously. Thus, any block-based estimate will tend to underestimate the true tilt for a given point, by virtue of the spatial averaging effect.36,37 Thus, we define a partial tilt correction operator as $\tilde{s}_{k,\alpha}^{-1}(x,y)[\cdot]$. Applying this to the SE frames, and applying an ensemble mean, yields

$$\bar{f}(x,y) = \left\langle \tilde{s}_{k,\alpha}^{-1}(x,y)\left[ f_k(x,y) \right] \right\rangle = g_\alpha(x,y) * h_{\mathrm{SE}}(x,y) * z(x,y) = h_\alpha(x,y) * z(x,y). \tag{24}$$

This result gives the rationale for using $h_\alpha(x,y)$ as the degradation blur model for fully or partially tilt-corrected imagery. The value of α can be selected based on the expected residual tilt variance after registration, $\sigma_g^2(\alpha)$. In this context, the variable α, defined by Eq. (15), can be considered a registration tilt-variance reduction factor. Equivalently, the variable β, defined by Eq. (16), can be considered a residual RMS tilt scaling factor.

Block-Matching and Wiener Filtering Turbulence Mitigation

A block diagram representing the proposed BMWF turbulence mitigation algorithm is provided in Fig. 2. The input is a set of N SE frames, fk(x,y) for k = 1, 2, …, N. We assume that these frames are sampled such that they are free from aliasing. Treating turbulence and aliasing simultaneously has been explored in the literature,25,26,38,39 but it is not addressed here.

Fig. 2: Proposed BMWF turbulence mitigation system block diagram.

The input frames are buffered and averaged. Next, robust global translational registration is used to align the N frames to the average. A least-squares gradient-based registration algorithm is used. This method is based on Lucas and Kanade40 but includes the robust multiscale processing described by Hardie et al.41,42 The frames are reaveraged after this global alignment to produce the prototype image with the desired geometry. This step also gives us the opportunity to compensate for any camera platform motion. For ground-based systems, translations may be sufficient. For airborne applications, affine registration at this stage may be appropriate.41

Next, a BMA43 is used to estimate the local motion vectors for each pixel within each frame. The images are then interpolated, based on the motion vectors, to match the geometry of the prototype. Note that there is a mismatch between the level of blurring in the raw frames and the prototype being matched. One of the features of our method is that we prefilter the raw frames, using the Gaussian tilt blur, g0.5(x,y) from Eq. (13). Note that this is done only for the purposes of BMA, and we revert to the raw frames for subsequent processing.

As discussed in Sec. 2.3, the registration will not be ideal, and the accuracy of the registration is quantified by the parameter α (or equivalently β). The BMA registered frames are then expressed as $\tilde{s}_{k,\alpha}^{-1}(x,y)[f_k(x,y)]$, as shown in Fig. 2. Let us define the BMA block size as B×B pixels, and let the search window be S×S pixels in size (as defined by the position of the block centers). We use an exhaustive search within the search window, using the full rectangular blocks, and employ the mean absolute difference metric. We present results using a whole-pixel search and a subpixel search. The subpixel results are obtained by upsampling the images with bicubic interpolation. Because of its widespread use in image compression, much work has been done regarding performance analysis, speed enhancements, and hardware implementations of the BMA.43 We leverage that work by incorporating the BMA here. The key parameters for the BMA are B and S. If knowledge of the atmospheric coherence diameter, r0, is available, we can predict the amount of motion using Eq. (3). Exploiting this, we employ a search window that includes ±2 standard deviations of tilt motion, giving S = 2⌈2σr/δN⌉ + 1 pixels, where δN is the pixel spacing measured on the focal plane.
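To make the BMA step concrete, here is a minimal whole-pixel exhaustive search with the MAD metric (a simplified NumPy sketch; it omits the prefiltering, subpixel upsampling, and per-pixel vector interpolation described above, and the function and variable names are our own):

```python
import numpy as np

def block_match(frame, proto, B=15, S=11):
    """Whole-pixel exhaustive-search block matching with the mean absolute
    difference (MAD) metric. Returns a {(row, col): (dy, dx)} dictionary of
    motion vectors mapping prototype blocks to their best match in `frame`."""
    R = S // 2                                  # search radius in pixels
    H, W = proto.shape
    vectors = {}
    for y in range(R, H - B - R + 1, B):
        for x in range(R, W - B - R + 1, B):
            ref = proto[y:y + B, x:x + B]
            best_mad, best_mv = np.inf, (0, 0)
            for dy in range(-R, R + 1):
                for dx in range(-R, R + 1):
                    cand = frame[y + dy:y + dy + B, x + dx:x + dx + B]
                    mad = np.mean(np.abs(cand - ref))
                    if mad < best_mad:
                        best_mad, best_mv = mad, (dy, dx)
            vectors[(y, x)] = best_mv
    return vectors
```

For subpixel accuracy, both images would first be upsampled (e.g., with bicubic interpolation) and the recovered vectors scaled accordingly.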

With regard to block size, the larger the blocks are, the less sensitive the BMA is to noise and warping. However, with increased size, there tends to be increased underestimation of the true local motion from atmospheric tilt.36,37 Thus, a balance is required. The exact amount of underestimation will depend on the block size, the particular Cn^2(z) profile, and the optical parameters.36,37 Notwithstanding this, we have found that a fixed block size of B = 15 pixels is effective for the range of turbulence conditions used in the simulated imagery. Furthermore, our results show that the corresponding residual RMS tilt factor is approximately a constant β = 0.1 in the simulated imagery.

The next step of the BMWF method is to simply average the registered frames, as shown in Fig. 2. This gives rise to the result in Eq. (24). This step is important for two main reasons. First, it reduces noise and reduces the impact of any BMA errors. Second, by averaging the spatially varying blurring, it allows us to accurately model the resulting blur as spatially invariant,17 as shown in Eq. (24). This justifies the use of a spatially invariant deconvolution step. The deconvolution step is implemented here using a Wiener filter. The frequency response of the Wiener filter44 is given as

$$H_W(u,v) = \frac{H_\alpha^*(u,v)}{\left| H_\alpha(u,v) \right|^2 + \Gamma}, \tag{25}$$

where Γ represents a constant noise-to-signal (NSR) power spectral density ratio. The output, after applying the Wiener filter, can be expressed as

$$\hat{z}(x,y) = \mathrm{FT}^{-1}\left[ H_W(u,v)\,\bar{F}(u,v) \right], \tag{26}$$

where $\bar{F}(u,v) = \mathrm{FT}\{\bar{f}(x,y)\}$ and $\bar{f}(x,y)$ is given by Eq. (24). Note that FT(·) and FT^-1(·) represent the forward and inverse Fourier transforms, respectively. In practice, we are using sampled images and the fast Fourier transform (FFT) for implementing Eq. (26). Since we are assuming Nyquist-sampled images, the property of impulse invariance applies.45 The images are padded symmetrically to minimize ringing artifacts associated with the circular convolution that results from FFT products.
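The Wiener filtering step, including the symmetric padding, can be sketched as follows (NumPy; the PSF is passed in as a small centered kernel, and the padding width of 16 pixels is an arbitrary choice for illustration, not a value from the paper):

```python
import numpy as np

def wiener_restore(f_bar, psf, gamma, pad=16):
    """Wiener deconvolution per Eqs. (25) and (26): a simplified sketch.

    f_bar : fused (registered and averaged) image
    psf   : small, centered, spatially invariant blur kernel (odd size)
    gamma : constant noise-to-signal power ratio
    pad   : symmetric padding width to limit circular-convolution ringing
    """
    fp = np.pad(f_bar, pad, mode='symmetric')
    # Embed the kernel in a padded-size array and shift its center to (0, 0)
    # so that the FFT yields a zero-phase transfer function H_alpha.
    h = np.zeros_like(fp)
    ky, kx = psf.shape
    h[:ky, :kx] = psf
    h = np.roll(h, (-(ky // 2), -(kx // 2)), axis=(0, 1))
    H = np.fft.fft2(h)
    HW = np.conj(H) / (np.abs(H) ** 2 + gamma)          # Eq. (25)
    zp = np.real(np.fft.ifft2(HW * np.fft.fft2(fp)))    # Eq. (26)
    return zp[pad:-pad, pad:-pad]
```

In the BMWF system, psf would be a sampled version of hα(r) from Eq. (12) with α matched to the registration stage.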

Examples of the atmospheric OTF, Hα(ρ), from Eq. (11) are shown in Fig. 3. The optical system parameters corresponding to these plots are listed in Table 1. Figure 3(a) is for Cn^2 = 0.25×10^-15 m^-2/3 (r0 = 0.1097 m), and Fig. 3(b) is for Cn^2 = 1.00×10^-15 m^-2/3 (r0 = 0.0478 m). Also shown in Fig. 3 are the degradation OTFs multiplied by the Wiener filter transfer function in Eq. (25) for Γ = 0.0002 (the value used for the simulated and real data with 200 frames). Clearly, as α approaches 1 (equivalently, β approaches 0), the degradation OTF is more favorable to high spatial frequencies. The signal remains above the noise floor out to a higher spatial frequency. Consequently, the Wiener filter is able to provide gain out to a higher spatial frequency without being overwhelmed by noise. This greatly extends the effective restored system OTF. When the degradation OTF value falls below the noise floor, governed by Γ, the Wiener filter in Eq. (25) rolls off, as shown in Fig. 3. Note that with the illustrated NSR, the effective bandwidth of the sensor is nearly doubled going from α=0 (no registration or tilt correction) to α=1 (full tilt correction). Matching the degradation model to the level of registration is essential to exploiting the full benefits of the registration.

Fig. 3: Overall system OTFs with different α for the optical system parameters in Table 1 with (a) Cn^2 = 0.25×10^-15 m^-2/3 (r0 = 0.1097 m) and (b) Cn^2 = 1.00×10^-15 m^-2/3 (r0 = 0.0478 m). The system OTFs multiplied by the Wiener filter frequency responses with Γ = 0.0002 are also shown to illustrate the effective OTF with restoration. Note that as α increases, the OTF widens.

Another important thing to note from Fig. 3 is that the degradation OTF has no zeros up to the optical cutoff frequency. Thus, the blur degradation is theoretically invertible (barring noise and quantization). Given this, and the fact that we often can expect a low NSR because many frames are averaged, the computationally simple Wiener filter tends to give excellent performance. More complex deblurring methods may be warranted when additional ill-posed blurring functions and/or very high levels of noise are present. However, we observed negligible performance gains using more complex regularization-based image restoration methods in our experiments.

Estimating the Atmospheric Coherence Diameter

To define the degradation OTF in Eq. (11) and the corresponding Wiener filter in Eq. (25), we require the parameter r0. This can be measured using a scintillometer. However, in most practical imaging scenarios, it will not be known. Estimating this parameter from observed imagery is an active area of research.30,36,37 In some applications, it may be possible to set this parameter manually, based on subjective evaluation of the restoration results.

Here, we propose a method for estimating r0 from the BMA motion vectors used in the BMWF algorithm. Based on Eq. (3), it is clear that r0 is directly related to the warping motion in the observed imagery. The BMA motion vectors can give us an estimate of the warping motion. However, note that it is important to exclude any camera platform motion or within-scene motion when doing this. If the residual RMS tilt is βσr from Eq. (16), the reduction in tilt due to registration is (1−β)σr. Let the BMA single-axis RMS motion, converted from pixels to distance on the focal plane, be denoted as σBMA. Now we can estimate σr as σ̂r = σBMA/(1−β). Using this and Eq. (3), we obtain an estimate of the atmospheric coherence diameter as

$$\hat{r}_0 = \left[ \frac{0.4175\,(1-\beta)\,\lambda\, l}{\sigma_{\mathrm{BMA}}\, D^{1/6}} \right]^{6/5}. \tag{27}$$
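Eq. (27) is simply Eq. (3) inverted after scaling the measured RMS motion by 1/(1−β). A one-function sketch (plain Python; the variable names are ours):

```python
def estimate_r0(sigma_bma, wavelen, focal_len, aperture, beta=0.1):
    """Estimate the Fried parameter from the BMA one-axis RMS motion, Eq. (27).

    sigma_bma is the RMS BMA motion converted from pixels to distance on the
    focal plane; beta is the assumed residual RMS tilt factor after registration.
    """
    return (0.4175 * (1.0 - beta) * wavelen * focal_len
            / (sigma_bma * aperture ** (1.0 / 6.0))) ** (6.0 / 5.0)
```

A useful sanity check is the round trip: generating σBMA = (1−β)σr from Eq. (3) for a known r0 and feeding it back through this function recovers that r0 exactly.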

Experimental Results

In this section, we present a number of experimental results. These include results for both simulated and real SE image sequences. We also provide a comparison with state-of-the-art benchmark methods. One benchmark is the bispectral speckle imaging method.7,9,16 Our implementation uses apodization with 16×16 pixel tiles9 and incorporates local registration alignment with each tile, as this gave the best bispectrum method performance. Another benchmark is the method of Zhu and Milanfar.21 Our results for this method come from publicly available MATLAB® code, provided courtesy of the authors. It uses a B-spline nonrigid image registration (NRIR) of SE frames. This is followed by temporal regression to produce what the authors refer to as a near-diffraction-limited (NDL) image. Zhu and Milanfar21 suggest that blind deconvolution be applied to the NDL image. However, blind deconvolution code is not provided by those authors. Here, we have exact knowledge of the diffraction PSF, and, therefore, we apply a parameter-optimized Wiener filter to deconvolve diffraction-only blur from the NDL image. As an additional comparison, we also compute the temporal average of the NRIR frames (bypassing the NDL regression operation).

Simulated Data

The simulated data are generated using an anisoplanatic simulation tool described by Hardie et al.31 The optical parameters used are listed in Table 1, and the simulation parameters are listed in Table 2. Five different levels of turbulence are simulated, and some statistical parameters for these scenarios are listed in Table 3. Additive, independent Gaussian noise, with a standard deviation of ση = 2 digital units, is added to each simulated frame. The metric we use to evaluate the simulated data results is peak signal-to-noise ratio (PSNR). Our first set of results is for N = 200 temporally independent frames; then, we show results for N = 30 frames.
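For reference, the PSNR metric used below can be computed as follows (a standard definition; the paper does not state its peak value, so the 8-bit peak of 255 here is an assumption):

```python
import numpy as np

def psnr(truth, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB between a truth image and an estimate.

    The peak of 255 assumes 8-bit imagery; adjust for other bit depths.
    """
    mse = np.mean((np.asarray(truth, dtype=float)
                   - np.asarray(estimate, dtype=float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher PSNR indicates a restored image closer to the truth image.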

Table 2: Simulation parameters used in generating simulated frames.31

Table 3: Theoretical statistical parameters for the different levels of simulated atmospheric turbulence and related restoration parameters.
Results using 200 simulated frames

The PSNR results using 200 temporally independent frames are reported in Table 4. For the BMA, the search window size is set to S = 2⌈2σr/δN⌉ + 1 pixels. The block size is a constant B = 15 pixels. We report results for both whole-pixel BMA and subpixel BMA in Table 4. We use the theoretical r0 for each sequence for our OTF model. For the Wiener filter, we also report two sets of results: one where the optimum Γ and β are searched for and used, and another where fixed parameters are employed. The fixed NSR is Γ = ση^2/(100N), where N is the number of frames. The fixed residual tilt factors are β = 1.0 (α = 0.00) for the Wiener filter applied to the raw frame average (i.e., LE PSF), β = 0.5 (α = 0.75) for the Wiener filter applied to the global registration average, and β = 0.1 (α = 0.99) for the Wiener filter applied to the BMA registered average. It is interesting to see in Table 4 how the PSNR increases by incorporating different levels of registration before averaging. As might be expected, the highest PSNR values are obtained with the subpixel BMA registration. It is also clear that there is a big boost in performance from adding the Wiener filter. The best results in Table 4 are generally from the subpixel BMA + average + Wiener filter.

Table 4: PSNR (dB) results using 200 frames of simulated data with ση = 2.0. Note: the bold entries represent the highest PSNR for that level of turbulence.

An analysis of the Wiener filter performance, as a function of Γ and β, is provided in Fig. 4 for N = 200 and Cn^2 = 1.00×10^-15 m^-2/3. Figure 4(a) shows the PSNR surface for the Wiener filter applied to the BMA registered frame average. Note that the optimum β is near 0.1 and the optimum Γ is near 0.0002. This suggests that the BMA registration is about 90% effective in eliminating the RMS tilt (and 10% remains). A similar surface plot is provided in Fig. 4(b) for the globally registered frame average (i.e., no BMA). Here, the optimum β is near 0.5, suggesting that the global registration is about 50% effective in eliminating the RMS tilt. Finally, Fig. 4(c) is for no registration at all. Here, the optimum β approaches 1.0, as would be expected for an LE image. This analysis shows that the β parameter should be matched to the level of tilt correction provided by the registration.

Fig. 4: Wiener filter parameter optimization using 200 simulated frames with Cn^2 = 1.00×10^-15 m^-2/3. Wiener filter operating on: (a) the BMA registered frame average, (b) the globally registered frame average, and (c) the unregistered frame average. The highest PSNR is marked with a red “x.”

An analysis of the BMA parameters is shown in Fig. 5 for N = 200 and Cn^2 = 1.00×10^-15 m^-2/3. In particular, this plot shows the PSNR values as a function of B and S. Here, one can see that the optimum block size is near B = 15, and the optimum search window size does not increase beyond S = 11. It is clear that small block sizes give much lower PSNRs. This is likely due to an insufficient amount of information for accurate matching, given the atmospheric degradations. Also, one can see that larger search windows generally do not hurt performance, but they do add to the computational load.

Fig. 5: BMA parameter optimization using 200 simulated frames with Cn^2 = 1.00×10^-15 m^-2/3.

Figure 6 shows the system PSNR as a function of the number of input frames for Cn^2 = 1.00×10^-15 m^-2/3. Performance increases dramatically for the first 30 frames or so and then becomes more incremental. However, additional frames continue to improve performance. Note that the curve is not monotonically increasing. The drops are likely due to the introduction of frames with large shifts relative to the truth image.

Fig. 6: Restoration performance versus the number of input frames for Cn^2 = 1.00×10^-15 m^-2/3.

Estimation of the atmospheric coherence diameter is illustrated in Fig. 7. The continuous curve shows the relationship from Eq. (3). The five simulation turbulence levels are shown with blue circles. The red squares show the estimated parameters from Eq. (27), using the BMA motion vectors with the parameters in Table 3, and β=0.1. This result appears to show a promising level of agreement between the estimates and true r0 values.

Fig. 7: Simulation results for the atmospheric coherence diameter (Fried parameter) estimation from the BMA motion vectors using Eq. (27). Here, the N=200 simulated frames are used at the five Cn2 levels. The same BMA parameters are used as shown in Table 3, with β=0.1.

Let us now turn our attention to image results. The truth image is shown in Fig. 8. Several output images, formed using N=200 and Cn2=0.25×10^−15 m^−2/3, are shown in Fig. 9. Figure 9(a) shows a single raw frame. Figure 9(b) shows the temporal frame average with no registration. The subpixel BMA registered frame average is shown in Fig. 9(c). Finally, the subpixel BMA + average + Wiener filter output is shown in Fig. 9(d). Here, the fixed-parameter Wiener filter is used. Note that the temporal average in Fig. 9(b) is rather blurry, as it is effectively equivalent to the true image corrupted with the LE PSF. The BMA registered average has corrected geometry and a blur level comparable to that of the observed SE frames. By matching the PSF to the BMA registered average, excellent results are possible, as shown in Fig. 9(d).
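The register-then-average step behind Figs. 9(b) and 9(c) can be illustrated with a toy version that undoes known integer shifts. This uses global shifts and np.roll as a stand-in for the per-block, interpolation-based warp actually used in the paper:

```python
import numpy as np

def register_and_average(frames, shifts):
    """Undo a known integer (dy, dx) shift for each frame, then average.
    A whole-frame, integer-shift stand-in for per-block subpixel warping."""
    acc = np.zeros_like(frames[0], dtype=float)
    for frame, (dy, dx) in zip(frames, shifts):
        acc += np.roll(frame, (-dy, -dx), axis=(0, 1))
    return acc / len(frames)
```

Without the shift compensation, the same average smears the scene over the motion spread, which is exactly the LE-like blur seen in Fig. 9(b).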

Fig. 8: Truth image used for the simulated image sequence.

Fig. 9: Restoration results using N=200 frames with Cn2=0.25×10^−15 m^−2/3. (a) Raw frame 1, (b) raw frame average (no registration), (c) BMA registered frame average, and (d) BMA (sub) + average + Wiener filter output. Video 1 (MOV, 824 KB [URL: http://dx.doi.org/10.1117/1.OE.56.7.071503.1]) shows the raw frames on the left and BMA on the right.

A similar set of results is shown in Fig. 10. These images are the same as in Fig. 9, except that the turbulence is increased to Cn2=1.00×10^−15 m^−2/3. The raw SE frame is noticeably more corrupted than in Fig. 9. Also, the blurring in Figs. 10(b) and 10(c) is far more pronounced than in the corresponding images in Fig. 9. However, despite the increased turbulence, the subpixel BMA + average + Wiener output in Fig. 10(d) maintains much of the original detail. This is a consequence of having a very high signal-to-noise ratio, by virtue of the large number of input frames, and of an effective match between the PSF model and the blur in the BMA registered average.

Fig. 10: Restoration results using N=200 frames with Cn2=1.00×10^−15 m^−2/3. (a) Raw frame 1, (b) raw frame average (no registration), (c) BMA registered frame average, and (d) BMA (sub) + average + Wiener filter output. Video 2 (MOV, 816 KB [URL: http://dx.doi.org/10.1117/1.OE.56.7.071503.2]) shows the raw frames on the left and BMA on the right.

Results using 30 simulated frames

The next set of results is for the restoration methods using N=30 input frames. The quantitative PSNR results are shown in Table 5. The results are similar to those in Table 4, but as expected, with fewer frames the PSNR values drop somewhat. The best results in Table 5 are for the subpixel BMA + average + Wiener filter, and these results are significantly better than those of the benchmark methods.

Table 5: PSNR (dB) results using 30 frames of simulated data with ση=2.0. (Bold entries represent the highest PSNR for that level of turbulence.)

To allow for a subjective comparison of the proposed method and the benchmark methods, output images from several methods are shown in Fig. 11 for the N=30 frame input, with Cn2=1.00×10^−15 m^−2/3. Figure 11(a) shows the temporal average, followed by the Wiener filter, using the LE PSF model. Figure 11(b) shows the NRIR + NDL image from Zhu and Milanfar,21 followed by the Wiener filter using the diffraction-only PSF model. Figure 11(c) shows the bispectral speckle image output.7,9,16 Finally, Fig. 11(d) shows the BMA + average + Wiener filter output, with subpixel BMA and fixed-parameter Wiener filter. The result in Fig. 11(a) is limited because no tilt correction is used, and the LE PSF is used in the Wiener filter. The NRIR + NDL + Wiener filter image in Fig. 11(b) provides improved results, but some areas remain highly blurred and there appear to be some artifacts at the edges. The bispectrum output in Fig. 11(c) also looks better than the LE restoration in (a) but is fundamentally limited by its use of the LE PSF model in recovering the magnitude frequency spectrum. The bispectrum method also tends to suffer from tiling artifacts when treating high levels of turbulence, as can be seen in Fig. 11(c). Processing without using tiles eliminates the tiling artifacts but leads to a lower quantitative performance (hence the use of tiles here). The subpixel BMA + average + Wiener filter output in Fig. 11(d) appears to have the best overall detail, with no major artifacts. This is supported by the quantitative analysis in Table 5.

Fig. 11: Restoration results using 30 frames with Cn2=1.00×10^−15 m^−2/3. (a) Raw frame average (no registration) + Wiener filter, (b) NRIR + NDL21 + Wiener (diffraction), (c) bispectral speckle imaging method,7,9,16 and (d) BMA (sub) + average + Wiener filter output.

The processing time for the various algorithms and their components is provided in Table 6. Note that the proposed method has a significantly shorter run time than the benchmark methods using our MATLAB® implementations. However, run times with other implementations may differ. For the bispectral imaging method, processing time can be reduced by reducing the number of tiles and eliminating tile-based registration. Furthermore, hardware acceleration can be employed to speed up the multidimensional FFTs used with this method.

Table 6: Algorithm run times for processing 30 simulated 257×257 pixel frames to produce a single output frame. Processing was done with MATLAB® 2016a using a PC with an Intel(R) Xeon(R) CPU E5-2620 v3 at 2.40 GHz, 16 GB RAM, running Windows 10. For the BMA method, S=11, B=15, and 3× subpixel BMA.
Real Data

Our final set of experimental results uses a real-image sequence, acquired from a tower, of a truck and an engineering resolution target at a distance of 5 km. The resolution target is made up of a sequence of vertical and horizontal three-line groups. The five large groups on the right side have bars with the following widths: 7.00, 6.24, 5.56, 4.95, and 4.91 cm. The optical parameters for this sensor are listed in Table 7. The sensor sampling is very close to Nyquist, so the Wiener filter is evaluated and implemented at the pixel pitch of the sensor (i.e., no resampling of the imagery is performed). A scintillometer is used to provide an estimate of r0, as shown in Table 7. This value has been confirmed by analysis of an edge target, imaged within the larger field of view of the camera. Assuming a constant Cn2(z) profile, the isoplanatic angle, when converted to pixels, is only 0.25 pixels. This gives rise to warping that is highly uncorrelated at small spatial scales, which makes BMA registration somewhat less effective than in the simulated data. For this reason, we have chosen to use a residual RMS tilt factor of β=0.4 (compared with β=0.1 for the simulated data). An estimate of the atmospheric coherence diameter using Eq. (27), for β=0.4, is shown in Table 7.

Table 7: Optical parameters for the real sensor data.

The image results using the real data are shown in Fig. 12. Figure 12(a) shows raw frame 1. The NRIR + NDL21 + Wiener filter output using N=30 input frames is shown in Fig. 12(b). The bispectrum output7,9,16 is shown in Fig. 12(c), also for N=30 and using the scintillometer r0. The 30 frame average + Wiener filter using the LE PSF with scintillometer r0 is shown in Fig. 12(d). The subpixel BMA + average + Wiener output is shown for N=30 and N=200 in Figs. 12(e) and 12(f), respectively. Here, we use the r^0 estimated from the BMA, with β=0.4 and Γ=0.0002. Note that the results obtained using the scintillometer r0 are very similar. The BMWF results appear to provide the best overall subjective quality and recover the resolution target lines notably better than the benchmark methods. We attribute this to a reduction in tilt blurring, by means of the BMA registration, and by the proper matching of the PSF model to the residual tilt blurring.

Fig. 12: Restoration results using the real-image sequence. (a) First raw frame, (b) 30 frame NRIR + NDL21 + Wiener (diffraction) output, (c) 30 frame bispectral speckle imaging output,7,9,16 (d) 30 frame average + Wiener filter output, (e) 30 frame BMA (sub) + average + Wiener filter output, and (f) 200 frame BMA (sub) + average + Wiener filter output. Video 3 (MOV, 507 KB [URL: http://dx.doi.org/10.1117/1.OE.56.7.071503.3]) shows the raw frames on the top and BMA on the bottom.

Conclusions

We have presented a block-matching and Wiener filter-based approach to optical turbulence mitigation. In addition to the restoration method, we have also presented a method for estimating the atmospheric coherence diameter from the BMA motion vectors. We demonstrate the efficacy of this method quantitatively, using simulated data from a simulation tool developed by one of the authors. Results using real data are also provided for evaluation. The proposed restoration method utilizes a parametric OTF model for atmospheric turbulence and diffraction that incorporates the level of tilt correction provided by the registration step. By matching the PSF model to the level of registration, improved results are possible, as shown in Fig. 4. For the BMA component of our algorithm, we present a few innovations. For example, we use a search window size determined by the theoretical RMS tilt, when r0 is available. The BMA also uses a prefilter on the raw frames, so they better match the prototype in spatial frequency content. Compared with benchmark methods, the proposed method provides the highest PSNR restorations in our study, as shown in Tables 4 and 5.

We quantify the level of registration tilt correction by what we term the residual RMS tilt factor, β, or equivalently, the tilt variance reduction factor, α = 1 − β^2. Recall that β is defined such that the residual RMS tilt after registration is βσr, where σr is the theoretical uncorrected RMS tilt given in Eq. (3). Given α, r0, and the optical system parameters, the degradation OTF model is given by Eq. (11) and the Wiener filter is given by Eq. (25). We have demonstrated that α can have a significant impact on the degradation OTF and the corresponding restored image OTF, as shown in Figs. 1 and 3, respectively.
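The conversion between the two parameterizations, α = 1 − β², is a one-liner (a trivial sketch; the function name is ours):

```python
def tilt_variance_reduction(beta):
    """Tilt variance reduction factor alpha = 1 - beta**2, where beta is
    the residual RMS tilt factor remaining after registration."""
    return 1.0 - beta ** 2

# beta = 0.1 (simulated data) gives alpha = 0.99;
# beta = 0.4 (real data) gives alpha = 0.84.
```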

In cases where r0 is known, it is possible to infer β from Eq. (27) using the BMA motion vectors. If one does not have knowledge of r0, it can be estimated from Eq. (27) using an assumed β and the BMA motion vectors. Thus, a complete restoration can be achieved with only one assumed unknown parameter, β (or α). Note that this parameter is linked to the BMA block size B, the camera parameters, and the Cn2(z) profile. We achieved excellent results using a constant B=15 pixels, assuming a corresponding β=0.1 for the simulated data and β=0.4 for the real data studied here. In practice, it may be possible to perform a search over β and evaluate the results subjectively or by some other metric.

Acknowledgments

The authors would like to thank Dr. Doug Droege at L-3 Communications Cincinnati Electronics for providing support for this project. We would also like to thank Matthew D. Howard at AFRL for assisting with data collection. This work has been supported in part by funding from L-3 Communications Cincinnati Electronics and under AFRL Award Nos. FA8650-10-2-7028 and FA9550-14-1-0244. It has been approved for public release (PA Approval # 88ABW-2016-4934).

References

1. R. E. Hufnagel and N. R. Stanley, “Modulation transfer function associated with image transmission through turbulent media,” J. Opt. Soc. Am. 54, 52–61 (1964).
2. D. L. Fried, “Optical resolution through a randomly inhomogeneous medium for very long and very short exposures,” J. Opt. Soc. Am. 56, 1372 (1966).
3. M. Roggemann and B. Welsh, Imaging Through Turbulence, CRC Press, Boca Raton, Florida (1996).
4. R. Tyson, Principles of Adaptive Optics, Academic Press, Boston, Massachusetts (1998).
5. A. W. M. van Eekeren et al., “Turbulence compensation: an overview,” Proc. SPIE 8355, 83550Q (2012).
6. T. W. Lawrence et al., “Experimental validation of extended image reconstruction using bispectral speckle interferometry,” Proc. SPIE 1237, 522–537 (1990).
7. T. W. Lawrence et al., “Extended-image reconstruction through horizontal path turbulence using bispectral speckle interferometry,” Opt. Eng. 31(3), 627–636 (1992).
8. C. L. Matson et al., “Multiframe blind deconvolution and bispectrum processing of atmospherically degraded data: a comparison,” Proc. SPIE 4792, 55–66 (2002).
9. C. J. Carrano, “Speckle imaging over horizontal paths,” Proc. SPIE 4825, 109–120 (2002).
10. C. J. Carrano, “Anisoplanatic performance of horizontal-path speckle imaging,” Proc. SPIE 5162, 14–27 (2003).
11. J. P. Bos and M. C. Roggemann, “Mean squared error performance of speckle-imaging using the bispectrum in horizontal imaging applications,” Proc. SPIE 8056, 805603 (2011).
12. J. P. Bos and M. C. Roggemann, “The effect of free parameter estimates on the reconstruction of turbulence corrupted images using the bispectrum,” Proc. SPIE 8161, 816105 (2011).
13. J. P. Bos and M. C. Roggemann, “Robustness of speckle-imaging techniques applied to horizontal imaging scenarios,” Opt. Eng. 51(8), 083201 (2012).
14. J. P. Bos and M. C. Roggemann, “Blind image quality metrics for optimal speckle image reconstruction in horizontal imaging scenarios,” Opt. Eng. 51(10), 107003 (2012).
15. G. E. Archer, J. P. Bos, and M. C. Roggemann, “Reconstruction of long horizontal-path images under anisoplanatic conditions using multiframe blind deconvolution,” Opt. Eng. 52(8), 083108 (2013).
16. G. E. Archer, J. P. Bos, and M. C. Roggemann, “Comparison of bispectrum, multiframe blind deconvolution and hybrid bispectrum-multiframe blind deconvolution image reconstruction techniques for anisoplanatic, long horizontal-path imaging,” Opt. Eng. 53(4), 043109 (2014).
17. D. Fraser, G. Thorpe, and A. Lambert, “Atmospheric turbulence visualization with wide-area motion-blur restoration,” J. Opt. Soc. Am. A 16, 1751–1758 (1999).
18. D. H. Frakes, J. W. Monaco, and M. J. T. Smith, “Suppression of atmospheric turbulence in video using an adaptive control grid interpolation approach,” in IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP 2001), Vol. 3, pp. 1881–1884 (2001).
19. D. Li, R. M. Mersereau, and S. Simske, “Atmospheric turbulence-degraded image restoration using principal components analysis,” IEEE Geosci. Remote Sens. Lett. 4, 340–344 (2007).
20. X. Zhu and P. Milanfar, “Image reconstruction from videos distorted by atmospheric turbulence,” Proc. SPIE 7543, 75430S (2010).
21. X. Zhu and P. Milanfar, “Removing atmospheric turbulence via space-invariant deconvolution,” IEEE Trans. Pattern Anal. Mach. Intell. 35, 157–170 (2013).
22. R. C. Hardie and D. R. Droege, “Atmospheric turbulence correction for infrared video,” in Proc. of the Military Sensing Symposia (MSS), Passive Sensors, Orlando, Florida (2009).
23. R. C. Hardie, D. R. Droege, and K. M. Hardin, “Real-time atmospheric turbulence correction for complex imaging conditions,” in Proc. of the Military Sensing Symposia (MSS), Passive Sensors, Orlando, Florida (2010).
24. R. C. Hardie, D. R. Droege, and K. M. Hardin, “Real-time atmospheric turbulence with moving objects,” in Proc. of the Military Sensing Symposia (MSS), Passive Sensors, Orlando, Florida (2011).
25. R. C. Hardie et al., “Real-time video processing for simultaneous atmospheric turbulence mitigation and super-resolution and its application to terrestrial and airborne infrared imaging,” in Proc. of the Military Sensing Symposia (MSS), Passive Sensors, Pasadena, California (2012).
26. D. R. Droege et al., “A real-time atmospheric turbulence mitigation and super-resolution solution for infrared imaging systems,” Proc. SPIE 8355, 83550R (2012).
27. A. W. M. van Eekeren et al., “Patch-based local turbulence compensation in anisoplanatic conditions,” Proc. SPIE 8355, 83550T (2012).
28. A. W. M. van Eekeren, J. Dijk, and K. Schutte, “Turbulence mitigation methods and their evaluation,” Proc. SPIE 9249, 92490O (2014).
29. K. K. Halder, M. Tahtali, and S. G. Anavatti, “Geometric correction of atmospheric turbulence-degraded video containing moving objects,” Opt. Express 23, 5091–5101 (2015).
30. F. Molina-Martel, R. Baena-Gall, and S. Gladysz, “Fast PSF estimation under anisoplanatic conditions,” Proc. SPIE 9641, 96410I (2015).
31. R. C. Hardie et al., “Simulation of anisoplanatic imaging through optical turbulence using numerical wave propagation,” Opt. Eng. (2017) (to appear).
32. J. D. Schmidt, Numerical Simulation of Optical Wave Propagation with Examples in MATLAB, SPIE Press, Bellingham, Washington (2010).
33. F. G. Smith, The Infrared and Electro-Optical Systems Handbook, Volume 2: Atmospheric Propagation of Radiation, SPIE Press, Bellingham, Washington (1993).
34. J. W. Goodman, Introduction to Fourier Optics, 3rd ed., Roberts and Company Publishers, Greenwood Village, Colorado (2004).
35. M. I. Charnotskii, “Anisoplanatic short-exposure imaging in turbulence,” J. Opt. Soc. Am. A 10, 492–501 (1993).
36. S. Basu, J. E. McCrae, and S. T. Fiorino, “Estimation of the path-averaged atmospheric refractive index structure constant from time-lapse imagery,” Proc. SPIE 9465, 94650T (2015).
37. J. E. McCrae, S. Basu, and S. T. Fiorino, “Estimation of atmospheric parameters from time-lapse imagery,” Proc. SPIE 9833, 983303 (2016).
38. L. Yaroslavsky et al., “Superresolution in turbulent videos: making profit from damage,” Opt. Lett. 32, 3038–3040 (2007).
39. L. P. Yaroslavsky et al., “Super-resolution of turbulent video: potentials and limitations,” Proc. SPIE 6812, 681205 (2008).
40. B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in Int. Joint Conf. on Artificial Intelligence, Vancouver (1981).
41. R. C. Hardie, K. J. Barnard, and R. Ordonez, “Fast super-resolution with affine motion using an adaptive Wiener filter and its application to airborne imaging,” Opt. Express 19, 26208–26231 (2011).
42. R. C. Hardie and K. J. Barnard, “Fast super-resolution using an adaptive Wiener filter with robustness to local motion,” Opt. Express 20, 21053–21073 (2012).
43. Y.-W. Huang et al., “Survey on block matching motion estimation algorithms and architectures with new results,” J. VLSI Signal Process. Syst. 42, 297–320 (2006).
44. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed., Prentice-Hall, Inc., Upper Saddle River, New Jersey (2006).
45. A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing, 3rd ed., Prentice-Hall, Inc., Upper Saddle River, New Jersey (2010).

Russell C. Hardie is a full professor in the Department of Electrical and Computer Engineering at the University of Dayton, with a joint appointment in the Department of Electro-Optics. He received the University of Dayton’s top university-wide teaching award, the 2006 Alumni Award in Teaching. He received the Rudolf Kingslake Medal and Prize from SPIE in 1998 for super-resolution research. He also received the School of Engineering Award of Excellence in Teaching in 1999 and the first annual Professor of the Year Award in 2002 from the student chapter of the IEEE.

Michael A. Rucci is a research engineer at the Air Force Research Laboratory, Wright-Patterson AFB, Ohio. His current research includes day/night passive imaging, turbulence modeling and simulation, and image processing. He received his MS and BS degrees in electrical engineering from the University of Dayton in 2014 and 2012, respectively.

Alexander J. Dapore is a senior image processing engineer at L-3 Cincinnati Electronics. He received his BSEE and MSEE degrees from the University of Illinois at Urbana–Champaign in 2008 and 2010, respectively. He has worked on research and development projects in many areas of digital image processing. His specific areas of interest are image restoration, image enhancement, object/threat detection and tracking, multiview computer vision, and the real-time implementation of digital image processing algorithms using general-purpose computing on graphics processing units.

Barry K. Karch is a principal research electronics engineer at the Multispectral Sensing and Detection Division, Sensors Directorate of the Air Force Research Laboratory, Wright-Patterson AFB, Ohio. He received his BS degree in electrical engineering in 1987, his MS degrees in electro-optics and electrical engineering in 1992 and 1994, respectively, and his PhD in electrical engineering in 2015, all from the University of Dayton, Dayton, Ohio. He has worked in EO/IR remote sensor system and processing development for 29 years.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.

Citation

Russell C. Hardie, Michael A. Rucci, Alexander J. Dapore, and Barry K. Karch, “Block matching and Wiener filtering approach to optical turbulence mitigation and its application to simulated and real imagery with quantitative error analysis,” Opt. Eng. 56(7), 071503 (Feb 06, 2017). http://dx.doi.org/10.1117/1.OE.56.7.071503


Figures

Fig. 1: Overall system PSFs with different α for the optical system parameters in Table 1 with (a) Cn2=0.25×10^−15 m^−2/3 (r0=0.1097 m) and (b) Cn2=1.00×10^−15 m^−2/3 (r0=0.0478 m). Note that as α increases, the PSF narrows.

Fig. 2: Proposed BMWF turbulence mitigation system block diagram.

Fig. 3: Overall system OTFs with different α for the optical system parameters in Table 1 with (a) Cn2=0.25×10^−15 m^−2/3 (r0=0.1097 m) and (b) Cn2=1.00×10^−15 m^−2/3 (r0=0.0478 m). The system OTFs multiplied by the Wiener filter frequency responses with Γ=0.0002 are also shown to illustrate the effective OTF with restoration. Note that as α increases, the OTF widens.

Tables

Table 1: Optical parameters for simulated data.

Table 2: Simulation parameters used in generating simulated frames.31

Table 3: Theoretical statistical parameters for the different levels of simulated atmospheric turbulence and related restoration parameters.

Table 4: PSNR (dB) results using 200 frames of simulated data with ση=2.0. (Bold entries represent the highest PSNR for that level of turbulence.)

Huang  Y.-W.  et al., “Survey on block matching motion estimation algorithms and architectures with new results,” J. VLSI Signal Process. Syst.. 42, , 297 –320 (2006).CrossRef
Gonzalez  R. C., and Woods  R. E., Digital Image Processing. , 3rd ed.,  Prentice-Hall, Inc. ,  Upper Saddle River, New Jersey  (2006).
Oppenheim  A. V., and Schafer  R. W., Discrete-Time Signal Processing. , 3rd ed.,  Prentice-Hall, Inc. ,  Upper Saddle River, New Jersey  (2010).