Special Section on Active Electro-Optical Sensing: Phenomenology, Technology, and Applications

Deep-turbulence wavefront sensing using digital-holographic detection in the off-axis image plane recording geometry

Author Affiliations
Mark F. Spencer

Air Force Research Laboratory, Directed Energy Directorate, 3550 Aberdeen Avenue Southeast, Kirtland Air Force Base, New Mexico 87111, United States

Air Force Institute of Technology, Department of Engineering Physics, 2950 Hobson Way, Wright Patterson Air Force Base, Ohio 45433, United States

Robert A. Raynor, Dan K. Marker

Air Force Research Laboratory, Directed Energy Directorate, 3550 Aberdeen Avenue Southeast, Kirtland Air Force Base, New Mexico 87111, United States

Matthias T. Banet

New Mexico Institute of Mining and Technology, Physics Department, 801 Leroy Place, Socorro, New Mexico 87801, United States

Opt. Eng. 56(3), 031213 (Oct 31, 2016). doi:10.1117/1.OE.56.3.031213
History: Received August 1, 2016; Accepted October 6, 2016

Open Access

Abstract.  This paper develops wave-optics simulations which explore the estimation accuracy of digital-holographic detection for wavefront sensing in the presence of distributed-volume or “deep” turbulence and detection noise. Specifically, the analysis models spherical-wave propagation through varying deep-turbulence conditions along a horizontal propagation path and formulates the field-estimated Strehl ratio as a function of the diffraction-limited sampling quotient and signal-to-noise ratio. Such results will allow the reader to assess the number of pixels, pixel field of view, pixel-well depth, and read-noise standard deviation needed from a focal-plane array when using digital-holographic detection in the off-axis image plane recording geometry for deep-turbulence wavefront sensing.


Digital-holographic detection shows distinct potential for applications that involve wavefront sensing in the presence of deep turbulence. As shown in Fig. 1, the use of digital-holographic detection in the off-axis image plane recording geometry (IPRG) provides access to an estimate of the amplitude and wrapped phase (i.e., the complex field) that exist in the exit-pupil plane of the imaging system. From the complex-field estimate, we can then pursue a multitude of applications such as atmospheric characterization,1 free-space laser communications,2 and adaptive-optics phase compensation.3

Fig. 1

A description of digital-holographic detection in the off-axis IPRG. Here, a highly coherent master-oscillator (MO) laser is split into two optical trains. The first optical train actively illuminates an unresolved cooperative object. Analogously, the second optical train creates an off-axis local oscillator (LO), so that tilted-spherical-wave illumination is incident on a focal-plane array (FPA). The spherical-wave reflections from the unresolved cooperative object then back propagate through deep-turbulence conditions and, upon being imaged onto the FPA, coherently interfere with the tilted-spherical-wave illumination from the off-axis LO. In turn, the recorded interference pattern on the FPA is known as a digital hologram, and upon taking a 2-D IFFT, we can obtain an estimate of the wrapped phase (and amplitude) that exists in the exit-pupil plane of the imaging system.

The published literature often makes use of digital-holographic detection in the off-axis pupil plane or on-axis phase shifting recording geometries;4 however, in terms of simplicity, the off-axis IPRG shown in Fig. 1 offers a nice combination of functionality.5 For instance, when considering digital-holographic detection for applications that involve deep-turbulence wavefront sensing, the off-axis IPRG allows for the following multifunction capabilities.

  • Incoherent imaging through passive illumination of an object.
  • Coherent imaging through active illumination of an object.
  • Digital-holographic detection through the interference of a signal with a reference.
  • Estimation of the amplitude and wrapped phase via a two-dimensional (2-D) inverse fast Fourier transform (IFFT) of the hologram irradiance recorded on the focal-plane array (FPA).
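The four steps above can be sketched end to end in a few lines. The following Python/NumPy sketch (the paper itself uses MATLAB® with the AOTools and WaveProp toolboxes) interferes a toy image-plane signal with a tilted reference, records the hologram irradiance, and recovers the off-axis term with a 2-D IFFT and a window. The grid size, Gaussian signal, carrier frequency, and reference strength are all illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Minimal sketch of off-axis image-plane digital holography. The grid size,
# Gaussian signal, carrier (tilt) frequency, and reference strength are
# illustrative assumptions, not the paper's parameters.
N = 256                                    # FPA pixels per side
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)

# Toy image-plane signal: a smooth spot with a mild phase aberration.
signal = np.exp(-(X**2 + Y**2) / (2 * 30.0**2)) * np.exp(1j * 0.02 * X * Y / 30.0)

# Off-axis (tilted) local-oscillator reference, with |A_R| >> |A_S|.
fc = N // 4                                # carrier at a quarter of the band
reference = 10.0 * np.exp(2j * np.pi * fc * (X + Y) / N)

# Record the hologram irradiance on the FPA (a real-valued fringe pattern).
hologram = np.abs(signal + reference) ** 2

# The 2-D IFFT moves to the Fourier plane; the cross term carrying the
# signal appears as an off-axis lobe at the carrier frequency.
F = np.fft.fftshift(np.fft.ifft2(hologram - hologram.mean()))

# Window the off-axis lobe and re-center it: the complex-field estimate.
estimate = np.roll(F, (-fc, -fc), axis=(0, 1))[N//2 - N//8:N//2 + N//8,
                                               N//2 - N//8:N//2 + N//8]
```

Up to a complex scale factor set by the reference, the windowed lobe reproduces the Fourier-plane (exit-pupil) field of the signal.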

From a beam-control standpoint,6 the multifunction capabilities listed above allow for a robust user interface which is not limited to wavefront sensing in the presence of an unresolved cooperative object (cf. Fig. 1). In practice, digital-holographic detection allows for the estimation of the complex field in the presence of an extended noncooperative object via speckle-averaging and image-sharpening algorithms or the angular diversity created by using multiple transmitters and receivers.7–18 This versatility allows for long-range imaging,19 three-dimensional imaging,20 laser radar,21 and synthetic-aperture imaging.22 In general, the applications are abundant.23,24

With wavefront-sensing applications in mind, the presence of deep turbulence tends to be the “Achilles’ heel” of modern-day solutions [e.g., the Shack–Hartmann wavefront sensor (WFS),25 which provides access to localized wavefront slope estimates]. This is said because coherent-light propagation through deep turbulence causes scintillation, which manifests as time-varying constructive and destructive interference between the object and receiver planes. The log-amplitude variance, which is also referred to as the Rytov number, gives a measure for the strength of the scintillation experienced by the coherent light. As the log-amplitude variance grows above 0.25 (for a spherical wave), total-destructive interference gives rise to branch points in both the coherent light transmitted to the object and the coherent light received from the object. These branch points add a rotational component to the phase function that traditional least-squares phase-reconstruction algorithms cannot account for within the analysis. As such, the rotational component is often referred to as the “hidden phase” due to the foundational work of Fried.26

In converting local wavefront slope estimates into unwrapped phase, the hidden phase gets mapped to the null space of traditional least-squares phase-reconstruction calculations. In turn, the unwrapped phase (i.e., the irrotational component) does not contain the branch points and associated branch cuts, which are unavoidable 2π phase discontinuities within the phase function.27 Note that branch-point-tolerant phase-reconstruction algorithms do exist within the published literature;28–31 however, the performance of these algorithms needs to be quantified in hardware.32

In addition to causing scintillation, the horizontal, low-altitude, and long-range propagation paths that are characteristic of deep-turbulence conditions can also lead to increased extinction. This outcome results in reduced transmittance due to molecular and aerosol absorption and scattering all along the propagation path.33,34 In turn, we can concisely say that scintillation and extinction simply lead to low signal-to-noise ratios (SNRs) when performing deep-turbulence wavefront sensing. This is said because scintillation and extinction result in total-destructive interference and light-efficiency losses, respectively, over the field of view (FOV) of the WFS.

Provided enough signal, there are interferometric wavefront-sensing techniques that perform well in the presence of deep turbulence (e.g., the point-diffraction and self-referencing interferometers,35,36 which create a reference by amplitude splitting and spatially filtering the received signal); however, in using these techniques, we cannot realistically approach a shot-noise-limited detection regime. In turn, digital-holographic detection offers a distinct way forward to combat the low SNRs caused by scintillation and extinction. In using digital-holographic detection, we can set the strength of the reference so that it boosts the signal above the read-noise floor of the FPA.

This paper explores the estimation accuracy of digital-holographic detection in the off-axis IPRG for wavefront sensing in the presence of deep turbulence and detection noise. As shown in Fig. 1, the analysis uses an ideal point-source beacon in the object plane to represent the active illumination of an unresolved cooperative object. The resulting spherical wave propagates along a horizontal propagation path through the deep-turbulence conditions that are of interest in this paper. In what follows, Sec. 2 reviews the setup and exploration of the problem space described above in Fig. 1. Section 3 then provides results with discussion, and Sec. 4 concludes this paper. Before moving on to the next section, it is important to note that much of the simulation framework used in this paper originates from an earlier conference paper by Spencer et al.37 It is our belief that this paper greatly extends the work contained in Ref. 37 by including the deleterious effects of detection noise within the analysis.

This section discusses the setup and exploration needed for a series of computational wave-optics experiments which identify the performance of digital-holographic detection in the off-axis IPRG for wavefront sensing in the presence of deep turbulence and detection noise. The analysis uses many of the principles taught by Schmidt and Voelz in relatively recent SPIE Press publications.38,39 In addition, the analysis uses MATLAB® with the help of AOTools and WaveProp.40,41 The Optical Sciences Company (tOSC) created these robust MATLAB® toolboxes specifically for wave-optics simulations of this nature.

As shown in Fig. 1, the goal for the following analysis is to model digital-holographic detection in the off-axis IPRG for the purposes of deep-turbulence wavefront sensing. With Fig. 1 in mind, we need to further define the experimental parameter space. To help orient the reader, Fig. 2 pictorially shows the various planes of interest within the analysis. Note that the entrance-pupil plane effectively collimates the propagated light from the object plane, whereas the exit-pupil plane effectively focuses the propagated light to form the image plane at focus.

Fig. 2

A description of the experimental parameter space used within the computational wave optics experiments.

Model Setup and Exploration

Provided Figs. 1 and 2, we can determine the 2-D Fourier transformation of the hologram photoelectron density DH(x2,y2) as

$$\begin{aligned}\tilde{D}_H\!\left(\frac{x_1}{\lambda f},\frac{y_1}{\lambda f}\right)={}&\frac{\eta T}{h\nu}\,w_x\,\mathrm{sinc}\!\left(\frac{w_x}{\lambda f}x_1\right)w_y\,\mathrm{sinc}\!\left(\frac{w_y}{\lambda f}y_1\right)\left[\frac{1}{\lambda^2 f^2}\,U_S(x_1,y_1)\star U_S^*(x_1,y_1)+|A_R|^2\,\delta(x_1)\,\delta(y_1)\right.\\&\left.+\frac{A_R^*\,e^{jkf}}{j\lambda f}\,U_S(x_1-x_R,y_1-y_R)-\frac{A_R\,e^{-jkf}}{j\lambda f}\,U_S^*(x_1+x_R,y_1+y_R)\right]\\&*\,\mathrm{comb}\!\left(\frac{x_s}{\lambda f}x_1\right)\mathrm{comb}\!\left(\frac{y_s}{\lambda f}y_1\right)*\frac{N x_s}{\lambda f}\,\mathrm{sinc}\!\left(\frac{N x_s}{\lambda f}x_1\right)\frac{M y_s}{\lambda f}\,\mathrm{sinc}\!\left(\frac{M y_s}{\lambda f}y_1\right),\end{aligned}\tag{1}$$
in units of photoelectrons (pe). This result is remarkably physical, as the sampling theorem dictates that a sampled function becomes periodic upon finding its spectrum.42,43 Through 2-D convolution with the separable comb functions and the convolution-sifting property of the impulse function, the terms contained within square brackets in Eq. (1) are repeated at intervals of λf/xs and λf/ys along the x and y axes, respectively. Thus, the final 2-D convolution with the separable narrow sinc functions serves to smooth out these repeated terms, whereas the amplitude modulation with the separable broadened sinc functions serves to dampen out these repeated terms.

To help simplify the analysis to a case that we can easily simulate using N×N computational grids, let us assume that the FPA has adjacent square pixels, so that xs=ys=wx=wy=wp. In so doing, we can rewrite Eq. (1) in terms of the diffraction-limited sampling quotient QI, where Display Formula

$$Q_I=\frac{\lambda f}{D_1\,w_p}.\tag{2}$$
Physically, there are multiple ways to think about the relationship given in Eq. (2). One way is to say that the diffraction-limited sampling quotient QI is a measure of the number of FPA pixels across the diffraction-limited half width of the incoherent point-spread function (PSF). Remember that for linear shift-invariant imaging systems, the incoherent PSF is the irradiance associated with an imaged point source [i.e., the squared magnitude of Eq. (25) in 1].38 Another way to think about the diffraction-limited sampling quotient, QI, is to say that it is a measure of the number of diffraction angles, λ/D1, per pixel FOV, wp/f, assuming small angles. In turn, the relationship given in Eq. (2) allows us to vary the sampling with the FPA pixels.
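Both readings of Eq. (2) can be checked with a few lines of arithmetic. In the sketch below, λ and D1 follow the paper, while the focal length f and pixel pitch wp are assumed example values chosen to give QI = 1.

```python
# Diffraction-limited sampling quotient Q_I = lambda*f/(D1*w_p), Eq. (2).
# lambda and D1 follow the paper; f and w_p are assumed example values.
lam = 1e-6        # wavelength [m] (paper: 1 um)
D1 = 0.30         # exit-pupil diameter [m] (paper: 30 cm)
f = 1.5           # focal length [m] (assumed)
wp = 5e-6         # pixel pitch [m] (assumed)

QI = lam * f / (D1 * wp)       # pixels per diffraction-limited half width
pixel_fov = wp / f             # pixel field of view [rad]
diff_angle = lam / D1          # diffraction angle [rad]
print(QI, diff_angle / pixel_fov)   # the two ratios agree
```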

Using Eq. (2), we can rewrite Eq. (1) in terms of the diffraction-limited sampling quotient QI, such that Display Formula

$$\begin{aligned}\tilde{D}_H\!\left(\frac{x_1}{\lambda f},\frac{y_1}{\lambda f}\right)={}&\frac{\eta T}{h\nu}\,w_p\,\mathrm{sinc}\!\left(\frac{x_1}{Q_I D_1}\right)w_p\,\mathrm{sinc}\!\left(\frac{y_1}{Q_I D_1}\right)\left[\frac{1}{\lambda^2 f^2}\,U_S(x_1,y_1)\star U_S^*(x_1,y_1)+|A_R|^2\,\delta(x_1)\,\delta(y_1)\right.\\&\left.+\frac{A_R^*\,e^{jkf}}{j\lambda f}\,U_S(x_1-x_R,y_1-y_R)-\frac{A_R\,e^{-jkf}}{j\lambda f}\,U_S^*(x_1+x_R,y_1+y_R)\right]\\&*\,\mathrm{comb}\!\left(\frac{x_1}{Q_I D_1}\right)\mathrm{comb}\!\left(\frac{y_1}{Q_I D_1}\right)*\frac{N}{Q_I D_1}\,\mathrm{sinc}\!\left(\frac{N x_1}{Q_I D_1}\right)\frac{N}{Q_I D_1}\,\mathrm{sinc}\!\left(\frac{N y_1}{Q_I D_1}\right).\end{aligned}\tag{3}$$
Here, QID1=λf/wp is the side length of the N×N computational grid in the Fourier plane. Note that as N→∞ [cf. Eq. (37) in 1], we can make use of the convolution-sifting property of the impulse function and neglect the final 2-D convolution in Eq. (3). Accordingly, for large N the smoothing becomes minimized; however, for small N the smoothing becomes more pronounced. Let us assume that xR=yR=QID1/4, so that the last two terms within the square brackets in Eq. (3) shift diagonally. When QI≥4, the last two terms no longer overlap with the first two terms, which are centered on axis. Correspondingly, when 2≤QI<4, the last two terms are still resolvable within the side length of the N×N computational grid but overlap with the first term. Provided that N is constant, this latter case allows us to obtain more samples across the exit-pupil diameter D1, which in turn minimizes the smoothing caused by the final 2-D convolution in Eq. (3). If the amplitude of the reference is set to be well above the amplitude of the signal (i.e., |AR|≫|AS|), then this functional overlap becomes negligible, a fundamental result obtained in Ref. 37.

Provided Eq. (3), we must use a window function w(x1,y1) to obtain an estimate U^S(x1,y1) of the desired signal complex field US(x1,y1) [cf. Fig. 2 and Eq. (26) in 1]. Specifically,

$$\hat{U}_S(x_1,y_1)=w(x_1,y_1)\,\tilde{D}_H\!\left(\frac{x_1}{\lambda f},\frac{y_1}{\lambda f}\right).\tag{4}$$
In using Eq. (4), we must satisfy Nyquist sampling with the FPA pixels,42 so that the repeated terms within Eq. (3) do not overlap and cause significant aliasing. As such, the Nyquist rate is QID1=λf/wp and the Nyquist interval is 1/(QID1)=wp/(λf) when xR=yR=QID1/4. Assuming that N→∞, QI≥2, |AR|≫|AS|, and
$$w(x_1,y_1)=\mathrm{cyl}\!\left[\frac{\sqrt{(x_1-x_R)^2+(y_1-y_R)^2}}{D_1}\right]\Bigg/\left[\frac{\eta T}{h\nu}\,\frac{A_R^*\,e^{jkf}\,w_p^2}{j\lambda f}\,\mathrm{sinc}\!\left(\frac{x_1}{Q_I D_1}\right)\mathrm{sinc}\!\left(\frac{y_1}{Q_I D_1}\right)\right],\tag{5}$$
Eq. (4) simplifies, such that
$$\hat{U}_S(x_1,y_1)\approx U_S(x_1,y_1).\tag{6}$$
In turn, there is a distinct trade space found in using Eq. (3). We will explore this trade space in the presence of deep turbulence and detection noise in the analysis to come.

Before moving on to the simulation setup and exploration, it is informative to develop a closed-form expression for the analytical SNR. For this purpose, we can approximate the estimated signal power P^S as

$$\hat{P}_S\approx\bar{m}_R\,\bar{m}_S,\tag{7}$$
where
$$\bar{m}_R=\frac{\eta T}{h\nu}\,|A_R|^2\,w_p^2\tag{8}$$
is the mean number of reference photoelectrons detected per pixel and
$$\bar{m}_S=\frac{\eta T}{h\nu}\,|A_S|^2\,w_p^2\tag{9}$$
is the mean number of signal photoelectrons detected per pixel. Now we need to account for the estimated noise power P^N.

Pixel to pixel, the FPA creates photoelectrons via statistically independent (i.e., delta-correlated) and zero-mean random processes, so that the variance σ2 is equivalent to the noise power. Here,

$$\sigma^2=\bar{m}_S+\bar{m}_R+\bar{m}_B+\sigma_C^2,\tag{10}$$
where m¯B is the mean number of photoelectrons associated with the background illumination (e.g., from passive illumination from the sun) and σC2 is the variance associated with pixel read noise (i.e., the FPA circuitry). In writing Eq. (10), note that we assume the use of a Poisson-distributed random process for the various sources of illumination that are incident on the FPA. In so doing, the mean number of photoelectrons is equal to the variance of the photoelectrons.44,45 Also note that we assume the use of a Gaussian-distributed random process for the various sources of pixel read noise in the FPA.
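The per-pixel noise model of Eq. (10) can be sketched directly. The mean levels below mirror the paper's FPA settings (25 × 10³ reference pe per pixel, 100-pe read noise); the signal level and grid size are illustrative assumptions.

```python
import numpy as np

# Per-pixel detection-noise model from Eq. (10): Poisson-distributed
# photoelectrons for the illumination plus zero-mean Gaussian read noise.
# m_R and sigma_C follow the paper's FPA settings; m_S is illustrative.
rng = np.random.default_rng(0)

m_S, m_R, m_B = 50.0, 25e3, 0.0    # mean signal/reference/background pe per pixel
sigma_C = 100.0                     # read-noise standard deviation [pe]
shape = (256, 256)

pe = (rng.poisson(m_S + m_R + m_B, shape).astype(float)
      + rng.normal(0.0, sigma_C, shape))

# For Poisson light the variance equals the mean, so the total per-pixel
# variance is m_S + m_R + m_B + sigma_C**2, exactly as in Eq. (10).
var_expected = m_S + m_R + m_B + sigma_C**2
```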

Provided Eq. (10), the estimated noise power P^N follows from the noise variance σ2 as

$$\hat{P}_N=R\,\sigma^2,\tag{11}$$
where
$$R=\frac{\pi}{4\,Q_I^2}\tag{12}$$
is the ratio of the area associated with the window function w(x1,y1) to the area associated with the side length QID1=λf/wp of the N×N computational grid in the Fourier plane. The analytical SNR then follows from Eqs. (7)–(12) as
$$\mathrm{SNR}=\frac{\hat{P}_S}{\hat{P}_N}=\frac{4\,Q_I^2}{\pi}\,\frac{\bar{m}_S\,\bar{m}_R}{\bar{m}_S+\bar{m}_R+\bar{m}_B+\sigma_C^2}.\tag{13}$$
We will validate the use of this closed-form expression in the simulation setup and exploration to follow.
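Eq. (13) is simple enough to wrap in a small helper. The example below uses the paper's FPA settings (m¯R = 25 × 10³ pe, σC = 100 pe, m¯B = 0) and sweeps an assumed range of signal levels to show how weak the signal can be while still achieving a useful SNR at QI = 4.

```python
import numpy as np

def analytical_snr(QI, m_S, m_R, m_B=0.0, sigma_C=100.0):
    """Closed-form analytical SNR of Eq. (13) for the off-axis IPRG."""
    return (4.0 * QI**2 / np.pi) * m_S * m_R / (m_S + m_R + m_B + sigma_C**2)

# Paper's FPA settings; the m_S sweep is an illustrative assumption.
for m_S in (0.01, 0.1, 1.0):
    print(m_S, analytical_snr(4.0, m_S, 25e3))
```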

Simulation Setup and Exploration

For all of the computational wave-optics experiments presented throughout this paper, we used N×N computational grids. For example, to simulate the propagation of an ideal point-source beacon through deep-turbulence conditions, we used 4096×4096 grid points and the split-step beam propagation method (BPM).38–41 WaveProp and AOTools made use of a very narrow sinc function with a raised-cosine envelope to simulate an ideal point-source beacon. The sampling of this function and the object-plane side length were automatically set, so that after propagation from the object plane to the entrance-pupil plane, the illuminated region of interest was half the user-defined entrance-pupil-plane side length (cf. Fig. 2). Put another way, the simulations satisfied Fresnel scaling [i.e., N=S1S2/(λZ), where S1=16D1 and S2 are the object- and entrance-pupil-plane side lengths, respectively]. Altogether, this provided an entrance-pupil plane side length of D1 after cropping out the center 256×256 grid points. As mentioned previously, using ideal thin lenses, the entrance-pupil plane effectively collimated the propagated light from the object plane, whereas the exit-pupil plane effectively focused the propagated light to form the image plane at focus (cf. Fig. 2).
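The Fresnel-scaling bookkeeping above pins down the grid side lengths. A short check, using the paper's stated values for λ, Z, D1, and the 4096-point grid, recovers the entrance-pupil-plane side length S2 implied by N = S1S2/(λZ):

```python
# Fresnel-scaling bookkeeping for the point-source propagation,
# N = S1*S2/(lambda*Z). All values follow the paper's setup.
lam, Z = 1e-6, 7.5e3           # wavelength [m], path length [m]
D1 = 0.30                       # exit-pupil diameter [m]
N_grid = 4096                   # BPM grid points per side
S1 = 16 * D1                    # object-plane side length [m]

S2 = N_grid * lam * Z / S1      # entrance-pupil-plane side length [m]
delta2 = S2 / N_grid            # entrance-pupil-plane sample spacing [m]
print(S2, delta2)               # 6.4 m side length
```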

As listed in Table 1, we used five different horizontal-path scenarios to create the deep-turbulence trade space of interest in this paper. Provided the index of refraction structure parameter Cn2, we determined the log-amplitude variances for a plane wave, σχpw2, and a spherical wave, σχsw2, using the following equations:34

$$\sigma_{\chi_{\mathrm{pw}}}^2=0.307\,k^{7/6}\,Z^{11/6}\,C_n^2\tag{14}$$
and
$$\sigma_{\chi_{\mathrm{sw}}}^2=0.124\,k^{7/6}\,Z^{11/6}\,C_n^2,\tag{15}$$
where k=2π/λ is again the angular wavenumber, λ=1 μm is the wavelength, and Z=7.5 km is the propagation distance (cf. Fig. 2). In addition, we determined the coherence diameters for a plane wave, r0pw, and a spherical wave, r0sw, using the following equations:34
$$r_{0_{\mathrm{pw}}}=0.185\left(\frac{\lambda^2}{Z\,C_n^2}\right)^{3/5}\tag{16}$$
and
$$r_{0_{\mathrm{sw}}}=0.33\left(\frac{\lambda^2}{Z\,C_n^2}\right)^{3/5}.\tag{17}$$
Based on Eqs. (14)–(17), the computational wave-optics experiments used 10 phase screens with equal spacing to simulate the propagation of an ideal point-source beacon through deep-turbulence conditions using the BPM. This choice provided low percentage errors (less than 0.5%) between the continuous and discrete calculations using Eqs. (14)–(17).38
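Eqs. (14)–(17) translate directly into helper functions. The Cn2 value below is an assumed example for a constant-Cn2 path, not a row from Table 1.

```python
import numpy as np

# Path-integrated turbulence parameters for a constant Cn^2 path,
# Eqs. (14)-(17). The Cn2 example value is assumed, not from Table 1.
def rytov_pw(Cn2, lam, Z):
    k = 2 * np.pi / lam                       # angular wavenumber
    return 0.307 * k**(7 / 6) * Z**(11 / 6) * Cn2

def rytov_sw(Cn2, lam, Z):
    k = 2 * np.pi / lam
    return 0.124 * k**(7 / 6) * Z**(11 / 6) * Cn2

def r0_pw(Cn2, lam, Z):
    return 0.185 * (lam**2 / (Z * Cn2))**(3 / 5)

def r0_sw(Cn2, lam, Z):
    return 0.33 * (lam**2 / (Z * Cn2))**(3 / 5)

lam, Z = 1e-6, 7.5e3            # paper's wavelength and path length
Cn2 = 1e-15                     # assumed structure parameter [m^(-2/3)]
print(rytov_sw(Cn2, lam, Z), r0_sw(Cn2, lam, Z))
```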

Table 1. The deep-turbulence trade space of interest in this paper. Remember that the log-amplitude variance σχ2, which is also referred to as the Rytov number, gives a measure for the strength of the scintillation. As the σχ2 grows above 0.25 (for a spherical wave), scintillation gives rise to branch points in the phase function. Also remember that the coherence diameter r0, which is also referred to as the Fried parameter, gives a measure for the achievable imaging resolution. As the ratio of exit-pupil diameter D1 to r0 grows above 4 (for a spherical wave), higher-order aberrations beyond tilt start to limit the achievable imaging resolution. Here, D1 = 30 cm.

Propagation to the image plane from the exit-pupil plane occurred via a three-step process using WaveProp and AOTools: (1) doubling the number of grid points in the exit-pupil plane with a side length of D1 from 256×256 grid points to 512×512 grid points via zero padding; (2) numerically solving the convolution form of the Fresnel diffraction integral via 2-D FFTs; and (3) cropping out the center 256×256 grid points, so that f=QID1²/(256λ) (i.e., the image-plane side length was equal to the exit-pupil-plane side length). As shown in Fig. 3, by varying the diffraction-limited sampling quotient, QI, the number of FPA pixels across the diffraction-limited imaging bucket, D2, also varied. Here, D2=2.44λf/D1 with D1=30 cm.
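The three-step process can be sketched with a simplified transfer-function Fresnel propagator (constant phase factors dropped). The propagation distance and grid scales here are illustrative, not a reproduction of the paper's pupil-to-focus geometry.

```python
import numpy as np

# Sketch of the three-step propagation: (1) zero pad 256 -> 512, (2) apply
# the convolution (transfer-function) form of Fresnel diffraction via 2-D
# FFTs, (3) crop back to 256x256. Distances and scales are illustrative.
def fresnel_prop(field, lam, delta, Z):
    """Fresnel propagation, convolution form; constant phases dropped."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=delta)          # spatial frequencies [1/m]
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * lam * Z * (FX**2 + FY**2))   # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

n = 256
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x)
aperture = (np.hypot(X, Y) <= n // 2).astype(complex)     # circular pupil

pad = np.zeros((2 * n, 2 * n), dtype=complex)
pad[n // 2:3 * n // 2, n // 2:3 * n // 2] = aperture      # step 1: zero pad
out = fresnel_prop(pad, lam=1e-6, delta=0.30 / n, Z=50.0) # step 2: propagate
image = out[n // 2:3 * n // 2, n // 2:3 * n // 2]         # step 3: crop
```

Because the transfer function has unit magnitude, the FFT-based propagation conserves energy on the padded grid, which makes for a convenient sanity check.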

Fig. 3

(a, b) The normalized signal and (c, d) the normalized digital hologram in the image plane for a constant SNR, where the analytical SNR is 20. As the diffraction-limited sampling quotient, QI, increases, the number of FPA pixels contained within the diffraction-limited imaging diameter, D2 (white circles), increases proportionally. Note that the results presented here contain no aberrations.

For all of the computational wave-optics experiments presented in this paper (including those contained in Fig. 3), we set the pixel read-noise standard deviation to 100 pe and the pixel well depth to 100 × 10³ pe. To simulate different SNRs [cf. Eq. (13)], we neglected to include background-illumination effects, and we set the amplitude of the reference |AR| to produce a mean number of reference photoelectrons detected per pixel equal to 25% of the pixel well depth (i.e., m¯B=0 and m¯R=25 × 10³ pe) [cf. Eq. (8)]. We then scaled the amplitude of the signal |AS| to have the appropriate mean number of signal photoelectrons m¯S detected per pixel [cf. Eq. (9)]. As such, the standard deviation of the shot noise varied within the simulations and was the dominant source of detection noise.

Remember that in the IPRG (cf. Figs. 1 and 2), digital-holographic detection provides access to an estimate of the amplitude and wrapped phase (i.e., the complex field) that exist in the exit-pupil plane of the imaging system. We obtained access to this complex-field estimate using the following steps: (1) within the image plane, interfering the signal with the reference [cf. Eq. (29) in 1]; (2) recording the hologram irradiance on the FPA to create a digital hologram with Poisson-distributed shot noise and Gaussian-distributed pixel read noise; (3) taking the 2-D IFFT of the digital hologram to go to the Fourier plane; and (4) within the Fourier plane, windowing the off-axis complex-field estimate. To perform an apples-to-apples comparison, we kept the total FOV constant and varied the number of pixels N across the FPA, such that

$$N=\mathrm{FOV}\,\frac{f}{w_p}=\mathrm{FOV}\,\frac{Q_I D_1}{\lambda},\tag{18}$$
where FOV=64λ/D1. This choice ensured that we had the same number of pixels and effective detection noise across our complex-field estimates despite the fact that we varied the diffraction-limited sampling quotient QI within the computational wave-optics experiments. Here again, f=QID1²/(256λ) was the focal length and QID1=λf/wp was the side length.
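With FOV = 64λ/D1, Eq. (18) reduces to N = 64 QI, so the pixel count simply scales with the sampling quotient. A short check using the paper's λ and D1:

```python
# FPA pixel count needed to hold the fixed total FOV at each Q_I, Eq. (18):
# N = FOV*f/w_p = FOV*Q_I*D1/lambda, with FOV = 64*lambda/D1 as in the paper.
lam, D1 = 1e-6, 0.30
FOV = 64 * lam / D1                        # total field of view [rad]

for QI in (2, 3, 4):
    N_pix = FOV * QI * D1 / lam            # pixels across the FPA (= 64*Q_I)
    print(QI, int(round(N_pix)))           # 128, 192, 256
```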

To generate results for the entire deep-turbulence trade space (cf. Table 1), we used the field-estimated Strehl ratio SF, such that

$$S_F=\frac{\bigl|\bigl\langle U_S(x_1,y_1)\,\hat{U}_S^*(x_1,y_1)\bigr\rangle\bigr|^2}{\bigl\langle|U_S(x_1,y_1)|^2\bigr\rangle\,\bigl\langle|\hat{U}_S(x_1,y_1)|^2\bigr\rangle},\tag{19}$$
where US(x1,y1) and U^S(x1,y1) are the “truth” and “estimated” signal complex fields, respectively, and ⟨·⟩ denotes the mean. This performance metric bears some resemblance to a Strehl ratio, which in practice provides a normalized measure for performance. In Eq. (19), if US(x1,y1)=U^S(x1,y1), then SF=1. Else if US(x1,y1)≠U^S(x1,y1), then SF<1. Thus, Eq. (19) is consistent with the general understanding of a Strehl ratio and provides a normalized measure for field-estimated performance. Note that Eq. (19) ultimately stems from the following definition of the on-axis Strehl ratio:40,41
$$S=\frac{\bigl|\bigl\langle U_S(x_1,y_1)\bigr\rangle\bigr|^2}{\bigl\langle|U_S(x_1,y_1)|^2\bigr\rangle}.\tag{20}$$
Here, we have made use of the fact that the mean of a pupil-plane quantity is equivalent to the on-axis DC term of the 2-D Fourier transformation of that pupil-plane quantity.
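Eq. (19) is straightforward to implement as a metric function. The sanity checks below use a random complex field as an illustrative stand-in for the simulated pupil-plane data; by the Cauchy–Schwarz inequality the metric is bounded above by 1, with equality only when the estimate is proportional to the truth.

```python
import numpy as np

def field_estimated_strehl(U, U_hat):
    """Field-estimated Strehl ratio of Eq. (19); equals 1 for a perfect estimate."""
    num = np.abs(np.mean(U * np.conj(U_hat)))**2
    den = np.mean(np.abs(U)**2) * np.mean(np.abs(U_hat)**2)
    return num / den

# Illustrative random field, not the paper's simulated data.
rng = np.random.default_rng(1)
U = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
print(field_estimated_strehl(U, U))        # 1 up to floating point
```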

Shown in Figs. 4(a) and 4(b) is the wrapped phase, and shown in Figs. 4(c) and 4(d) is the normalized amplitude, in the Fourier plane for one independent realization of scenario 5 in Table 1 and detection noise. In Fig. 4, one can identify the complex-field estimate within the white circles of diameter D1. Specifically, as the diffraction-limited sampling quotient QI increases, so does the side length of the Fourier plane; however, the exit-pupil diameter D1 remains constant. By windowing the data found within the white circles in Fig. 4, we obtained the results shown in Fig. 5. Here, we see that as the diffraction-limited sampling quotient, QI, increases, the field-estimated Strehl ratio, SF, decreases.

Fig. 4

(a, b) The wrapped phase and (c, d) normalized amplitudes associated with the Fourier plane for a constant SNR, where the analytical SNR is 20 and the numerical SNR is 21.5. In general, the Fourier plane contains the complex-field estimate (i.e., an estimate of the amplitude and wrapped phase that exists in the exit-pupil plane of the imaging system). The results show that as the diffraction-limited sampling quotient, QI, increases, the complex-field estimates contained within an exit-pupil diameter, D1 (white circles), take up less and less space within the Fourier plane because the side length of the Fourier plane, QID1, increases proportionally.

Fig. 5

(a) The wrapped-phase truth and (b–d) wrapped-phase estimates for a constant SNR, where the analytical SNR is 20 and the numerical SNR is 21.5. In general, by windowing out the appropriate data in the Fourier plane (white circles in Fig. 4), we obtain the complex-field estimate (i.e., an estimate of the amplitude and wrapped phase that exists in the exit-pupil plane of the imaging system). The results contained in (a–d) show that as the diffraction-limited sampling quotient, QI, increases, the field-estimated Strehl ratio, SF, decreases ever so slightly.

To determine the numerical SNR presented in Figs. 4 and 5, we performed the following steps using the numerical data found in Figs. 4(b) and 4(d), corresponding to a diffraction-limited sampling quotient of QI=4.

  • Using the numerical data contained in the bottom-right circle, we computed the mean of the squared magnitude of the complex-field estimate to numerically determine the estimated signal power plus the noise power P^S+N [cf. Eqs. (7) and (11)].
  • Next, using the numerical data contained in the bottom-left circle, we computed the mean of the squared magnitude of the detection noise to numerically determine the estimated noise power P^N.
  • Subtracting the second calculation from the first, we numerically determined the estimated signal power, so that P^S=P^S+N−P^N.
  • The numerically determined SNR then followed as SNR=P^S/P^N.
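The four steps above can be sketched with synthetic data standing in for the two Fourier-plane windows; the signal and noise levels below are illustrative assumptions chosen so the expected SNR is about 4.5.

```python
import numpy as np

# Sketch of the four-step numerical-SNR estimate: mean power in a
# signal-plus-noise window minus mean power in a noise-only window,
# divided by the noise power. Synthetic data stand in for the real
# Fourier-plane windows; levels are illustrative.
rng = np.random.default_rng(2)
n = 128

noise_sn = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))   # var = 2
noise_only = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
signal = 3.0 * np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))       # |.|^2 = 9

P_SN = np.mean(np.abs(signal + noise_sn)**2)   # step 1: signal + noise window
P_N = np.mean(np.abs(noise_only)**2)           # step 2: noise-only window
P_S = P_SN - P_N                               # step 3: subtract
snr = P_S / P_N                                # step 4: ratio, about 9/2 = 4.5
```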

We also used these steps to validate the use of the closed-form expression contained in Eq. (13). For this purpose, Fig. 6 presents percentage error results as a function of the analytical SNR. In Fig. 6, we averaged the results obtained from 20 independent realizations of scenarios 1 and 5 in Table 1 and 20 independent realizations of detection noise. Note that the error bars depict the width of the standard deviation. Also note that we only used numerical data corresponding to a diffraction-limited sampling quotient of QI=4, so that there was no functional overlap contained within the results [cf. Eq. (3)].

Fig. 6

The average percentage error as a function of the analytical SNR for the deep-turbulence trade space presented in Table 1. Here, the results show that as the analytical SNR increases, the average percentage error decreases between the numerical and analytical SNRs. Note that the error bars depict the width of the standard deviation for 400 realizations.

The analysis used multiple image-processing tricks to obtain the results presented in Figs. 3–6. With that said, the first image-processing trick was to subtract the mean from the recorded digital hologram. This removed the on-axis DC term from the numerical data contained in the Fourier plane. Next, the analysis applied a raised cosine window to the zero-mean digital hologram with eight-pixel-wide tapers at the edges of the FPA. This combined with zero padding helped to mitigate the effects of aliasing from using N×N computational grids and 2-D IFFTs.38–41 In practice, the analysis zero-padded the windowed zero-mean digital hologram to ensure that the complex-field estimate in the Fourier plane contained 256×256 grid points within the exit-pupil diameter D1. This outcome provided the same number of grid points as the exit-pupil plane for the sake of computing the field-estimated Strehl ratio SF with the “truth” complex field [cf. Eq. (19)]. Note that these image-processing tricks also apply to the results presented in the next section.
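These preprocessing steps can be sketched as follows; the hologram here is random stand-in data, and the 256-pixel FPA with an eight-pixel taper follows the description above, while the 2× zero padding is an illustrative choice.

```python
import numpy as np

def raised_cosine_taper(n, taper=8):
    """1-D edge window: raised-cosine ramps `taper` pixels wide, flat middle."""
    w = np.ones(n)
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(taper) / taper))
    w[:taper] = ramp
    w[-taper:] = ramp[::-1]
    return w

# (1) subtract the mean (removes the on-axis DC term), (2) taper the FPA
# edges with an eight-pixel raised cosine, (3) zero pad before the 2-D IFFT.
# The "hologram" is random stand-in data; sizes are illustrative.
rng = np.random.default_rng(3)
holo = rng.poisson(25e3, (256, 256)).astype(float)

holo = holo - holo.mean()                  # (1) zero mean
w = raised_cosine_taper(256)
holo = holo * np.outer(w, w)               # (2) edge taper
padded = np.zeros((512, 512))
padded[128:384, 128:384] = holo            # (3) zero pad
fourier = np.fft.fftshift(np.fft.ifft2(padded))
```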

Figure 7 shows field-estimated Strehl ratio SF results as a function of the diffraction-limited sampling quotient QI. Here, we averaged the results obtained from 20 independent realizations of scenarios 1 to 5 in Table 1 and 20 independent realizations of detection noise. In Fig. 7, the error bars depict the width of the standard deviation. With this in mind, the analytical SNR increases from 1 in Fig. 7(a) to 10, 20, and 100 in Fig. 7(b), 7(c), and 7(d), respectively [cf. Eq. (13)]. Note that as the analytical SNR increases, the performance trends flip-flop. This outcome is due to functional overlap introducing additional shot noise into the complex-field estimate when 2≤QI<4. As QI increases, this functional overlap decreases and the additional shot noise plays less of a role depending on the amount of smoothing [cf. Eq. (3)].

Fig. 7

The average field-estimated Strehl ratio, SF, as a function of the diffraction-limited sampling quotient, QI, for the deep-turbulence trade space presented in Table 1. Here, the analytical SNR increases from 1 in (a) to 10, 20, and 100 in (b–d), respectively. The results contained in (a) and (b) show that as the diffraction-limited sampling quotient, QI, increases, the average field-estimated Strehl ratio, SF, decreases (i.e., for low SNRs, lower QI’s perform better). In contrast, the results contained in (c) and (d) show that as the diffraction-limited sampling quotient, QI, increases, the average field-estimated Strehl ratio, SF, increases (i.e., for high SNRs, higher QI’s perform better). Note that the error bars depict the width of the standard deviation for 400 realizations.

The results shown in Fig. 7 do not agree with the results presented in Ref. 37. This is said because the performance trends are the opposite of those found in Ref. 37, particularly for high SNRs. Regardless of the strength of the aberrations, Ref. 37 showed that for a constant number of pixels N across the FPA, the average SF values are always greatest given QI=2. In general, lower QI’s provide more samples across the complex-field estimate, which in turn minimizes the smoothing caused by the final 2-D convolution in Eq. (3). The results presented in Ref. 37, however, did not include the deleterious effects of detection noise.

In the presence of detection noise, lower QI’s also increase the detection-noise sampling, which in turn degrades the complex-field estimate. To combat this effect, we chose to vary the number of pixels N across the FPA to keep the total FOV constant [cf. Eq. (18)]. With respect to Fig. 7, this choice decreases the amount of detection-noise sampling for lower QI’s but increases the amount of smoothing caused by the final 2-D convolution in Eq. (3).

Remember that if the amplitude of the reference is set to be well above the amplitude of the signal (i.e., |AR||AS|), then the functional overlap in Eq. (3) becomes negligible when 2QI<4. With that said, Ref. 37 set the amplitude of the reference to be 10 times that of the signal (i.e., |AR|2=100  W/m2 and |AS|2=1  W/m2) [cf. Eqs. (8) and (9)]. Radiometrically speaking, both of these values are impractical given the capabilities of modern-day, high-framerate, and short-wave-infrared (SWIR) FPAs. As such, the results presented in Fig. 7 tell the true story and the results presented in Ref. 37 tell the story given infinite SNR. Note that we would extend our results out to those obtained in Ref. 37; however, given the parameters of our FPA, we empirically determined that pixel saturation nominally occurs for analytical SNRs greater than 250 [cf. Eq. (13)]. This outcome occurs because of deep-turbulence scintillation (i.e., hotspots due to constructive interference).

The results presented in Fig. 7 ultimately show less than 5% variation in the SF values for the different QI values within each plot. In terms of efficiently using the FPA pixels, the reader might conclude that there are distinct benefits to operating at lower QI's despite the minor (5%) performance penalty at high SNRs. Before moving on to the next section, it is important to note that, given different FPA parameters, such as a larger pixel-well depth, the results presented in Fig. 7 might change; however, the parameters chosen for our FPA are indicative of modern-day, high-frame-rate, SWIR FPAs.

The results presented in this paper serve two purposes. The first purpose is to validate the setup and exploration presented in Sec. 2. In turn, the second purpose is to allow the reader to assess the number of pixels, pixel FOV, pixel-well depth, and read-noise standard deviation needed from an FPA when using digital-holographic detection in the off-axis IPRG for deep-turbulence wavefront sensing.

Digital-holographic detection, in general, offers a distinct way forward in combating the low SNRs caused by scintillation and extinction, and we believe the analysis presented throughout this paper supports this claim. In using digital-holographic detection, we can set the strength of the reference so that it boosts the signal above the read-noise floor of the FPA; as such, we can approach a shot-noise-limited detection regime. This last statement is, of course, dependent on the parameters of the FPA, such as the pixel-well depth. Nevertheless, given that scintillation and extinction lead to low SNRs, reaching the shot-noise limit is important for performing deep-turbulence wavefront sensing. This outcome will allow future research efforts to better explore the associated branch-point problem.
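To illustrate this point numerically, consider the following back-of-the-envelope sketch (all values are illustrative assumptions, not the FPA parameters used in this paper). The fringe (cross) term recorded by the FPA scales as the square root of the product of the signal and reference photoelectron counts, so a strong reference lifts a weak signal above the read-noise floor while making reference shot noise the dominant noise source.

```python
import math

# Illustrative, assumed values -- not the FPA parameters used in the paper.
m_S = 1.0        # mean signal photoelectrons per pixel (weak, scintillated return)
m_R = 4.0e4      # mean reference photoelectrons per pixel (within a typical well depth)
sigma_r = 100.0  # read-noise standard deviation [pe]

# Direct detection: the weak signal sits far below the read-noise floor.
buried = m_S < sigma_r                      # True -- signal is unusable on its own

# Digital-holographic detection: the fringe (cross) term scales as
# sqrt(m_S * m_R), so the reference acts as a heterodyne gain.
fringe = math.sqrt(m_S * m_R)               # 200 pe of fringe amplitude

# The shot noise is dominated by the reference, sqrt(m_S + m_R) ~ 200 pe,
# which now exceeds the read noise -- a shot-noise-limited regime.
shot_noise = math.sqrt(m_S + m_R)
```

With these assumed numbers, both the fringe amplitude and the reference shot noise exceed the 100 pe read noise, so the read noise no longer limits the measurement; the pixel-well depth caps how large m_R can be made in practice.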

Using the convolution form of the Fresnel diffraction integral (cf. Fig. 2), we can represent the signal complex field $U_S(x_2,y_2)$ incident on the FPA as

$$U_S(x_2,y_2)=\frac{e^{jkf}}{j\lambda f}\iint U_S^{+}(x_1,y_1)\exp\left\{\frac{jk}{2f}\left[(x_2-x_1)^2+(y_2-y_1)^2\right]\right\}dx_1\,dy_1,\tag{21}$$

where $U_S^{+}(x_1,y_1)$ is the signal complex field leaving the exit-pupil plane. Specifically,

$$U_S^{+}(x_1,y_1)=U_S^{-}(x_1,y_1)\,T_P(x_1,y_1),\tag{22}$$

where $U_S^{-}(x_1,y_1)$ is the signal complex field incident on the exit-pupil plane, and

$$T_P(x_1,y_1)=\mathrm{cyl}\!\left(\frac{\sqrt{x_1^2+y_1^2}}{D_1}\right)\exp\!\left[-\frac{jk}{2f}\left(x_1^2+y_1^2\right)\right]\tag{23}$$

is the complex transmittance function of the exit-pupil plane (i.e., a circular aperture placed against a thin lens). In Eq. (23),

$$\mathrm{cyl}(\rho_1)=\begin{cases}1, & \rho_1<0.5\\ 0.5, & \rho_1=0.5\\ 0, & \rho_1>0.5\end{cases}\tag{24}$$

is a cylinder function, where $\rho_1=\sqrt{x_1^2+y_1^2}$, $D_1$ is the exit-pupil diameter, $k=2\pi/\lambda$ is the angular wavenumber, $\lambda$ is the wavelength, and $f$ is the focal length. Substituting Eq. (22) into Eq. (21), we arrive at the following result:

$$U_S(x_2,y_2)=\frac{e^{jkf}}{j\lambda f}\exp\!\left[\frac{jk}{2f}\left(x_2^2+y_2^2\right)\right]\mathcal{F}\left\{U_S(x_1,y_1)\right\}_{\nu_x=x_2/\lambda f,\;\nu_y=y_2/\lambda f},\tag{25}$$

where

$$U_S(x_1,y_1)=U_S^{-}(x_1,y_1)\,\mathrm{cyl}\!\left(\frac{\sqrt{x_1^2+y_1^2}}{D_1}\right)\tag{26}$$

is the signal complex field that exists in the exit-pupil plane of the imaging system (cf. Fig. 2), and $\mathcal{F}\{\cdot\}_{\nu_x,\nu_y}$ denotes a 2-D Fourier transformation, such that

$$\tilde V(\nu_x,\nu_y)=\mathcal{F}\{V(x,y)\}_{\nu_x,\nu_y}=\iint V(x,y)\,e^{-j2\pi(x\nu_x+y\nu_y)}\,dx\,dy.\tag{27}$$

A 2-D inverse Fourier transformation then follows as

$$V(x,y)=\mathcal{F}^{-1}\{\tilde V(\nu_x,\nu_y)\}_{x,y}=\iint \tilde V(\nu_x,\nu_y)\,e^{j2\pi(x\nu_x+y\nu_y)}\,d\nu_x\,d\nu_y.\tag{28}$$

With Fig. 2 in mind, we can also represent the reference complex field $U_R(x_2,y_2)$ incident on the FPA as resulting from the Fresnel approximation to a tilted spherical wave. Here,

$$U_R(x_2,y_2)=A_R\exp\!\left[\frac{jk}{2f}\left(x_2^2+y_2^2\right)\right]\exp\!\left(j2\pi\frac{x_Rx_2}{\lambda f}\right)\exp\!\left(j2\pi\frac{y_Ry_2}{\lambda f}\right),\tag{29}$$

where $A_R$ is a complex constant and $(x_R,y_R)$ are the coordinates of the off-axis local oscillator, which is located in the exit-pupil plane.
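The propagation geometry of Eqs. (21)–(29) can be sketched numerically. The following Python snippet (all parameters are illustrative assumptions) evaluates Eq. (25) by computing the focal-plane signal field as a scaled FFT of the pupil field and then forms the tilted-spherical-wave reference of Eq. (29):

```python
import numpy as np

# Grid in the exit-pupil plane (illustrative, assumed parameters)
N = 256                 # grid points per side
D1 = 0.30               # exit-pupil diameter [m]
L1 = 2.0 * D1           # side length of the pupil-plane grid [m]
wvl = 1.064e-6          # wavelength [m]
f = 2.0                 # focal length [m]
k = 2 * np.pi / wvl

dx1 = L1 / N
x1 = (np.arange(N) - N // 2) * dx1
X1, Y1 = np.meshgrid(x1, x1)

# Signal field in the exit pupil: cyl aperture times an arbitrary field
# (an unaberrated plane wave here, purely for illustration)
cyl = (np.hypot(X1, Y1) <= D1 / 2).astype(float)
U_pupil = cyl * np.exp(1j * 0.0)

# Eq. (25): the focal-plane field is a scaled Fourier transform of the
# pupil field, with nu_x = x2 / (lambda * f)
dnu = 1.0 / L1                                  # FFT frequency-sample spacing
x2 = wvl * f * dnu * (np.arange(N) - N // 2)    # focal-plane coordinates [m]
X2, Y2 = np.meshgrid(x2, x2)
quad = np.exp(1j * k / (2 * f) * (X2**2 + Y2**2))
U_S = (np.exp(1j * k * f) / (1j * wvl * f)) * quad * dx1**2 * \
      np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(U_pupil)))

# Eq. (29): tilted-spherical-wave reference from an off-axis point (xR, yR)
xR = yR = D1            # off-axis LO offset in the pupil plane (assumption)
A_R = 10.0              # reference amplitude (complex constant)
U_R = A_R * quad * np.exp(1j * 2 * np.pi * (xR * X2 + yR * Y2) / (wvl * f))
```

The fftshift/ifftshift pair keeps the grid origin at the array center, so the discrete FFT approximates the continuous Fourier transform of Eq. (27).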

Provided Eqs. (25)–(29), we can determine the hologram irradiance $I_H(x_2,y_2)$ incident on the FPA as

$$I_H(x_2,y_2)=\left|U_S(x_2,y_2)+U_R(x_2,y_2)\right|^2\tag{30}$$

in units of watts per square meter (W/m²). For all intents and purposes, the FPA converts the hologram irradiance $I_H(x_2,y_2)$, which is in an analog form, into a form that is suitable for digital image processing. Following the approach taken by Gaskill,42 let us assume that "digitization" takes place at sampling intervals of $x_s$ and $y_s$, which are the x- and y-axis pixel pitches of the FPA (cf. Fig. 2). At any particular pixel, we can then estimate the hologram irradiance $I_H(x_2,y_2)$ by computing its average value over the active area of a pixel, which is centered at $x_2=nx_s$ and $y_2=my_s$, where $n=1$ to $N$ and $m=1$ to $M$. Specifically,

$$\hat I_H(nx_s,my_s)=\iint I_H(x_2,y_2)\,\frac{1}{w_x}\mathrm{rect}\!\left(\frac{x_2-nx_s}{w_x}\right)\frac{1}{w_y}\mathrm{rect}\!\left(\frac{y_2-my_s}{w_y}\right)dx_2\,dy_2,\tag{31}$$

where $w_x$ and $w_y$ are, respectively, the x- and y-axis pixel widths of the FPA, and

$$\mathrm{rect}(x)=\begin{cases}0, & |x|>0.5\\ 0.5, & |x|=0.5\\ 1, & |x|<0.5\end{cases}\tag{32}$$

is a rectangle function.
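A discrete sketch of Eqs. (31) and (33) follows (the grid sizes, fill factor, and radiometric values are illustrative assumptions). With a 100% fill factor, the pixel average of Eq. (31) reduces to a block mean over each pixel's footprint, and Eq. (33) then converts the averaged irradiance into a mean photoelectron count:

```python
import numpy as np

# Illustrative assumptions: a fine "analog" grid with g samples per pixel pitch
g = 4                       # analog samples per pixel
Npix = 64                   # pixels per side on the FPA
rng = np.random.default_rng(0)
I_H = rng.random((Npix * g, Npix * g))   # stand-in hologram irradiance [W/m^2]

# Eq. (31): estimate the irradiance at each pixel by averaging over the
# pixel's active area (100% fill factor, so the active width equals the pitch).
I_hat = I_H.reshape(Npix, g, Npix, g).mean(axis=(1, 3))

# Eq. (33): convert to a mean photoelectron count (all values are assumptions).
eta = 0.8                   # quantum efficiency
T = 1.0e-6                  # integration time [s]
h_nu = 1.87e-19             # photon energy near 1.064 um [J]
w = 10.0e-6                 # pixel active width [m]
m_bar = (eta * T / h_nu) * I_hat * w * w   # mean photoelectrons per pixel
```

In a full model the resulting counts would then be compared against the pixel-well depth and corrupted with shot noise and read noise.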

Neglecting the effects of pixel edge diffusion in the FPA,45 remember that the number of hologram photoelectrons $m_H(nx_s,my_s)$, at any particular pixel and time interval, is a random process with mean44

$$\bar m_H(nx_s,my_s)=\frac{\eta T}{h\nu}\,\hat I_H(nx_s,my_s)\,w_xw_y.\tag{33}$$

Here, $\eta$ is the quantum efficiency of the FPA, $T$ is the integration time of the FPA, $h\nu$ is the quantized photon energy, and the quantity $w_xw_y$ is the active area of a pixel. Over the entire FPA, it then follows that the hologram photoelectron density $D_H(x_2,y_2)$, in units of photoelectrons per square meter (pe/m²), is simply a sampled version of the analog form of Eq. (33). This declaration leads to the following expressions:

$$D_H(x_2,y_2)=\bar m_H(x_2,y_2)\,\frac{1}{x_s}\mathrm{comb}\!\left(\frac{x_2}{x_s}\right)\frac{1}{y_s}\mathrm{comb}\!\left(\frac{y_2}{y_s}\right)\mathrm{rect}\!\left(\frac{x_2}{Nx_s}\right)\mathrm{rect}\!\left(\frac{y_2}{My_s}\right),\tag{34}$$

where

$$\bar m_H(x_2,y_2)=\frac{\eta T}{h\nu}\iint I_H(x_2',y_2')\,\mathrm{rect}\!\left(\frac{x_2-x_2'}{w_x}\right)\mathrm{rect}\!\left(\frac{y_2-y_2'}{w_y}\right)dx_2'\,dy_2'=\frac{\eta T}{h\nu}\,I_H(x_2,y_2)*\mathrm{rect}\!\left(\frac{x_2}{w_x}\right)\mathrm{rect}\!\left(\frac{y_2}{w_y}\right)\tag{35}$$

is the analog form of Eq. (33),

$$\frac{1}{|w|}\mathrm{comb}\!\left(\frac{x}{w}\right)=\sum_{n=-\infty}^{\infty}\delta(x-nw)\tag{36}$$

is a scaled comb function,

$$\delta(x-x')=\lim_{w\to0}\frac{1}{|w|}\,p\!\left(\frac{x-x'}{w}\right)\tag{37}$$

is an impulse function,43 and $p(x)$ is a pulse-like function {e.g., the rectangle function [cf. Eq. (32)]}. Note that in Eq. (35), $*$ denotes 2-D convolution, such that

$$V(x,y)*W(x,y)=\iint V(x',y')\,W(x-x',y-y')\,dx'\,dy',\tag{38}$$

where $x'$ and $y'$ are dummy variables of integration.

From Eqs. (30)–(38), we can gain access to an estimate of the signal complex field $U_S(x_1,y_1)$ that exists in the exit-pupil plane of the imaging system [cf. Fig. 2 and Eq. (26)]. First, we let $x_2=\lambda f\nu_x$ and $y_2=\lambda f\nu_y$ and apply a 2-D inverse Fourier transformation to Eq. (34), such that

$$\mathcal{F}^{-1}\{D_H(\lambda f\nu_x,\lambda f\nu_y)\}_{x_1,y_1}=\frac{1}{\lambda^2f^2}\tilde D_H\!\left(\frac{x_1}{\lambda f},\frac{y_1}{\lambda f}\right)=\frac{\eta T}{h\nu}\left[\mathcal{F}^{-1}\{\tilde I_H(\lambda f\nu_x,\lambda f\nu_y)\}_{x_1,y_1}\,w_x\,\mathrm{sinc}\!\left(\frac{w_x}{\lambda f}x_1\right)w_y\,\mathrm{sinc}\!\left(\frac{w_y}{\lambda f}y_1\right)\right]*\frac{1}{\lambda f}\mathrm{comb}\!\left(\frac{x_s}{\lambda f}x_1\right)\frac{1}{\lambda f}\mathrm{comb}\!\left(\frac{y_s}{\lambda f}y_1\right)*\frac{Nx_s}{\lambda f}\mathrm{sinc}\!\left(\frac{Nx_s}{\lambda f}x_1\right)\frac{My_s}{\lambda f}\mathrm{sinc}\!\left(\frac{My_s}{\lambda f}y_1\right),\tag{39}$$

where $\mathrm{sinc}(x)=\sin(\pi x)/(\pi x)$ is a sinc function. Taking a look at the remaining 2-D inverse Fourier transformation in Eq. (39), we obtain the following relationship:

$$\mathcal{F}^{-1}\{\tilde I_H(\lambda f\nu_x,\lambda f\nu_y)\}_{x_1,y_1}=\mathcal{F}^{-1}\{|U_S(\lambda f\nu_x,\lambda f\nu_y)|^2\}_{x_1,y_1}+\mathcal{F}^{-1}\{|U_R(\lambda f\nu_x,\lambda f\nu_y)|^2\}_{x_1,y_1}+\mathcal{F}^{-1}\{U_S(\lambda f\nu_x,\lambda f\nu_y)\,U_R^*(\lambda f\nu_x,\lambda f\nu_y)\}_{x_1,y_1}+\mathcal{F}^{-1}\{U_R(\lambda f\nu_x,\lambda f\nu_y)\,U_S^*(\lambda f\nu_x,\lambda f\nu_y)\}_{x_1,y_1},\tag{40}$$

where the superscript $*$ denotes complex conjugate. From Eqs. (25) and (29), it then follows that

$$\mathcal{F}^{-1}\{\tilde I_H(\lambda f\nu_x,\lambda f\nu_y)\}_{x_1,y_1}=\frac{1}{\lambda^2f^2}\,U_S(x_1,y_1)*U_S^*(x_1,y_1)+|A_R|^2\,\delta(x_1)\delta(y_1)+\frac{A_R^*e^{jkf}}{j\lambda f}\,U_S(x_1-x_R,y_1-y_R)-\frac{A_Re^{-jkf}}{j\lambda f}\,U_S^*(x_1+x_R,y_1+y_R).\tag{41}$$

The first term in Eq. (41) is nothing more than a scaled 2-D autocorrelation of the desired signal complex field $U_S(x_1,y_1)$. This term is centered on axis, and its support is physically twice the exit-pupil diameter $D_1$ in width. The second term in Eq. (41) is also centered on axis and contains separable impulse functions [cf. Eq. (37)]. These impulse functions are at the strength of the uniform irradiance associated with the reference (i.e., $|A_R|^2$). The last two terms in Eq. (41) form a complex-conjugate pair and contain the desired signal complex field $U_S(x_1,y_1)$, both scaled and shifted off axis by the coordinates $(x_R,y_R)$.

Substituting Eq. (41) into Eq. (39), we obtain the following result after rearranging the special functions:

$$\tilde D_H\!\left(\frac{x_1}{\lambda f},\frac{y_1}{\lambda f}\right)=\frac{\eta T}{h\nu}\,w_x\,\mathrm{sinc}\!\left(\frac{w_x}{\lambda f}x_1\right)w_y\,\mathrm{sinc}\!\left(\frac{w_y}{\lambda f}y_1\right)\left[\frac{1}{\lambda^2f^2}\,U_S(x_1,y_1)*U_S^*(x_1,y_1)+|A_R|^2\,\delta(x_1)\delta(y_1)+\frac{A_R^*e^{jkf}}{j\lambda f}\,U_S(x_1-x_R,y_1-y_R)-\frac{A_Re^{-jkf}}{j\lambda f}\,U_S^*(x_1+x_R,y_1+y_R)\right]*\mathrm{comb}\!\left(\frac{x_s}{\lambda f}x_1\right)\mathrm{comb}\!\left(\frac{y_s}{\lambda f}y_1\right)*\frac{Nx_s}{\lambda f}\,\mathrm{sinc}\!\left(\frac{Nx_s}{\lambda f}x_1\right)\frac{My_s}{\lambda f}\,\mathrm{sinc}\!\left(\frac{My_s}{\lambda f}y_1\right),\tag{42}$$

in units of photoelectrons (pe). This result is repeated above in Eq. (1).
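The demodulation chain of Eqs. (30)–(42) can be sketched end to end in a unit-scaled discrete model (all parameters are illustrative assumptions; the $e^{jkf}$ and $1/j\lambda f$ scale factors are dropped). The sketch records a digital hologram with an off-axis reference, applies a 2-D IFFT, and windows out the shifted term of Eq. (41) to recover the pupil-plane field estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256                     # FPA pixels per side (assumption)
Q = 4                       # sampling quotient, purely illustrative
xR = yR = N // 4            # off-axis LO shift, in pupil-plane samples

# Pupil-plane "truth": a circular aperture with a random phase screen
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)
R = N / (2 * Q)             # aperture radius in samples
aperture = (np.hypot(X, Y) <= R).astype(float)
U_pupil = aperture * np.exp(1j * rng.uniform(-np.pi, np.pi, (N, N)))

# Propagate to the image plane (unit-scaled FFT) and add the off-axis
# reference of Eq. (29); record the hologram irradiance of Eq. (30).
U_S = np.fft.fft2(U_pupil)
fx = np.fft.fftfreq(N)
FX, FY = np.meshgrid(fx, fx)
U_R = 100.0 * np.exp(1j * 2 * np.pi * (xR * FX + yR * FY))
I_H = np.abs(U_S + U_R) ** 2

# Demodulate: the 2-D IFFT places the desired term of Eq. (41) at (xR, yR);
# window it out and recenter to form the complex-field estimate.
G = np.fft.ifft2(I_H)
mask = np.hypot(X - xR, Y - yR) <= R
est = np.roll(G * mask, (-yR, -xR), axis=(0, 1)) / 100.0

# Fidelity of the estimate: magnitude of the normalized inner product
overlap = np.abs(np.vdot(U_pupil, est)) / \
          (np.linalg.norm(U_pupil) * np.linalg.norm(est))
```

The window isolates the third term of Eq. (41); the recovered field matches the truth up to the small contamination that leaks in from the on-axis autocorrelation term, which the strong reference suppresses.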

The authors would like to thank Samuel T. Thurman for his careful review and insightful comments toward a draft form of this completed paper. In addition, the authors would like to thank Paul F. McManamon for his invitation to submit to a special section call of Optical Engineering. This research was funded by the High Energy Laser Joint Technology Office. The views expressed in this document are those of the authors and do not necessarily reflect the official policy or position of the Air Force, the Department of Defense, or the U.S. government.

References

1. Sasiela R. J., Electromagnetic Wave Propagation in Turbulence: Evaluation and Application of Mellin Transforms, 2nd ed., SPIE Press, Bellingham, Washington (2007).
2. Andrews L. C. and Phillips R. L., Laser Beam Propagation through Random Media, 2nd ed., SPIE Press, Bellingham, Washington (2005).
3. Tyson R. K., Principles of Adaptive Optics, 4th ed., CRC Press, Boca Raton, Florida (2016).
4. Poon T.-C. and Liu J.-P., Introduction to Modern Digital Holography with MATLAB, Cambridge University Press, New York (2014).
5. Thurman S. T. and Bratcher A., "Multiplexed synthetic-aperture digital holography," Appl. Opt. 54(3), 559–568 (2015).
6. Merritt P., Beam Control for Laser Systems, Directed Energy Professional Society, Albuquerque, New Mexico (2012).
7. Muller R. A. and Buffington A., "Real-time correction of atmospherically degraded telescope images through image sharpening," J. Opt. Soc. Am. 64(9), 1200–1210 (1974).
8. Fienup J. R. and Miller J. J., "Aberration correction by maximizing generalized sharpness metrics," J. Opt. Soc. Am. A 20(4), 609–620 (2003).
9. Miller N. J., Dierking M. P., and Duncan B. D., "Optical sparse aperture imaging," Appl. Opt. 46(23), 5933–5943 (2007).
10. Thurman S. T. and Fienup J. R., "Phase-error correction in digital holography," J. Opt. Soc. Am. A 25(4), 983–994 (2008).
11. Thurman S. T. and Fienup J. R., "Correction of anisoplanatic phase errors in digital holography," J. Opt. Soc. Am. A 25(4), 995–999 (2008).
12. Tippie A. E. and Fienup J. R., "Phase-error correction for multiple planes using a sharpness metric," Opt. Lett. 34(5), 701–703 (2009).
13. Tippie A. E. and Fienup J. R., "Multiple-plane anisoplanatic phase correction in a laboratory digital holography experiment," Opt. Lett. 35(19), 3291–3293 (2010).
14. Rabb D. et al., "Distributed aperture synthesis," Opt. Express 18(10), 10334–10342 (2010).
15. Rabb D. J. et al., "Multi-transmitter aperture synthesis," Opt. Express 18(24), 24937–24945 (2010).
16. Rabb D. J., Stafford J. W., and Jameson D. F., "Non-iterative aberration correction of a multiple transmitter system," Opt. Express 19(25), 25048–25056 (2011).
17. Gunturk B. G., Rabb D. J., and Jameson D. F., "Multi-transmitter aperture synthesis with Zernike based aberration correction," Opt. Express 20(24), 26448–26457 (2012).
18. Kraczek J. R., McManamon P. F., and Watson E. A., "High resolution non-iterative aperture synthesis," Opt. Express 24(6), 6229–6239 (2016).
19. Marron J. C. et al., "Atmospheric turbulence correction using digital-holographic detection: experimental results," Opt. Express 17(14), 11638–11651 (2009).
20. Marron J. C. et al., "Extended-range digital holographic imaging," Proc. SPIE 7684, 76841J (2010).
21. Marron J. C. and Schroeder K. S., "Holographic laser radar," Opt. Lett. 18(5), 385–387 (1993).
22. Tippie A. E., Kumar A., and Fienup J. R., "High-resolution synthetic-aperture digital holography with digital phase and pupil correction," Opt. Express 19(13), 12027–12038 (2011).
23. Osten W. et al., "Recent advances in digital holography [invited]," Appl. Opt. 53(27), G44–G63 (2014).
24. Doval F. et al., "Propagation of the measurement uncertainty in Fourier transform digital holographic interferometry," Opt. Eng. 55(12), 121709 (2016).
25. Barchers J. D., Fried D. L., and Link D. J., "Evaluation of the performance of Hartmann sensors in strong scintillation," Appl. Opt. 41(6), 1012–1021 (2002).
26. Fried D. L., "Branch point problem in adaptive optics," J. Opt. Soc. Am. A 15(10), 2759–2768 (1998).
27. Ghiglia D. C. and Pritt M. D., Two-Dimensional Phase Unwrapping: Theory, Algorithms, and Software, John Wiley and Sons, New York (1998).
28. Gonglewski J. D. et al., "Coherent image synthesis from wave-front sensor measurements of a nonimaged laser speckle field: a laboratory demonstration," Opt. Lett. 16(23), 1893–1895 (1991).
29. Arrasmith W. W., "Branch-point-tolerant least-squares phase reconstructor," J. Opt. Soc. Am. A 16(7), 1864–1872 (1999).
30. Venema T. M. and Schmidt J. D., "Optical phase unwrapping in the presence of branch points," Opt. Express 16(10), 6985–6998 (2008).
31. Steinbock M. J., Hyde M. W., and Schmidt J. D., "LSPV+7, a branch-point-tolerant reconstructor for strong turbulence adaptive optics," Appl. Opt. 53(18), 3821–3831 (2014).
32. Spencer M. F. et al., "Deep-turbulence simulation in a scaled-laboratory environment using five phase-only spatial light modulators," in Proc. 18th Coherent Laser Radar Conf. (2016).
33. Nielson P. E., Effects of Directed Energy Weapons, Directed Energy Professional Society, Albuquerque, New Mexico (2009).
34. Perram G. P. et al., An Introduction to Laser Weapon Systems, Directed Energy Professional Society, Albuquerque, New Mexico (2010).
35. Barchers J. D. and Rhoadarmer T. A., "Evaluation of phase-shifting approaches for a point-diffraction interferometer with the mutual coherence function," Appl. Opt. 41(36), 7499–7509 (2002).
36. Rhoadarmer T. A., "Development of a self-referencing interferometer wavefront sensor," Proc. SPIE 5553, 112 (2004).
37. Spencer M. F. et al., "Digital holography wave-front sensing in the presence of strong atmospheric turbulence and thermal blooming," Proc. SPIE 9617, 961705 (2015).
38. Schmidt J. D., Numerical Simulation of Optical Wave Propagation, SPIE Press, Bellingham, Washington (2010).
39. Voelz D. G., Computational Fourier Optics: A MATLAB Tutorial, SPIE Press, Bellingham, Washington (2010).
40. Brennan T. J. and Roberts P. H., AOTools: The Adaptive Optics Toolbox for Use with MATLAB, User's Guide Version 1.4, the Optical Sciences Company, Anaheim, California (2010).
41. Brennan T. J., Roberts P. H., and Mann D. C., WaveProp: A Wave Optics Simulation System for Use with MATLAB, User's Guide Version 1.3, the Optical Sciences Company, Anaheim, California (2010).
42. Gaskill J. D., Linear Systems, Fourier Transforms, and Optics, John Wiley and Sons, New York (1978).
43. Tyo J. S. and Alenin A. S., Field Guide to Linear Systems in Optics, SPIE Press, Bellingham, Washington (2015).
44. Saleh B. E. A. and Teich M. C., Fundamentals of Photonics, 2nd ed., John Wiley and Sons, New York (2007).
45. Dereniak E. L. and Boreman G. D., Infrared Detectors and Systems, John Wiley and Sons, New York (1996).

Mark F. Spencer is a research physicist at the Air Force Research Laboratory, Directed Energy Directorate. He is also an assistant adjunct professor of optical sciences and engineering (OSE) at the Air Force Institute of Technology (AFIT), Department of Engineering Physics. He is a senior member of SPIE and received his BS in physics from the University of Redlands in 2008 and his MS and PhD in OSE from AFIT in 2011 and 2014, respectively.

Robert A. Raynor received his master’s degree in applied physics from the AFIT. He currently works as a research physicist at the Air Force Research Laboratory, Directed Energy Directorate. His research efforts concentrate on developing models for wavefront sensors that use digital-holographic detection and tracking sensors that use partially coherent illumination of optically rough targets.

Matthias T. Banet is an undergraduate student at the New Mexico Institute of Mining and Technology in Socorro, New Mexico. He is currently a summer intern at the Air Force Research Laboratory, Directed Energy Directorate. This fall, he will receive his BS degree in physics and minors in materials science and mathematics. Upon the completion of his undergraduate studies, he plans to pursue a PhD in optical sciences and engineering.

Dan K. Marker is currently in his 27th year of employment with the Air Force Research Laboratory, Directed Energy Directorate. His research efforts concentrate on the development of phased array imaging, phased array beam projection, optical quality polymer films, and tiled-array laser systems. He received his master’s degrees in mechanical engineering from the University of New Mexico and an MBA in finance from Webster University. He is currently the vice president of the Directed Energy Professional Society.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.



Figures

Fig. 1
A description of digital-holographic detection in the off-axis IPRG. Here, a highly coherent master-oscillator (MO) laser is split into two optical trains. The first optical train actively illuminates an unresolved cooperative object. Analogously, the second optical train creates an off-axis local oscillator (LO), so that tilted-spherical-wave illumination is incident on an FPA. The spherical-wave reflections from the unresolved cooperative object then back-propagate through deep-turbulence conditions and, upon being imaged onto the FPA, coherently interfere with the tilted-spherical-wave illumination from the off-axis LO. In turn, the recorded interference pattern on the FPA is known as a digital hologram, and upon taking a 2-D IFFT, we obtain an estimate of the wrapped phase (and amplitude) that exists in the exit-pupil plane of the imaging system.

Fig. 2
A description of the experimental parameter space used within the computational wave optics experiments.

Fig. 3
(a, b) The normalized signal and (c, d) the normalized digital hologram in the image plane for a constant SNR, where the analytical SNR is 20. As the diffraction-limited sampling quotient, QI, increases, the number of FPA pixels contained within the diffraction-limited imaging diameter, D2 (white circles), increases proportionally. Note that the results presented here contain no aberrations.

Fig. 4
(a, b) The wrapped phase and (c, d) normalized amplitudes associated with the Fourier plane for a constant SNR, where the analytical SNR is 20 and the numerical SNR is 21.5. In general, the Fourier plane contains the complex-field estimate (i.e., an estimate of the amplitude and wrapped phase that exists in the exit-pupil plane of the imaging system). The results show that as the diffraction-limited sampling quotient, QI, increases, the complex-field estimates contained within an exit-pupil diameter, D1 (white circles), take up less and less space within the Fourier plane because the side length of the Fourier plane, QID1, increases proportionally.

Fig. 5
(a) The wrapped-phase truth and (b–d) wrapped-phase estimates for a constant SNR, where the analytical SNR is 20 and the numerical SNR is 21.5. In general, by windowing out the appropriate data in the Fourier plane (white circles in Fig. 4), we obtain the complex-field estimate (i.e., an estimate of the amplitude and wrapped phase that exists in the exit-pupil plane of the imaging system). The results contained in (a–d) show that as the diffraction-limited sampling quotient, QI, increases, the field-estimated Strehl ratio, SF, decreases ever so slightly.

Fig. 6
The average percentage error as a function of the analytical SNR for the deep-turbulence trade space presented in Table 1. Here, the results show that as the analytical SNR increases, the average percentage error between the numerical and analytical SNRs decreases. Note that the error bars depict the width of the standard deviation for 400 realizations.

Fig. 7
The average field-estimated Strehl ratio, SF, as a function of the diffraction-limited sampling quotient, QI, for the deep-turbulence trade space presented in Table 1. Here, the analytical SNR increases from 1 in (a) to 10, 20, and 100 in (b–d), respectively. The results contained in (a) and (b) show that as the diffraction-limited sampling quotient, QI, increases, the average field-estimated Strehl ratio, SF, decreases (i.e., for low SNRs, lower QI’s perform better). In contrast, the results contained in (c) and (d) show that as the diffraction-limited sampling quotient, QI, increases, the average field-estimated Strehl ratio, SF, increases (i.e., for high SNRs, higher QI’s perform better). Note that the error bars depict the width of the standard deviation for 400 realizations.

Tables

Table 1 The deep-turbulence trade space of interest in this paper. Remember that the log-amplitude variance σχ², which is also referred to as the Rytov number, gives a measure of the strength of the scintillation. As σχ² grows above 0.25 (for a spherical wave), scintillation gives rise to branch points in the phase function. Also remember that the coherence diameter r0, which is also referred to as the Fried parameter, gives a measure of the achievable imaging resolution. As the ratio of the exit-pupil diameter D1 to r0 grows above 4 (for a spherical wave), higher-order aberrations beyond tilt start to limit the achievable imaging resolution. Here, D1 = 30 cm.

