Special Section on Practical Holography: New Procedures, Materials, and Applications

Extended focus imaging in digital holographic microscopy: a review

Author Affiliations
Marcella Matrecano

CNR-Istituto Nazionale di Ottica, Via Campi Flegrei 34, 80078 Pozzuoli (NA), Italy

Melania Paturzo

CNR-Istituto Nazionale di Ottica, Via Campi Flegrei 34, 80078 Pozzuoli (NA), Italy

Pietro Ferraro

CNR-Istituto Nazionale di Ottica, Via Campi Flegrei 34, 80078 Pozzuoli (NA), Italy

Opt. Eng. 53(11), 112317 (Jul 17, 2014). doi:10.1117/1.OE.53.11.112317
History: Received February 17, 2014; Revised June 12, 2014; Accepted June 23, 2014

Open Access

Abstract.  The microscope is one of the most useful tools for exploring and measuring the microscopic world. However, its applications are restricted because the microscope's depth of field (DOF) is not sufficient for obtaining a single image, at the necessary magnification, in which the whole longitudinal object volume is in focus. Currently, the answer to this issue is the extended focused image. Techniques proposed over the years to overcome the limited DOF constraint of holographic systems and to obtain a completely in-focus image are discussed. We divide them into two macro categories: the first involves methods used to reconstruct three-dimensional generic objects (including techniques inherited from traditional microscopy, such as the sectioning and merging approach, or multiplane imaging), while the second involves methods for objects recorded on a plane tilted with respect to the hologram plane (including not only the use of reconstruction techniques and rotation matrices, but also the introduction of a numerical cubic phase plate or hologram deformations). The aim is to compare these methods and to show how they work under the same conditions, proposing different applications for each.

The microscope is one of the most useful tools for exploring and measuring the microscopic world, and its power quickly became clear to its discoverers. The microscope allows small objects to be imaged at very large magnifications. At the same time, there is a clear trade-off: imaging very small objects brings a reduced depth of focus. That means that the higher the magnification of the microscope objective, the thinner the corresponding in-focus imaged volume of the object along the optical axis.

In fact, the microscope's depth of field (DOF), depending on the conditions of use, is not sufficient to obtain a single image in which the whole longitudinal volume of the object is in focus. If an accurate analysis of the whole object has to be performed, it is necessary to have a single sharp image in which all of the object's details are in focus, even if they are located at different planes along the longitudinal direction.

When exploring an object having a complex three-dimensional (3-D) shape at high magnification, it is necessary to change the distance between the object and the microscope objective; doing so brings different portions of the object, located on different image planes, into focus. Many scientists, using microscopes in different areas of research, are very aware of this intrinsic limitation of microscopes. In fact, in the community of microscopists, a single image with the necessary magnification but in which the entire object is in focus is highly desirable.

This necessity has motivated many research efforts aimed at overcoming the aforementioned problems. Currently, the solution to this issue goes under the name of the extended focused image (EFI) and many solutions have been proposed.

In traditional microscopy, the EFI is composed by selecting, from a stack of numerous images recorded at different distances between the microscope objective and the object, the portions of each image that are in sharp focus.1–7 Modern microscopes are equipped with micrometric mechanical translators actuated by piezoelectric elements. The microscope objective is moved along the optical axis between the highest and lowest points of the object with an appropriate number of steps. Essentially, what is performed is a mechanical scanning of the microscope to image the object at a discrete number of planes across the whole volume it occupies.

For each longitudinal step, an image is recorded, stored in a computer, and linked with information on the depth at which it was taken. The in-focus portion of each image is identified through some appropriate parameter, for example, a contrast measurement.2 Once these parts are identified, they are added to produce a composite EFI. In practice, the portion of the object from each image that appears to be, or is numerically recognized as being, in good focus is extracted by means of numerical algorithms.5,7 Then the different portions are composed together to give a single image in which all details are in focus. In the EFI, all points of an object are in focus independent of their height in the topography of the object.6 Of course, the smaller the stepping increments performed in the mechanical scanning, the more accurate the EFI result.
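To make the procedure concrete, the following minimal Python sketch composes an EFI from a recorded z-stack; the local-variance contrast measure and the window size are illustrative assumptions, not the specific criteria of the cited works.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def compose_efi(stack, window=15):
    """Pick, for every pixel, the plane of maximal local contrast.

    stack: array of shape (n_planes, H, W) with grayscale images
    recorded at successive depths; `window` is an illustrative choice.
    """
    stack = stack.astype(float)
    sharpness = np.empty_like(stack)
    for i, img in enumerate(stack):
        mean = uniform_filter(img, window)
        mean_sq = uniform_filter(img * img, window)
        sharpness[i] = mean_sq - mean * mean      # local variance
    best = sharpness.argmax(axis=0)               # per-pixel depth index
    efi = np.take_along_axis(stack, best[None], axis=0)[0]
    return efi, best                              # composite image + depth map
```

The returned depth map links each EFI pixel to the plane it was taken from, mirroring the per-image depth information stored during acquisition.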

On the negative side, the time taken for the acquisition increases with more steps, and more calculation is needed to obtain the EFI. The time for accurate and precise movements for single image acquisition over the entire programmed range essentially depends on the characteristic response time of the piezoactuator. Typically, it is difficult to go below 0.1 s for the acquisition of a single image. Even if the computing time is not a problem, the length of the acquisition process poses a severe limitation on obtaining an EFI for dynamic objects.

An alternative solution that has been investigated is based on the use of a specially designed phase plate placed in the optical path of the microscope. This allows an extension of the depth of focus in the images observable by the microscope.8–12 The phase plate introduces aberrations on the incoming optical rays, at the expense of some distortion and a blurring effect, but is capable of extending the depth of focus. This method is called the wavefront coding approach, and it has a severe drawback: a phase plate must be specifically designed and fabricated as a function of the object under investigation and of the adopted optical system.

The important necessity of having an EFI can be satisfied, in principle, by holography. In fact, this technique has a unique attribute: through an interference process, it records and reconstructs the amplitude and the phase of a coherent wave front that has been reflected, scattered, or transmitted by an object. The reconstruction process allows the entire volume to be imaged. Indeed, a very important advantage of using holography is that only one image has to be recorded. Subsequently, the whole volume can be scanned during the reconstruction process, after the hologram has already been recorded.

Furthermore, in this case, dynamic events can be studied, and the EFI of a dynamic process can be obtained on the basis of sequentially recorded holograms.

In this work, techniques proposed over the years to overcome the limited DOF constraint of the holographic systems and to obtain a completely in-focus representation of the objects are discussed and compared.

In Sec. 1, the theoretical principles of digital holography (DH) are briefly discussed, giving readers an adequate background and a consistent notation.

In Sec. 2, the EFI construction methods are discussed. For the sake of simplicity, we divide them into two macro categories by application type.

The first one involves methods used to reconstruct 3-D generic objects. This includes techniques inherited from traditional microscopy, such as the sectioning and merging approach, or multiplane imaging, which simultaneously visualizes several layers within the imaged volume. Other approaches are also described, such as 3-D deconvolution methods that allow rebuilding of the true 3-D object distribution.

The second macro area involves methods for objects recorded on a plane tilted with respect to the hologram plane. This case has raised great interest over the years because of its applications in several fields, and many strategies have been proposed. Most include the use of reconstruction techniques and rotation matrices, but the introduction of a numerical cubic phase plate or of hologram deformations is also described.

In Sec. 3, some of the described techniques are illustrated with clear examples. In particular, for each macro area, some methods are compared experimentally through practical applications on digital holograms.

General Principles

Holography is a method that allows reconstruction of whole optical wave fields. A hologram, therefore, records all of the information available in a beam of light, including the phase, not just the amplitude as in traditional photography. The holographic process takes place in two stages: the recording of an image and the wave field reconstruction.

Holography requires the use of coherent illumination and introduces a reference beam derived from the same source. The light scattered by the object under test and the reference wave interfere in the hologram plane, in either an in-line or an off-axis geometry. Since the intensity at any point in this interference pattern also depends on the phase of the object wave, the resulting recording (the hologram) contains information on the phase as well as the amplitude of the object wave. If the hologram is illuminated again with the original reference wave, a virtual and a real image of the object are reconstructed.

In DH, the photographic plate is replaced by a digital device such as a charge-coupled device (CCD) camera; the reconstruction process is performed by multiplying the stored digital hologram by the numerical description of the reference wave and by convolving the result with the impulse response function. While the recording step is basically an interference process, the reconstruction can be explained by diffraction theory.

Figure 1 shows the geometry, in which the z axis is the optical axis. The hologram is positioned in the (ξ,η) plane at z = 0, (x′,y′) is the object plane at z = −d (d > 0), and (x,y) is an arbitrary plane of observation at z = d. All these planes are normal to the optical axis.

Figure 1: Geometry for digital recording and numerical reconstruction.

The diffracted field in the image plane is given by the Rayleigh-Sommerfeld diffraction formula

$$b(x,y) = \frac{1}{i\lambda} \iint h(\xi,\eta)\, r(\xi,\eta)\, \frac{e^{ik\rho}}{\rho}\, \cos\Omega \; d\xi\, d\eta, \tag{1}$$

where the integration is carried out over the hologram surface, and

$$\rho = \sqrt{d^2 + (x-\xi)^2 + (y-\eta)^2} \tag{2}$$

is the distance from a given point in the hologram plane to a point of observation; d is the reconstruction distance, i.e., the distance measured backward from the hologram plane (ξ,η) to the image plane (x,y); h(ξ,η) is the recorded hologram; r(ξ,η) represents the reference wave field; k denotes the wave number; and λ is the wavelength of the laser source. The quantity cos Ω is an obliquity factor13 normally set to 1 because of small angles. Equation (1) represents a complex wave field with intensity and phase distributions I and Ψ given by

$$I(x,y) = b(x,y)\, b^*(x,y); \qquad \Psi(x,y) = \arctan \frac{\Im\{b(x,y)\}}{\Re\{b(x,y)\}}. \tag{3}$$

Here $\Im\{b\}$ and $\Re\{b\}$ denote the imaginary and real parts of b, respectively, and * denotes complex conjugation.

Different approaches to implementing Eq. (1) in a computer have been proposed.14 Most of them convert the Rayleigh-Sommerfeld diffraction integral into one or more Fourier transforms, which makes the numerical implementation easy because several fast Fourier transform (FFT) algorithms are available for efficient computation.

Reconstruction Methods
Discrete Fresnel transformation

In the Fresnel approximation, the factor ρ is replaced by the distance d in the denominator of Eq. (1), and the square root in the argument of the exponential function is replaced by the first terms of a binomial expansion. When terms of higher order than the first two are excluded, ρ becomes

$$\rho \approx d \left[ 1 + \frac{1}{2}\frac{(x-\xi)^2}{d^2} + \frac{1}{2}\frac{(y-\eta)^2}{d^2} \right]. \tag{4}$$

Since ρ appears in the exponent, neglecting the terms of higher order than the first introduces only very small phase errors. A sufficient condition13 is that the distance d is large enough:

$$d^3 \gg \frac{\pi}{4\lambda} \left[ (x-\xi)^2 + (y-\eta)^2 \right]^2_{\max}. \tag{5}$$

Since this is an overly stringent condition, even shorter distances produce accurate results. Because the exponent is the most critical factor, keeping only the first term in the denominator introduces acceptable errors. Thus, the propagation integral in Eq. (1) becomes

$$b(x,y) = \frac{1}{i\lambda d} \iint h(\xi,\eta)\, r(\xi,\eta)\, e^{ikd\left[1 + \frac{(x-\xi)^2}{2d^2} + \frac{(y-\eta)^2}{2d^2}\right]}\, d\xi\, d\eta, \tag{6}$$

which represents a parabolic approximation of spherical waves. With these approximations, Eq. (1) takes the form

$$b(x,y) = e^{i\pi \lambda d (\nu^2 + \mu^2)} \iint h(\xi,\eta)\, r(\xi,\eta)\, g(\xi,\eta)\, e^{2i\pi(\xi\nu + \eta\mu)}\, d\xi\, d\eta, \tag{7}$$

where the quadratic phase function g(ξ,η) is the impulse response,

$$g(\xi,\eta) = \frac{e^{i 2\pi d/\lambda}}{i\lambda d}\; e^{\frac{i\pi}{\lambda d}(\xi^2 + \eta^2)}, \tag{8}$$

and ν = x/(dλ) and μ = y/(dλ) are the spatial frequencies.

The discrete finite form of Eq. (7) is obtained through the pixel size (Δξ, Δη) of the CCD array, which differs from the pixel size (Δx, Δy) in the image plane x-y according to

$$\Delta x = \frac{d\lambda}{M\,\Delta\xi}, \qquad \Delta y = \frac{d\lambda}{N\,\Delta\eta}, \tag{9}$$

where M and N are the pixel numbers of the CCD array in the x and y directions, respectively.

According to Eq. (7), the wave field b(x,y) is essentially determined by the two-dimensional (2-D) Fourier transform of the quantity h(ξ,η) r(ξ,η) g(ξ,η). For rapid numerical calculations, a discrete formulation of Eq. (7) involving a 2-D FFT algorithm is used:

$$b(m,n,d) = \frac{e^{i 2\pi d/\lambda}}{i\lambda d}\; e^{\frac{i\pi}{\lambda d}\left(n^2 \Delta x^2 + m^2 \Delta y^2\right)}\; \mathrm{DFT}\!\left\{ h(j,l)\, r(j,l)\, e^{\frac{i\pi}{\lambda d}\left(j^2 \Delta\xi^2 + l^2 \Delta\eta^2\right)} \right\}, \tag{10}$$

where j, l, m, and n are integers (−M/2 ≤ j, m ≤ M/2; −N/2 ≤ l, n ≤ N/2) and DFT{·} denotes the discrete Fourier transform.

In the formulation based on Eq. (10), the reconstructed image is enlarged or contracted according to the depth d; see Eq. (9).
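As an illustration of Eq. (10), a minimal NumPy sketch for a square N × N hologram follows; the function name, parameter names, and the centered-FFT bookkeeping are our own assumptions.

```python
import numpy as np

def fresnel_reconstruct(hologram, ref_wave, d, wl, pitch):
    """Discrete Fresnel transform, Eq. (10), for a square N x N hologram.

    hologram, ref_wave: complex arrays h(j,l) and r(j,l);
    d: reconstruction distance; wl: wavelength; pitch: sensor pixel size.
    """
    N = hologram.shape[0]
    idx = np.arange(N) - N // 2
    XI, ETA = np.meshgrid(idx * pitch, idx * pitch)
    chirp = np.exp(1j * np.pi / (wl * d) * (XI**2 + ETA**2))   # inner chirp of Eq. (10)
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(hologram * ref_wave * chirp)))
    dx = wl * d / (N * pitch)                                  # image pixel size, Eq. (9)
    X, Y = np.meshgrid(idx * dx, idx * dx)
    prefactor = np.exp(2j * np.pi * d / wl) / (1j * wl * d)
    b = prefactor * np.exp(1j * np.pi / (wl * d) * (X**2 + Y**2)) * field
    return b, dx   # intensity abs(b)**2 and phase np.angle(b), as in Eq. (3)
```

The returned pixel size dx makes the d-dependent image scaling of Eq. (9) explicit.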

Reconstruction by the convolution approach

This is an alternative approach, useful for keeping the size of the reconstructed image constant.15 In this formulation, the wave field b(x,y,d) can be calculated by

$$b(x,y) = \iint h(\xi,\eta)\, r(\xi,\eta)\, f(x-\xi,\, y-\eta)\, d\xi\, d\eta, \tag{11}$$

where

$$f(x-\xi,\, y-\eta) = \frac{1}{i\lambda}\, \frac{e^{ik\rho}}{\rho}\, \cos\Omega \approx \frac{1}{i\lambda}\, \frac{e^{ik\sqrt{d^2+(x-\xi)^2+(y-\eta)^2}}}{\sqrt{d^2+(x-\xi)^2+(y-\eta)^2}}. \tag{12}$$

Equation (12) shows that the linear system characterized by f(ξ,η; x,y) = f(x−ξ, y−η) is space-invariant: the integral in Eq. (11) is a convolution. This allows the application of the convolution theorem;13 thus, the wave field can be found as the inverse transform

$$b(x,y) = F^{-1}\left\{ F[h(\xi,\eta)\, r(\xi,\eta)] \cdot F[f(\xi,\eta)] \right\}. \tag{13}$$

With this method, the size of the reconstructed image does not change with respect to the hologram plane (Δx = Δξ, Δy = Δη), and one Fourier transform plus one inverse Fourier transform yield the 2-D reconstructed image at a distance d. Indeed, an analytical expression for F{f} is readily available, saving one Fourier transform in Eq. (13).

Although the computational procedure is heavier in this case than in the Fresnel approximation approach of Eq. (10), this method allows for easy comparison of the reconstructed images at different distances d, since the size does not change when the reconstruction distance is modified. Furthermore, in this case, we get an exact solution to the diffraction integral as long as the Nyquist sampling theorem is not violated.
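A minimal sketch of the convolution approach, Eqs. (11) to (13), follows; sampling the impulse response directly on the hologram grid, setting cos Ω ≈ 1, and the parameter names are assumptions.

```python
import numpy as np

def convolution_reconstruct(hologram, ref_wave, d, wl, pitch):
    """Convolution approach, Eqs. (11)-(13): sample the impulse response f
    on the hologram grid and evaluate the convolution with FFTs. The
    image-plane pixel size stays equal to `pitch`."""
    N = hologram.shape[0]
    coords = (np.arange(N) - N // 2) * pitch
    X, Y = np.meshgrid(coords, coords)
    rho = np.sqrt(d**2 + X**2 + Y**2)
    f = np.exp(1j * 2 * np.pi * rho / wl) / (1j * wl * rho)    # Eq. (12), cos(Omega) ~ 1
    F_hr = np.fft.fft2(np.fft.ifftshift(hologram * ref_wave))
    F_f = np.fft.fft2(np.fft.ifftshift(f))
    return np.fft.fftshift(np.fft.ifft2(F_hr * F_f))           # Eq. (13)
```

As noted above, replacing the numerically computed F{f} with its analytical expression would save one of the forward transforms.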

Angular spectrum method

Another possible solution is to identify the complex field as a composition of plane waves traveling in different directions away from a plane.16 The propagated field across any other parallel plane can be calculated by adding the contributions of these plane waves, with different phase delays depending on each plane wave's angle of propagation.

In other words, if the angular spectrum is defined as the Fourier transform of the complex wave field at the plane z = 0,

$$A(u,v;0) = \iint a(\xi,\eta,0)\, e^{-j 2\pi(u\xi + v\eta)}\, d\xi\, d\eta = F\{a(\xi,\eta,0)\} = F\{h(\xi,\eta)\, r(\xi,\eta)\}, \tag{14}$$

with u and v the spatial frequencies corresponding to ξ and η, then the angular spectrum A(u,v;d) at z = d can be calculated by multiplying A(u,v;0) by the transfer function of free-space propagation:17,18

$$A(u,v;d) = A(u,v;0)\, e^{j 2\pi w d}, \tag{15}$$

with $w = w(u,v) = (\lambda^{-2} - u^2 - v^2)^{1/2}$, where λ is the wavelength used. At this point, the reconstructed complex wave field at any plane z = d parallel to the hologram is found by

$$b(x,y,d) = F^{-1}\left\{ A(u,v;0)\, e^{j 2\pi w d} \right\} = F^{-1}\left\{ F\{a(x,y,0)\}\, e^{j 2\pi w d} \right\}. \tag{16}$$

This plane-wave decomposition approach presents many attractive features: it does not require any approximation of the Rayleigh-Sommerfeld diffraction integral, and fast numerical implementations can be used.
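A compact sketch of the angular spectrum propagation of Eqs. (14) to (16) follows; suppressing the evanescent components (w² < 0) is an implementation choice, and the names are illustrative.

```python
import numpy as np

def angular_spectrum_reconstruct(hologram, ref_wave, d, wl, pitch):
    """Angular spectrum method, Eqs. (14)-(16), with no approximation of
    the diffraction integral; evanescent components are suppressed."""
    N = hologram.shape[0]
    u = np.fft.fftfreq(N, pitch)                 # spatial frequencies of xi, eta
    U, V = np.meshgrid(u, u)
    w_sq = 1.0 / wl**2 - U**2 - V**2
    w = np.sqrt(np.maximum(w_sq, 0.0))
    H = np.exp(2j * np.pi * w * d) * (w_sq > 0)  # free-space transfer function, Eq. (15)
    A0 = np.fft.fft2(hologram * ref_wave)        # Eq. (14)
    return np.fft.ifft2(A0 * H)                  # Eq. (16)
```

Because only two FFTs and a pointwise multiplication are involved, the same routine can be called repeatedly with different d values to scan the volume.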

As extensively discussed above, in DH the reconstruction process is performed numerically by processing the digital hologram. The hologram is modeled as the interference, at the CCD camera, between the field diffracted by the object and a reference beam. The Rayleigh-Sommerfeld diffraction formula [see Eq. (1)] allows us to reconstruct the whole wave field, in amplitude and phase, backward from the CCD array at any image plane in the studied volume. Because the reconstruction of a digital hologram is fully numeric, reconstructions at different image planes along the longitudinal (z) axis can be performed from a single experimentally recorded hologram, simply by changing the back-propagation distance in the modeled diffraction integral.

This unique feature was initially exploited by Haddad et al.19 in holographic microscopy, but it was quickly appreciated by many others. In fact, researchers realized that with digital reconstruction, accurate mechanical adjustment to find the focal plane is no longer necessary, since the image at any distance can be numerically calculated.

Furthermore, compared to classic microscopy, digital holographic microscopy benefits from other advantages. For example, a satisfying reconstruction can be performed even in the case of time evolution of the object, and the reconstruction step distance can be made as small as needed because no mechanical movement is involved.

Unfortunately, as with many imaging systems, holographic microscopy suffers from a limited depth of focus, which depends on the optical properties of the employed microscope objective. If the object under investigation has a 3-D shape, then at a fixed reconstruction distance d only some portion of the object will be in focus. Nevertheless, it is possible to cover the entire object volume by reconstructing a number of image planes along the z axis in the volume of interest, with the desired longitudinal resolution. In this way, the image stack of the entire volume can easily be obtained. Once obtained, the EFI can be constructed as in classical microscopy, and the most used extended-DOF algorithms can be employed in DH.

Nevertheless, the great advantage of the holographic technique is that it preserves the 3-D information, so, in principle, it should be possible to extract these data in some way and display them in a single in-focus image. In DH, the real challenge is to pull out and show the 3-D information directly rather than building it piece by piece.

Different strategies to achieve this goal exist. In this section, we discuss the techniques proposed over the years to overcome the limited DOF constraint of holographic optical systems and obtain a completely in-focus representation of the objects. For the sake of simplicity, we divide them into two categories by application type: the first involves methods used to reconstruct 3-D generic objects; the second involves methods for objects recorded on a tilted plane.

Recovering Generic 3-D Objects
Sectioning and merging approach

In holographic microscopy, the EFI concept was extended by Ferraro et al.,20 who refer to the image merged from differently focused subareas as the extended focus image. They used the distance information carried by the phase image for the correct selection of the in-focus portions to be taken from the image stack. This results in the correct construction of the final EFI, provided that solutions are adopted to keep the size of the object independent of the reconstruction distance and to center it by appropriately modeling the reference beam.21

In practice, they noted that the phase map Ψ(x,y) in DH incorporates information about the topographic profile of the object under investigation. In fact, the optical path difference (OPD) is related to the phase map by the following equation:

$$\mathrm{OPD}(x,y) = \frac{\lambda}{2\pi}\, \Psi(x,y). \tag{17}$$

If p is the distance from the lowest point of the object to the lens and q is the corresponding distance on the image plane, then any other point of the object at a different height Δp(x,y) is in good focus at a different imaging plane in front of the CCD, according to the following simple relation:

$$\Delta q(x,y) = M^2\, \Delta p(x,y), \tag{18}$$

where M is the magnification.

In a reflection configuration, OPD(x,y) = 2Δp(x,y); taking into account Eqs. (17) and (18), they obtained the range of distances at which the digital hologram has to be reconstructed to image the whole volume in focus:

$$\Delta q(x,y) = M^2\, \frac{\lambda}{4\pi}\, \Psi(x,y). \tag{19}$$

Figure 2 represents the conceptual flow process to get the EFI from a digital hologram of a micro-electromechanical system (MEMS):

  1. recording the digital hologram
  2. reconstruction of the complex whole wave field from the hologram
  3. extraction of the phase map of the object from the complex wave field
  4. amplitude reconstruction of a stack of images of the entire volume from the lowest to the highest point in the profile of the object (adopting size controlling and centering)
  5. extraction of the EFI from the stack of amplitude images, on the basis of the phase map obtained at the previous step and according to Eq. (19).

Figure 2: Conceptual flow chart describing how the extended focused image (EFI) is obtained by the digital holography approach. Images are from Ref. 20.
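A sketch of steps 4 and 5 of this flow follows, assuming an unwrapped phase map psi, a reflection configuration [Eq. (19)], and the fresnel_reconstruct sketch given earlier; the size control and centering of Ref. 21 are omitted for brevity, and all names are illustrative.

```python
import numpy as np

def efi_from_phase(hologram, ref_wave, psi, d0, mag, wl, pitch, n_planes=32):
    """Convert the phase map to a per-pixel reconstruction distance via
    Eq. (19), reconstruct a stack of amplitude images, and take each EFI
    pixel from the plane its own height selects."""
    dq = mag**2 * wl * psi / (4 * np.pi)              # Eq. (19), reflection case
    distances = d0 + np.linspace(dq.min(), dq.max(), n_planes)
    stack = np.stack([np.abs(fresnel_reconstruct(hologram, ref_wave, d, wl, pitch)[0])
                      for d in distances])
    target = d0 + dq                                  # distance demanded per pixel
    best = np.abs(distances[:, None, None] - target[None]).argmin(axis=0)
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

Note that with the Fresnel transform the pixel size varies with d [Eq. (9)], which is precisely why the size-controlling solutions of Ref. 21 are needed in a full implementation.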

Later, in addition to the reflection configuration, Colomb et al.22 extended this scheme to transmission. Furthermore, they generalized the application to other areas, such as metrology. For example, the method is employed on phase reconstructions of micro-optics (a microlens and a retroreflector, see Fig. 3), as well as on amplitude ones. They extracted the extended focus phase image from a stack of N reconstructions using a generalized reconstruction distance map:

$$d(x,y) = s_c\, M^2\, \mathrm{OPD}(x,y) + d_0, \tag{20}$$

where $s_c = \pm 1$ (the sign differing between the reflection and transmission configurations) and d0 is the longest reconstruction distance.

Figure 3: Amplitude (1) and phase (2) reconstructions of a high-aspect-ratio retroreflector immersed in distilled water, measured in transmission, for different reconstruction distances: (a) 3.6 cm, (b) 6.6 cm, (c) 11.0 cm, and (d) the EFI. Images are from Ref. 22.

Figure 3 presents the amplitude and phase reconstructions obtained for a high-aspect-ratio retroreflector, measured in transmission at λ = 664 nm and computed for different reconstruction distances. The reconstruction distance map is computed by adjusting the reconstruction distance d0 = 3.6 cm to focus the retroreflector edges [Fig. 3(a)]. The EFIs for the amplitude and phase are presented in Fig. 3(d). Ultimately, this method allows reconstruction not only of extended focused amplitude images but also, especially, of the real topography of an object taller than the DOF of the microscope objective.

A typical drawback of this digital holographic EFI technique is that it works only with a single object whose axial dimension is larger than the DOF.23 In the case of multiple objects sparsely distributed in space, or when the 3-D object shape is discontinuous or does not vary slowly, as with step-like height structures, it has difficulty automatically identifying the multiple, unknown-shaped targets and bringing each into its best focal position.

In this case, an algorithm able to recognize the presence of multiple targets should be used. Such an algorithm makes it possible to deal with the objects separately: for each one, a map of heights is calculated, each target is refocused to its best focal plane, and, finally, the targets are merged back to form a high-precision 3-D shape result. This type of technique also belongs to the category of so-called sectioning and merging, and several attempts have been presented.

For example, using the independent component analysis technique or the discrete wavelet transform, Do et al.24–26 have synthesized an EFI from reconstructed holographic images of many 3-D objects at different in-focus distances. They achieved visually successful results. Nevertheless, their methods suffer from blurring, since more or less out-of-focus images are taken into account in the merging phase.

For optical scanning holography, some authors27–29 have suggested modeling the sectioning task as an inverse problem, and Wiener filtering or iterative algorithms were implemented. Although these methods have reported remarkable results, they work only for amplitude recovery: the holographic phase information is lost during processing, so they cannot be used for purely phase objects.

A more effective approach is to separate the whole image into small blocks, as described in Refs. 30 to 32. A focus measurement algorithm is applied to each individual block, and the best focal position is calculated. The EFI is stitched together by taking the best focal position for each block. When a large number of objects are present, such as small particles, this idea can be taken to the limit by assessing the best distance pixel-wise to obtain a depth map for each pixel of the image.33

In any case, a possible critical point is the choice of the focus detection criterion.

Typically, many reconstructed frames are collected along the axial direction, and the best focus plane is chosen by some kind of sharpness indicator. A number of focus metrics have been proposed, using an intensity gradient,34 self-entropy,35,36 gray-level variance,37 spectral l1 norms,38 wavelet theory,39 and stereo disparity,40 among others; for a comparison of these methods, see Ref. 41. The majority of focus-finding applications look for amplitude extrema, even though in many cases it is the phase contrast that is actually of interest. However, another problem arises when the examined block does not contain enough features (either presenting no object or being occupied by a whole object with no significant change during digital refocusing), which makes it difficult to find the exact focus plane with focus detection algorithms.
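For orientation, the following sketch evaluates a few of the sharpness indicators listed above on a single reconstructed frame; the exact definitions vary across the cited works, so these expressions are only representative.

```python
import numpy as np

def focus_metrics(img):
    """Representative sharpness indicators for one reconstructed frame;
    the plane that extremizes the chosen metric is taken as the focus
    plane (for some metrics the sought extremum is a minimum)."""
    g = img.astype(float)
    gy, gx = np.gradient(g)
    p = np.abs(g) / (np.sum(np.abs(g)) + 1e-12) + 1e-12
    return {
        "gradient": np.sum(gx**2 + gy**2),              # intensity gradient
        "variance": g.var(),                            # gray-level variance
        "entropy": -np.sum(p * np.log(p)),              # self-entropy
        "spectral_l1": np.sum(np.abs(np.fft.fft2(g))),  # spectral l1 norm
    }

# e.g., best plane of a stack by gray-level variance:
# best = max(range(len(stack)), key=lambda i: focus_metrics(stack[i])["variance"])
```

Applied block-wise or pixel-wise, the same metrics yield the depth maps used by the sectioning and merging methods above.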

Multiplane imaging

In many fields of science, such as particle field imaging, in vivo microscopy, optical propagation studies, wavefront sensing, and medical imaging, multiplane imaging is very common and useful, allowing the simultaneous visualization of several layers within the imaged volume.42 This is another way to preserve a wide DOF without sacrificing the axial resolution of the objective lens. In practice, the imaging path is multiplexed with beam splitters into multiple paths, each with a different focal length and its own camera.43 In this way, the full axial resolution of the microscope objective is maintained in each of the recorded images. Nevertheless, this approach is quite impractical and has several limitations.

A different and smart approach was proposed in the work of Blanchard and Greenaway,44 in which a diffraction grating was adopted in the optical setup to split the propagating optical field into three diffraction orders (i.e., −1, 0, +1). The grating was distorted with an appropriate quadratic deformation and, consequently, the wave field resulting from each diffraction order could form an image of a different object plane. A further investigation was published some years later, in which the focusing properties of a diffraction grating with parabolic grooves were exploited for extending the depth of focus.45,46 More recently, remarkable progress has been made in the use of a quadratically deformed grating for multiplane imaging of biological samples, demonstrating nanoparticle tracking with nanometer resolution along the optical axis.47

Confirming the high interest in multiplexed imaging, an approach named depth-of-field multiplexing is reported in Ref. 48. A high-resolution spatial light modulator was adopted in a standard microscope to generate a set of superposed multifocal off-axis Fresnel lenses, which sharply image different focal planes. This approach provides simultaneous imaging of different focal planes in a sample using only a single camera exposure. The maximum number of imaged axial planes is further increased in Ref. 49 using colored RGB illumination and detection. In their paper, the authors demonstrated the synchronous imaging of as many as 21 different planes in a single snapshot under certain conditions.

In DH, Paturzo and Finizio50 demonstrated that a synthetic diffraction grating can be included in the numerical reconstruction to simultaneously image three planes at different depths.

In practice, in the numerical reconstruction algorithm, the hologram is multiplied by the transmission function of a quadratically distorted grating:

$$T(\xi,\mu) = a + b\cos\left[ A(\xi^2 + \mu^2) + C(\xi + \mu) \right], \tag{21}$$

where a and b control the relative contrast between the images corresponding to the ±1 orders and the central one, A is the quadratic deformation, and C is the grating period.

The insertion of such a digital grating allows the simultaneous imaging of three object planes at different depths in the same field of view. In fact, the digitally deformed grating has focusing power in the nonzero orders and, therefore, acts as a set of three lenses of positive, neutral, and negative power. In the reconstruction plane, three replicas of the image appear; each one is associated with a diffraction order and has a different level of defocus. The distance from the object plane corresponding to the i'th order to that of the zeroth order is given by

$$\Delta d^{(i)} = \frac{2\, i\, d^2\, W}{N^2 \Delta\xi^2 + 2\, i\, d\, W}, \tag{22}$$

where d is the reconstruction distance, N is the number of pixels of size Δξ, and W = A N²λ/(2π) is the defocus coefficient.
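The core of the method reduces to a few lines: the hologram is multiplied by the grating transmission of Eq. (21) and then propagated once with any of the reconstruction sketches of Sec. 1. The function name and the default contrast values below are illustrative.

```python
import numpy as np

def apply_multiplane_grating(hologram, A, C, a=0.5, b=0.5, pitch=1.0):
    """Multiply the hologram by the quadratically distorted grating of
    Eq. (21); after a single propagation, the -1, 0, +1 orders focus
    three planes separated according to Eq. (22)."""
    N = hologram.shape[0]
    coords = (np.arange(N) - N // 2) * pitch
    XI, MU = np.meshgrid(coords, coords)
    T = a + b * np.cos(A * (XI**2 + MU**2) + C * (XI + MU))
    return hologram * T
```

Varying A shifts the conjugate planes of the ±1 orders via Eq. (22), which is the flexibility exploited in the experiments described next.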

To demonstrate their technique, they performed different experiments. In the first case, three different wires were positioned at distances of 100, 125, and 150 mm, respectively, from the CCD array. A digital hologram was recorded in a lens-less configuration. They performed two numerical reconstructions of the corresponding hologram at 125 mm, the in-focus distance of the twisted wire, but with two different quadratic deformations of the numerical grating, that is, two different values of the parameter A. Figure 4 shows the amplitudes of the obtained reconstructions.

Figure 4: Numerical reconstructions of the "three wires" hologram at d = 125 mm, the in-focus distance of the twisted wire, with two different values of the numerical grating's quadratic deformation, in order to obtain (a) the vertical wire in focus in the −1 order image and (b) the horizontal wire in focus in the +1 order image. Images are from Ref. 50.

As a further experiment, they also applied the method to holograms of a biological sample. The specimen is formed by three in vitro mouse preadipocyte 3T3-F442A cells located at different depths. Figure 5 shows the amplitude reconstruction at a distance d = 105 mm, at which the cell indicated by the blue arrow is in focus (see the zeroth-order image). The +1 order corresponds to a distance d = 92.7 mm, at which the cell indicated by the yellow arrow is in good focus, while the −1 order corresponds to a depth of d = 121 mm, where the filaments are visible (highlighted by the red ellipse in Fig. 5).

Figure 5: Amplitude reconstruction of a "cells" hologram at a distance d = 105 mm, at which the cell indicated by the blue arrow is in focus. The +1 order corresponds to a distance d = 92.7 mm, at which the cell indicated by the yellow arrow is in good focus, while the −1 order corresponds to a depth of d = 121 mm, where the filaments are well visible (highlighted by the red ellipse). Images are from Ref. 50.

The use of a numerical grating instead of a physical one has the great advantage of increasing the flexibility of the system. For example, depending on the grating period and the amount of deformation, the distances of the multiple planes can be easily changed and adapted to the needs of the observer.

Moreover, they verified that the adoption of a deformed diffraction grating can be exploited in multiwavelength DH.

Afterward, Pan51 presented an angular spectrum method (ASM)-based reconstruction algorithm to simultaneously image multiple planes at different depths. A shift parameter, introduced in the diffraction integral kernel, accounts for the transverse displacement of the image-plane coordinate system at different depths. Combining diffraction integral kernels with different shift values and reconstruction depths yields multiplane imaging in a single reconstruction. Furthermore, a method to extend the depth of focus using a single-shot digital hologram is also proposed.

3-D imaging

The very important advantage of DH is that all the 3-D information intrinsically contained in the digital hologram can be usefully employed to construct a single image with all portions of a 3-D object in good focus. However, the question of the relationship between the 3-D distribution of the wave field and the configuration of the object is still not solved.

Consider first the case of a single wavelength and a single propagation direction of the illuminating wave (a single k-vector): the reconstructed wavefront contains contributions originating from all parts of the specimen and cannot be considered the true 3-D image of the object. Indeed, the coherent source produces interference with each of the reflected or transmitted waves or, more generally, with the diffracted waves coming from each part of the object.

The final image is then the superposition of the contributions from all the sections, in addition to the one where the wavefront is reconstructed (the in-focus plane). The contributions of the upper and lower sections of the object (out-of-focus planes), therefore, appear as undesired contributions that blur the image. A major objective of the research is to adequately solve the problem of true 3-D object imaging by eliminating all unwanted contributions.

In holographic microscopy, different strategies exist to solve this problem.

Initially, Onural52 extended the impulse function concept to curves and surfaces and used it to improve the formulation of the diffraction problem, paving the way for elegant solutions of many associated problems. However, these require 3-D Fourier transforms, integrals over surfaces, and rotation matrices, making the problem numerically difficult to treat.

In Refs. 53 and 54, 3-D deconvolution methods with a point spread function (PSF) are extended to holographic reconstructions with the aim of rebuilding the true 3-D distribution of small particles. Unfortunately, 3-D deconvolution requires a large amount of memory, and data resampling is often necessary, implying a loss of spatial resolution.

3-D data were retrieved by Pégard and Fleischer55 using 3-D deconvolution in microfluidic microscopy. In particular, the focal stack generated by tracking samples flowing through a tilted microfluidic channel [see Fig. 6(a)] and the system PSF [Fig. 6(b)] are processed with a Wiener deconvolution filter to extract the size, position, orientation, and subcellular surface features of aggregated yeast cells [Fig. 6(c)].

Figure 6: (a) Focal stack and (b) point spread function (PSF) focal stacks recorded in deconvolution microfluidic microscopy. The three-dimensional (3-D) structure of the object is deconvolved, and an iso-level surface showing the 3-D envelope of the yeast particles is displayed in (c). Images are from Ref. 55.

In diffractive tomography, Cotte et al.56 combined the theory of coherent image formation and diffraction. Through inverse filtering with a realistic coherent transfer function, namely 3-D complex deconvolution, they enabled the reconstruction of the object's scattered field. The authors expect this technique to lead to aberration correction and improved resolution.

By combining the advantages of full-field frequency-domain optical coherence tomography with those of photorefractive holography, Koukourakis et al.57 proposed a system for complete 3-D imaging. In their system, a 3-D stack of spectral interferograms is constructed to obtain depth information. The setup employs a wavelength-scanning tunable laser as the light source and uses a photorefractive medium to holographically store the spectral interferograms obtained by scanning the wavelength.

Reconstruction in a Tilted Plane

We dedicate a separate section to the techniques proposed over the years for the case of an image plane tilted with respect to the object plane.

The need to propagate fields between tilted planes has grown with the advance of integrated optical circuits, which are often constructed with crystal structures that work efficiently only for certain directions, usually not orthogonal to the optical axis. Furthermore, in modern biology and medicine, techniques like total internal reflection (TIR) holographic microscopy are of great interest for performing quantitative phase microscopy of cell-substrate interfaces. Unfortunately, they use a prism that alters the geometry of typical acquisition systems, thus requiring special solutions. Therefore, in all these cases and others, such as tomographic applications, if one is interested in inspecting the object characteristics on a plane tilted with respect to the recording hologram plane, as illustrated in Fig. 7, it is more efficient to develop a method capable of reconstructing the hologram at arbitrarily tilted planes. Basically, this means simulating light propagation through diffraction calculation between arbitrarily oriented planes.

Figure 7: Schematic illustration for reconstructing digital holograms on tilted planes.

Diffraction between arbitrarily oriented planes

Leseberg and Frère58 were the first to address the problem of describing the diffraction pattern of a tilted plane using the Fresnel approximation. It is calculated by a Fourier transformation, a coordinate transformation, and a multiplication by a quadratic phase.

Later on, a general-purpose numerical method for analyzing optical systems using full scalar diffraction theory was proposed by Delen.59 His approach is based on Rayleigh-Sommerfeld diffraction and can be applied to wide-angle diffraction. In particular, the author proposed two methods, one for shifted planes and the other for tilted planes, which can be combined sequentially for planes that are both shifted and tilted. This is a very advantageous feature, because other methods are limited to rotation around one axis. For example, Yu et al.60 used the Fourier transform method (FTM) for the numerical reconstruction of digital holograms with changing viewing angles.

Certainly, the use of a plane-wave angular spectrum and coordinate rotations represents a more flexible solution. Initially, Tommasi and Bianco61 proposed a technique for finding the relation between the plane-wave spectra of the same field with respect to two coordinate systems rotated relative to each other, in order to calculate computer-generated holograms of off-axis objects. Subsequently, De Nicola et al.62 and Matsushima63 proposed methods to obtain the EFI of objects or targets recorded on inclined planes by taking the angular spectrum into consideration.

The angular spectrum-based algorithm for reconstructing the wave field on an arbitrarily inclined plane basically consists of two steps. In the first, the angular spectrum A(u,v;d) is calculated on an intermediate plane (x-y) at distance d. A standard transformation matrix is then used to rotate the wave vector coordinates. This matrix is, in general, a rotation matrix such as R_y(θ_y), or the product of several rotation matrices:

$$R_x(\theta_x) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta_x & -\sin\theta_x \\ 0 & \sin\theta_x & \cos\theta_x \end{pmatrix};\quad R_y(\theta_y) = \begin{pmatrix} \cos\theta_y & 0 & \sin\theta_y \\ 0 & 1 & 0 \\ -\sin\theta_y & 0 & \cos\theta_y \end{pmatrix};\quad R_z(\theta_z) = \begin{pmatrix} \cos\theta_z & -\sin\theta_z & 0 \\ \sin\theta_z & \cos\theta_z & 0 \\ 0 & 0 & 1 \end{pmatrix}. \tag{23}$$

In the second step, the rotated spectrum is inverse Fourier transformed to calculate the reconstructed wave field on the tilted plane, namely

$$\hat{b}(\hat{x},\hat{y}) = F^{-1}\left\{ \hat{A}\left( \hat{u}\cos\theta_y + \hat{w}\sin\theta_y,\;\; \hat{u}\sin\theta_x\sin\theta_y + \hat{v}\cos\theta_x - \hat{w}\sin\theta_x\cos\theta_y;\; d \right) \right\}. \tag{24}$$

It should be remarked that reconstructing the field according to Eq. (24) is valid within the paraxial approximation. However, it can be generalized to include the frequency-dependent terms of the Jacobian associated with the rotation. Furthermore, the spectrum should be shifted in the reference Fourier space. Since the complex value of the spectrum has to be obtained for each sampling point on an equidistant sampling grid, interpolation is needed because of the nonlinearity of the mapping in Eq. (24).

In summary, in these cases, the fast Fourier transform is used twice, and coordinate rotation of the spectrum enables one to reconstruct the hologram on the tilted plane. Interpolation of the spectral data is shown to be effective for correcting the anamorphism of the reconstructed image.
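A simplified sketch of this two-step procedure follows, for a single rotation about the y axis; the Jacobian factor and the spectrum shift mentioned above are omitted, and the linear resampling is one of several possible interpolation choices.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def reconstruct_tilted(field_d, wl, pitch, theta_y):
    """Tilted-plane reconstruction [cf. Eqs. (23) and (24)]: rotate the
    spectrum of the field already propagated to the intermediate plane at
    distance d, resample it on a regular grid, and inverse-FFT."""
    N = field_d.shape[0]
    u = np.fft.fftshift(np.fft.fftfreq(N, pitch))          # ascending frequency grid
    A = np.fft.fftshift(np.fft.fft2(field_d))
    U, V = np.meshgrid(u, u)                               # tilted-plane frequencies
    w = np.sqrt(np.maximum(1.0 / wl**2 - U**2 - V**2, 0.0))
    u_src = U * np.cos(theta_y) + w * np.sin(theta_y)      # mapped source frequency
    pts = np.stack([V, u_src], axis=-1)                    # (row, col) query points
    def resample(part):
        itp = RegularGridInterpolator((u, u), part, bounds_error=False, fill_value=0.0)
        return itp(pts)
    A_rot = resample(A.real) + 1j * resample(A.imag)       # interpolation step
    return np.fft.ifft2(np.fft.ifftshift(A_rot))
```

For two-axis rotations, the mapping of u_src simply picks up the additional θ_x terms of Eq. (24).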

In Fig. 8, a case of two-axis rotation is shown.63 The planar object is slanted at 30 deg around the y axis after rotation at 60 deg around the x axis. Therefore, the transformation matrix T = R_y(30 deg) R_x(60 deg) is used to retrieve the original pattern.

Figure 8: Amplitude images (a) reconstructed in the parallel plane and (b) reconstructed in the tilted plane by using the rotational transformation for two-axis rotation. The planar object is rotated at 60 deg around the x axis prior to rotation at 30 deg around the y axis. Images are from Ref. 63.

Since these methods suffer from a loss-of-resolution problem, Jeong and Hong64 presented an effective method for the pixel-size-maintained reconstruction of images on arbitrarily tilted planes. The method is based on the plane-wave expansion of the diffracted wave fields and on the three-axis rotation of the wave vectors. The images on the tilted planes are reconstructed without loss of the frequency content of the hologram and have the same pixel sizes. For example, Fig. 9(a) presents the hologram reconstruction of a 1951 USAF target, rotated by θy = 45 deg and θx = 40 deg, on the plane parallel to the CCD plane at z = 2.20 cm using the ASM. The resolution target's center is located on the optic (z) axis, 2.42 cm in front of the CCD plane. It can be seen that the left-hand upper corner of the image is focused, while the other parts are out of focus because of the object tilting. Figure 9(b) is the image at z = 2.42 cm reconstructed with a correction method; it is focused across the whole area of the resolution target, but its pixel size is 0.7 times that of the hologram due to the scaling caused by the FFT. The image in Fig. 9(c), which was reconstructed by their method, is focused across the whole plane, and the ratio between the x and y dimensions of the reconstructed resolution target is the same as that of the real object, which proves that their method can faithfully reconstruct images on tilted planes.

Figure 9: Reconstructed images of the tilted resolution target (640 × 480 pixels). (a) Image on the plane parallel to the CCD at z = 2.20 cm reconstructed with the ASM. (b) Image on the tilted plane at z = 2.42 cm reconstructed from the whole area. (c) Image reconstructed by the method in Ref. 64.

These techniques have become so popular that they are now also successfully applied in many other fields, such as biology.65–67 In particular, in the paradigm of TIR holographic microscopy, Ash et al. used angular spectrum rotation for imaging organisms, cell-substrate interfaces, adhesions, and tissue structures. Figure 10 shows a basic configuration of the interferometer for digital holographic microscopy with TIR. The object beam enters the prism and undergoes TIR at the hypotenuse A of the right-angle prism. The presence of a specimen on the prism surface modulates the phase front of the reflected light. Because of the prism, the object plane A optically appears to the camera, i.e., to the plane H, at a certain angle of inclination, so an en face reconstruction requires an algorithm that accounts for this anamorphism.

Figure 10: Apparatus for DH of total internal reflection. BS, beam splitters; M, mirrors; L, lenses; A, object plane; H, hologram plane. Image is from Ref. 65.

In Fig. 11, the numerical correction procedure is depicted. The sample, Allium cepa (onion) cells, resides on the prism face and provides a direct image, as shown in Fig. 11(a). With the addition of the reference beam, the CCD camera captures the hologram created by the superposition [Fig. 11(b)]. The hologram is then processed in Fourier space, including filtering [Fig. 11(c)]. The complex array comprising the angular spectrum is then transformed back into real image space, yielding both the amplitude and the phase information [Figs. 11(d) and 11(e)]. If the untilting process is included in the reconstruction, the results are those depicted in Figs. 11(g) and 11(h). In Fig. 11(f), a typical en face direct image of onion tissue is presented for comparison.

Figure 11: Process of digital holographic microscopy with untilt via the angular spectrum method: engineering run with onion tissue (A. cepa). (a) Direct image with tilt. (b) Hologram. (c) Angular spectrum, filtering the first-order peak. (d) Amplitude image reconstruction with inherent tilt. (e) Phase image with inherent tilt. (f) Typical en face direct image of A. cepa. (g) Untilted (and transposed) amplitude image. (h) Phase image. Image is from Ref. 66.

A 3-D version of this approach was introduced by Onural.68 He used the impulse function over a surface as a tool, converting the original 2-D problem into a 3-D one. Even though its formulation is analytically correct, it is proposed only for the continuous case.

Another method was suggested by Lebrun et al.69 to extract information about a 3-D particle field in arbitrarily tilted planes by DH. In particular, they used the wavelet transform to reconstruct small particles in a plane whose orientation is arbitrarily specified by the user. The pixels whose 3-D coordinates belong to this plane are selected and juxtaposed to rebuild the particle images.

More recently, a partial numerical Fresnel propagation technique for the complex wave has been proposed70 for refocusing tilted image planes, with solutions to reduce the influence of aliasing and Fresnel diffraction in the numerical reconstruction process. A scaled Fourier transform is used instead in Ref. 71 to calculate light diffraction from a shifted and tilted plane. It seems to be faster than calculating the diffraction by a Fresnel transform at each point (see, for instance, Ref. 72), and this technique can be used to generate planar holograms from computer graphics data.

To simplify the FTM and, at the same time, solve the pixel-size consistency problem, Wang et al.73 presented a GPU-based parallel reconstruction method for the EFIs of tilted objects. In summary, they used fast Fourier transform pruning with frequency shifting combined with coordinate transformation. Their method has high imaging precision and speed, but it requires GPU assistance and some specific expertise.

Generally, existing numerical methods for refocusing between inclined planes need a priori knowledge of the input scene, such as the object size and the average reconstruction tilt angle or distance, to properly adjust the EFI algorithm. Such a priori knowledge is easy to obtain in an academic experiment, but it is usually unknown in real experiments. In Ref. 74, Kostencka et al. proposed an appropriate tool for the automatic localization of a tilted optimum focus plane. The method is based on estimating the focusing condition of the optical field by evaluating the sharpness of its amplitude distribution. The developed algorithm is fully automated. It consists of two major steps: first, the rotation axis is localized from the map of local sharpness; then, the angular orientation of the image plane is derived by maximizing the focus of the optical fields reconstructed in many subsequent tilted planes.

In the case of a highly tilted plane or 3-D shapes with high gradients, the strategies described so far encounter several problems. For DH in a microscopic configuration, two reconstruction algorithms were presented by Kozacki et al.75 The first is an extension of the well-known thin-element approximation to tilted geometry; it can be applied to the case of large sample tilts, but it requires the sample's numerical aperture to be low. The second is called the tilted local ray approximation algorithm and is based on the analysis of local ray transitions through the measured object. The authors proposed a modified algorithm for numerical propagation between tilted planes, which can be applied to the shape characterization of tilted samples with a high shape gradient.

Phase plate

In conventional microscopy, another possible solution for an extended DOF is wavefront coding. This method was introduced by Dowski et al.8–10 more than a decade ago. Wavefront coding introduces a known, strong optical aberration that dominates all other terms, such as defocus. This makes the optical system essentially focus-invariant over a large range, so straightforward computational tools can be used to recover the image information.

Under this imaging paradigm, several variants have been proposed.12,76,77 Quirin et al.77 used a wavefront-coded imaging system coupled to a spatial light modulation (SLM)-based illumination system (see Fig. 12) to image fluorescence from multiple sites in three dimensions, in both scattering and transparent media.

Figure 12: Design of the extended depth of field (EDOF) microscope. (a) Experimental configuration of the joint spatial light modulation and EDOF imaging microscope for 3-D targeting and monitoring; each component is described in detail in Sec. 4.1 of Ref. 77. The phase aberration shown in (b) is the ideal diffractive optical element for the cubic-phase modulation, placed in an accessible region between L9 and L10 without affecting the illumination pupil. The experimental PSF of the imaging system is presented for the conventional microscope in (c) and the EDOF microscope in (d). The 3-D volumes in (c) and (d) represent the 50% intensity cutoff of each axial plane, and the axis units are micrometers.

For example, experimental results for 3-D SLM illumination in transparent media, with both the conventional and the extended DOF microscope, are shown in Fig. 13. In this case, one sample is translated axially over −500 μm ≤ δz ≤ +500 μm from the classical focal plane (defined as δz = 0) in 4-μm intervals, while another is held fixed in the focal plane (600 μm below the surface of the medium), as shown in Fig. 13(a). The results from a conventional imaging microscope are presented in Fig. 13(b): a rapid loss of imaging performance occurs as the illumination translates beyond the narrow focal plane. In contrast, the restored image from the extended DOF microscope, presented in Fig. 13(c), shows a relative increase in the out-of-focus signal and tightly localized points, regardless of axial location. Although the results are clearly visible and noticeable, their extended DOF microscope requires a priori information on the target locations, imprinted on the system by the user.

Figure 13: The 3-D illumination pattern is shown in (a). The results from imaging the 3-D pattern in bulk fluorescent material are given for the conventional microscope (b) and the EDOF microscope (c). Images are from Ref. 77.

In analogy to what was proposed by Dowski, Matrecano et al.78 showed that a cubic phase plate (CPP) can be easily and conveniently included in the numerical reconstruction of digital holograms to enhance the DOF of an optical imaging system and to recover the EFI of a tilted object in a single reconstruction step. Moreover, they offered clear empirical proof through several appropriate experiments: the first on an amplitude target and the others on biological samples. The advantage lies in avoiding the use of real optical components, together with the complex fabrication process required by a continuous cubic phase plate with a high phase deviation.

They propose to modify the numerical reconstruction algorithm. In particular, the hologram is multiplied by a numerical CPP, with a pupil function given by

$$T(\xi,\eta) = e^{\,j\alpha \frac{\xi^3 + \eta^3}{2R^3}}, \qquad |\xi| \le R,\; |\eta| \le R, \tag{25}$$

where R is the half-width of the square CPP and α is a phase modulation factor determining the maximum phase deviation along the axes, given by α = 2πβ/λ. The simulated phase distribution of a numerical CPP with α = 14π and R = 3.43 mm is shown in Fig. 14(a). A phase distribution of this kind is very difficult to fabricate because of its high phase deviation; in fact, it is typically reduced to a relief structure with a 2π phase modulation, see Fig. 14(b). Since in a numerical formulation this step is unnecessary, very high phase deviations can be easily realized.

Figure 14: Phase distribution of a two-dimensional (a) and a one-dimensional (c) cubic phase plate (CPP), the latter along the x coordinate. Their mod-2π representations are shown in (b) and (d).

In Eq. (25), a general 2-D phase delay is expressed as a function of both spatial coordinates. However, if an object is tilted by an angle θ around the vertical y axis, the defocus in the reconstruction varies only along the horizontal x axis. Taking this into account, they modified the phase delay to be a function of the x coordinate only. Moreover, this consideration allows one to interpret the influence of the cubic term within the reconstruction process. In general, quadratic terms44,50 are used to compensate defocus. In this case, the defocus is not uniform but varies along the spatial coordinate. The effect is to change the areas near the focus distance d very little and, proportionally, to change the distant ones much more. The use of a numerical CPP, instead of a physical one, has the great advantage of increasing the system's flexibility. In fact, by varying the amount of phase delay (the α value) and the plate width (the R value), an EFI can be obtained regardless of the tilt angle or the image size.
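A sketch of the numerical CPP of Eq. (25) follows, including the one-dimensional variant discussed above; keeping the same 2R³ normalization in the 1-D case, like the function and parameter names, is our assumption.

```python
import numpy as np

def cubic_phase_plate(N, pitch, alpha, R, one_d=False):
    """Numerical CPP of Eq. (25): a pupil-bounded cubic phase factor that
    multiplies the hologram before reconstruction. With one_d=True the
    delay depends on x only (the tilted-object variant)."""
    coords = (np.arange(N) - N // 2) * pitch
    X, Y = np.meshgrid(coords, coords)
    cubic = X**3 if one_d else X**3 + Y**3
    T = np.exp(1j * alpha * cubic / (2 * R**3))
    T = np.where((np.abs(X) <= R) & (np.abs(Y) <= R), T, 0)   # pupil support
    return T

# usage sketch, reusing the earlier Fresnel sketch:
# b, _ = fresnel_reconstruct(holo * cubic_phase_plate(N, pitch, 14 * np.pi, R),
#                            ref_wave, d, wl, pitch)
```

Because the plate is purely numerical, α and R can be retuned per hologram at no fabrication cost, which is exactly the flexibility claimed above.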