The complexity of sheared beam imaging has driven performance analysis toward detailed computer simulation. Here we present a semi-rigorous theoretical model of imaging performance that includes the effects of light level, finite-aperture detection, and speckle motion. We also present a metric for quantifying the quality of the images produced, called here the Incoherent Pseudo Imaging Strehl (IPS), which is amenable to analysis. Theoretically predicted values of IPS are compared with those computed via simulation, and the theory agrees well with the simulated results.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
In sheared beam imaging, a target is coherently illuminated with three sheared and modulated beams. Both target motion and the temporal distribution of the laser illumination affect image recovery. Movement of the target causes a corresponding motion of the target's speckle pattern on the ground. As the speckle pattern travels across the detector, each integration period, or bin, in a single laser pulse, or frame, records a slightly different speckle distribution. This smears the speckle pattern from bin to bin within a single frame and degrades the recovered image. Image recovery is further complicated by temporally nonuniform illumination. Both smearing and nonuniform pulse shapes create distortions in the spatial and temporal distributions of the speckle patterns that garble image recovery. This paper documents the derivation of the governing equations for the simulation of sheared beam imaging in the presence of these space-time distortions. In addition, we present algorithms for alleviating these distortions prior to image reconstruction. We conclude with simulation results showing the effect of the space-time distortions on sheared beam image recovery and the improvement achieved with the post-detection deblurring and pulse-correction algorithms.
The transverse component of velocity for a moving laser-illuminated target is determined using computer image processing. Active imaging is employed in which the target is coherently illuminated, producing a speckled image in the pupil plane of a CCD array. Multiple image scans are recorded following the same procedure used to retrieve the incoherent target image. These data are electronically processed to produce an interference pattern that depends on the distance traveled between image scans. The velocity component transverse to the line of sight of the system is calculated from the distance traveled and the time elapsed between scans. The axial component of velocity is shown to reduce the visibility of the interference pattern.
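The displacement-between-scans step described above can be sketched with an FFT-based cross-correlation peak search. This is an illustrative sketch only, not the authors' processing chain; the function name and the simple integer peak-picking are ours, and real speckle data would also need windowing and subpixel interpolation.

```python
import numpy as np

def transverse_velocity(frame_a, frame_b, dt, pixel_pitch):
    """Estimate transverse velocity from the shift between two scans.

    The shift is taken from the peak of the FFT cross-correlation;
    velocity = shift * pixel_pitch / dt.
    """
    A = np.fft.fft2(frame_a)
    B = np.fft.fft2(frame_b)
    corr = np.fft.ifft2(np.conj(A) * B).real   # peaks at the shift of B w.r.t. A
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # convert wrapped peak indices to signed shifts
    shift = [p - n if p > n // 2 else p for p, n in zip(peak, corr.shape)]
    return tuple(s * pixel_pitch / dt for s in shift)
```

Given two frames that differ by a pure cyclic shift, the routine recovers the shift exactly; with detector noise the peak simply broadens.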
The use of information about an image in addition to measured data has been shown to make it possible to decrease the noise in the measured data. A recently proposed constraint is perfect knowledge of part of an image. In this paper, these results are generalized, and the usefulness of this new constraint for decreasing noise outside the region of prior knowledge is shown to be a function of the noise-correlation properties of the measured data. In particular, it is shown that prior high-quality knowledge is a generalization of support constraints.
Adaptive optics systems have been used to overcome some of the effects of atmospheric turbulence on large aperture astronomical telescopes. Field experience with adaptive optics imaging systems making short exposure image measurements has shown that some of the images are better than others in the sense that the better images have higher resolution. In this paper we address the issue of selecting and processing the best images from a finite data set of compensated short exposure images. Image sharpness measures are used to select the data subset to be processed. Comparison of the image spectrum SNRs for the cases of processing the entire data set and processing only the selected subset of the data shows a broad range of practical cases where processing the selected subset results in superior SNR. Preliminary results indicate that the effective average point spread functions for applying frame selection to extended objects and point sources under equivalent seeing conditions are nearly identical. Thus, deconvolution could be applied to images obtained through frame selection.
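The frame-selection idea above (rank short exposures by a sharpness measure, then process only the best subset) can be sketched as follows. This is a minimal illustration under assumed choices: the classic sum-of-squared-intensities sharpness measure stands in for whichever measures the authors used, and the function names are ours.

```python
import numpy as np

def sharpness(img):
    # Muller-Buffington-style S1 sharpness: sum of squared intensities
    return float(np.sum(np.asarray(img, dtype=float) ** 2))

def select_and_average(frames, keep_fraction=0.1):
    """Average only the sharpest subset of short-exposure frames.

    Rank frames by sharpness and average the best keep_fraction of them;
    the averaged result can then be passed to deconvolution.
    """
    order = np.argsort([sharpness(f) for f in frames])[::-1]
    k = max(1, int(round(keep_fraction * len(frames))))
    return np.mean([frames[i] for i in order[:k]], axis=0)
```

A frame with concentrated intensity scores higher than a smeared frame of equal total flux, which is what drives the selection.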
Restoration of thermal images distorted by the atmosphere, detected with a focal plane array (FPA) Pt-Si thermal imaging system, is presented. The restoration method is based upon atmospheric modulation transfer function (MTF) analysis. Using turbulence and aerosol MTF prediction models, atmospheric distortions and image degradation are modeled. Restoration results indicate significant improvement in image quality. However, it is critical to include the unique shape of aerosol MTF when modeling atmospheric MTF in order to obtain good restoration.
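Restoration from a modeled atmospheric MTF can be sketched with a Wiener-type regularized inverse filter. This is a generic sketch, not the paper's restoration method: it assumes a phase-free OTF (symmetric blur) and a constant noise-to-signal ratio, and the turbulence and aerosol models discussed above would supply the `mtf` array.

```python
import numpy as np

def mtf_restore(image, mtf, nsr=1e-3):
    """Restore an image given the system MTF sampled on the FFT grid.

    W = H / (H^2 + nsr) is the Wiener-type regularized inverse; nsr
    prevents noise blow-up where the MTF is small.
    """
    G = np.fft.fft2(image)
    W = mtf / (mtf ** 2 + nsr)
    return np.fft.ifft2(G * W).real
```

With a smooth Gaussian-like MTF the restored image sits measurably closer to the original than the blurred input, which is the behavior the filter is designed for.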
Due to noise processes and resolution limitations, the act of measuring a particular physical quantity or quantities, Q, leads to a data set which only crudely describes the quantities of interest. Quite naturally, scientists have developed a number of methods which attempt to optimally extract the values, H (or hypothesis), of these underlying quantities from these flawed data sets. This paper describes the theory of the pixon, the fundamental unit of information in a recorded data set. Describing the data in this representation (co-ordinate system, or basis) provides an efficient means of extracting the underlying properties. The advantages provided by the pixon description can be understood in terms of Bayesian methods, where the pixon basis forms a model with a highly optimized prior. We also show the connection between the pixon concept and Algorithmic Information Content, and how pixons can be thought of as a generalization of the Akaike Information Criterion. In addition, the relationship between pixons and 'coarse graining' and the consequences of measurement uncertainty are related to the role of the Heisenberg uncertainty principle in introducing degeneracy in the phase-space description of statistical mechanics. Finally, we describe our most current formulation of the Fractal Pixon Basis (FPB) and supply examples of image restoration and reconstruction drawn from the field of astronomical imaging.
The notion of a multiresolution support is introduced. This is a sequence of Boolean images related to the significant pixels at each of a number of resolution levels. The multiresolution support is then used for noise suppression in the context of iterative image restoration. Algorithmic details and examples illustrate this approach.
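The multiresolution support idea can be sketched as follows: build detail planes as differences of successive smoothings and flag pixels whose detail exceeds a noise threshold at each level. This is a simplified a-trous-style sketch, not the paper's algorithm; the crude shift-based boxcar smoothing and the MAD-based noise estimate are our assumptions.

```python
import numpy as np

def boxcar(img, s):
    # crude separable smoothing by averaging circular shifts (sketch only)
    out = np.zeros(img.shape)
    for dy in range(-s, s + 1):
        for dx in range(-s, s + 1):
            out += np.roll(img, (dy, dx), axis=(0, 1))
    return out / (2 * s + 1) ** 2

def multiresolution_support(img, nscales=3, k=5.0):
    """Boolean significance masks, one per resolution level.

    A pixel is 'significant' at level j when its detail coefficient
    exceeds k times a robust (median-absolute-deviation) noise scale.
    """
    supports = []
    prev = np.asarray(img, dtype=float)
    for j in range(nscales):
        cur = boxcar(prev, 2 ** j)
        w = prev - cur                                   # detail at level j
        sigma_j = np.median(np.abs(w)) / 0.6745 + 1e-12  # robust noise scale
        supports.append(np.abs(w) > k * sigma_j)
        prev = cur
    return supports
```

During iterative restoration, residual energy falling outside these masks can be treated as noise and suppressed.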
In this paper, the super-resolution method that we use for image restoration is the Poisson Maximum A-Posteriori (MAP) super-resolution algorithm of Hunt, computed in an iterative form. This algorithm is similar to the Maximum Likelihood algorithm of Holmes, which is derived from an Expectation/Maximization (EM) computation. Image restoration of point source data is our focus, because most astronomical data can be regarded as multiple point sources against a very dark background. The statistical limits imposed by photon noise on the resolution obtained by our algorithm are investigated. We improve the performance of the super-resolution algorithm by including the additional information of spatial constraints. This is achieved by applying the well-known CLEAN algorithm, widely used in astronomy, to create regions of support. A diffraction-limited optical system is used for the simulated data. The real data are two-dimensional optical image data from the Hubble Space Telescope.
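The multiplicative iteration underlying this family of algorithms can be sketched as the Poisson maximum-likelihood (Richardson-Lucy) update; Hunt's MAP variant adds a prior factor that is omitted in this sketch, and the function names are ours.

```python
import numpy as np

def conv(x, otf):
    return np.fft.ifft2(np.fft.fft2(x) * otf).real

def poisson_ml(data, psf, niter=200):
    """Iterative Poisson ML restoration of Richardson-Lucy type.

    psf is centered in the array; each pass multiplies the estimate by
    the back-projected ratio of data to the current blurred estimate.
    """
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    est = np.full(data.shape, float(data.mean()))
    for _ in range(niter):
        blurred = np.maximum(conv(est, otf), 1e-12)
        est *= conv(data / blurred, np.conj(otf))   # correlation with the PSF
        est = np.maximum(est, 0.0)
    return est
```

For an isolated point source on a dark background, the iteration progressively concentrates flux back into the point, which is why point-source data are favorable for super-resolution.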
A method is presented in which an image degraded by a linear shift-variant imaging system undergoes a warping such that the resulting warped image is approximately described by a warped original image filtered by a linear shift-invariant system. The purpose of this distortion is to make the shift-variant impulse response, which can approximately be viewed as a shift-invariant impulse response that has been warped in the original image domain, vary as little as possible. In particular cases, a transformation can be found which results in no impulse response variation. In most cases, however, the impulse response will still possess some shift-variance. A measure of shift-variance is presented and introduced into an optimization problem which seeks to minimize the shift-variance of a system. This residual variance is ignored (the error must be small for the method to work well), and an 'average' impulse response in the warped domain is assumed. This allows shift-invariant restoration of the warped image, with all of its attendant advantages in speed and reduced complexity. An example of a smooth space-variant one-dimensional impulse response is applied to a variant of this optimization problem. The limitations of this slightly different problem are explained, and the expected properties of the stated problem are discussed.
The identification of the point spread function (PSF) from degraded image data constitutes an important first step in image restoration known as blur identification. Though a number of blur identification algorithms have been developed in recent years, two of the earlier methods, based on the power spectrum and the power cepstrum, remain popular because they are easy to implement and have proved effective in practical situations. Both methods are limited to PSFs which exhibit spectral nulls, such as those due to a defocused lens or linear motion blur. Another limitation of these methods is the degradation of their performance in the presence of observation noise. The central slice of the power bispectrum has been employed as an alternative to the power spectrum that can suppress the effects of additive Gaussian noise. In this paper, we utilize the bicepstrum for the identification of linear motion and defocus blurs. We present simulation results in which the performance of the blur identification methods based on the spectrum, the cepstrum, the bispectrum, and the bicepstrum is compared for different blur sizes and signal-to-noise ratio levels.
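The power-cepstrum approach mentioned above can be sketched for horizontal motion blur: the sinc-like nulls of a length-L blur leave a strong negative peak in the power cepstrum at quefrency L. This is a 1-D illustrative sketch under our own assumptions (row-wise blur, log-spectrum averaging across rows to suppress the image's own cepstral content), not the bicepstrum method of the paper.

```python
import numpy as np

def motion_blur_length(rows, max_lag):
    """Estimate a horizontal motion-blur length from the power cepstrum.

    rows: 2D array whose rows were all blurred by the same 1-D kernel.
    The blur length is the quefrency of the deepest negative cepstral
    peak, skipping the low-quefrency ramp.
    """
    logspec = np.log(np.abs(np.fft.fft(rows, axis=1)) ** 2 + 1e-12)
    cep = np.fft.ifft(logspec.mean(axis=0)).real   # averaged power cepstrum
    lags = np.arange(2, max_lag)
    return lags[np.argmin(cep[2:max_lag])]
```

Observation noise fills in the spectral nulls and weakens this peak, which is exactly the limitation that motivates the bispectral and bicepstral alternatives.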
Images restored by linear shift-variant and nonlinear restoration techniques contain spectral energy beyond the optical system cutoff frequency. We have recently analyzed this aspect of image superresolution in terms of accurate extrapolation of the image spectrum. A closed-form expression for an approximate lower bound on extrapolation performance has been derived for continuous signals. In this paper, we present new empirical evidence in support of the derived bound. We then discuss performance for the discrete imaging case, including a discrete analogy to the analytic continuation theorem. Empirical results are presented which support our hypothesis that substantial bandwidth extrapolation is achievable for discrete images. Finally, we discuss potential applications of these results to the development of new restoration algorithms.
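Bandwidth extrapolation of the kind analyzed above can be sketched with the classical Gerchberg-Papoulis iteration, which alternates between enforcing the measured in-band spectrum and a known spatial support. This is a textbook sketch offered for illustration, not the restoration algorithms the paper develops.

```python
import numpy as np

def gerchberg_papoulis(measured, band, support, niter=300):
    """Band-limited extrapolation by alternating constraints.

    measured: the band-limited image; band: Boolean mask of measured
    frequencies; support: Boolean mask of the object's spatial support.
    Each pass re-imposes the known spectrum in-band and zeroes the image
    outside the support, pushing energy beyond the cutoff.
    """
    known = np.fft.fft2(measured) * band
    est = np.asarray(measured, dtype=float)
    for _ in range(niter):
        est = est * support                          # spatial-domain constraint
        F = np.fft.fft2(est)
        est = np.fft.ifft2(known + F * ~band).real   # keep measured band
    return est
```

In the noise-free, consistent-constraint case the reconstruction error is non-increasing with iteration, so the extrapolated image is at least as close to the object as the band-limited input.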
We investigate the problem of signal restoration and reconstruction in a multi-channel system with the constraint that the entire system acts as a projection operator. This projection requirement is optimal in the noise free case since an input signal which is contained in the reconstruction space is recovered exactly. We find a general optimization problem which gives rise to a large class of projection operators. This formalization allows optimization of various criteria while enforcing the projection constraint. In this paper, we consider the projection operator which minimizes the noise power at the system output. The significance of this work is that it incorporates knowledge of the final reconstruction method which can include splines, wavelets, or display devices. In addition, unlike most classical formulations, the input signal is not required to be band-limited; it can be an arbitrary finite energy function. The approach requires no a priori information about the input signal, but does require knowledge of the impulse responses of the input channels. The projection method is compared to a generalized multi-channel Wiener filter which uses a priori signal information. At best the projection approach achieves the least squares solution which is the orthogonal projection of the input signal onto the space defined by the reconstruction method.
The estimation of the intensity function of a Poisson-driven shot-noise process is addressed using a regularization technique, where the data is modeled as a signal term plus a signal-dependent noise term. The approach used requires that the estimates belong to some finite-dimensional subspace. This paper investigates the effect of the choice of subspace on the estimates produced.
This paper is the result of a question that was raised at the recent workshop on 'The Restoration of HST Images and Spectra II', which took place at the Space Telescope Science Institute in November 1993, for which there was no forthcoming answer at that time. The question was: What is the null space (ghost images) of the Richardson-Lucy (RL) algorithm? Another question that came up, for which there is a straightforward answer, was: What does the MLE algorithm really do? In this paper we attempt to answer both questions. The paper begins with a brief description of the null space of an imaging system, with particular emphasis on the Hubble telescope. The imaging conditions under which there is a possibly damaging null space are described in terms of linear methods of reconstruction. For the uncorrected Hubble telescope, it is shown that for a PSF computed by TINYTIM on a 512 × 512 grid, there is no null space. We introduce the concept of a 'nearly null' space, with an unsharp distinction between the 'measurement' and the 'null' components of an image, and generate a reduced-resolution Hubble point spread function (PSF) that has such a nearly null space. We then study the propagation characteristics of null images in the Maximum Likelihood Estimator (MLE), or Richardson-Lucy algorithm, and the nature of its possible effects, but we find in computer simulations that the algorithm is very robust to those effects: if they exist, the effects are local and tend to disappear with increasing iteration number. We then demonstrate how a PSF that has small components in the frequency domain results in noise magnification, just as one would expect in linear reconstruction. The answer to the second question is given in terms of the residuals of a reconstruction and the concept of feasibility.
We have implemented a least-squares technique for recovering phase information and alignment parameters from simultaneously obtained focused and defocused solar images. Small subfields are used, in order to deal with anisoplanatism. The method is applied to sequences of 100 8-bit solar granulation images. These data enable a number of consistency tests, all of which demonstrate that the technique works. Alignment parameters derived from averaged images in a sequence are highly consistent and wavefronts derived from different subfields and different sequences recorded close in time are virtually identical. The wavefronts derived from averaged images are also virtually identical to the average of wavefronts derived from individual images. These aberrations vary with time in a way which is consistent with a major contribution from the moving elements of the alt-az tower telescope. Independently derived wavefronts from single images show high correlation between neighboring subfields and smooth variations across large fields-of-view, consistent with the impression that the image quality is more or less uniform across the image. Restored images in a sequence show a high degree of consistency and much more fine structure than the corresponding observed images.
Phase-diverse speckle imaging is a novel imaging modality that makes use of both speckle-imaging and phase-diversity concepts. A phase-diverse speckle data set consists of one conventional image and at least one additional image with known phase diversity for each of multiple atmospheric phase realizations. We demonstrate the use of a phase-diverse speckle data set collected at the Swedish Vacuum Solar Telescope on La Palma to overcome the effects of atmospheric turbulence and to restore a fine-resolution image of solar granulation. We present preliminary results of simultaneously reconstructing an object and a sequence of atmospheric phase aberrations from these data using a maximum-likelihood parameter-estimation framework. The consistency of the reconstructions is demonstrated using subsets of the sequence of image pairs. The use of different phase-aberration parameterization schemes and their effect on parameter estimates are discussed. Insight into the desired number of atmospheric realizations is provided.
It is well known that atmospheric turbulence severely degrades the performance of ground-based imaging systems. Techniques to overcome the effects of the atmosphere have been developing at a rapid pace over the last 10 years. These techniques can be grouped into two broad categories: pre-detection and post-detection techniques. A recent newcomer to the post-detection scene is 'deconvolution from wave front sensing' (DWFS). DWFS is a post-detection image reconstruction technique that makes use of one feature of pre-detection techniques: a WFS is used to record the wave front phase distortion in the pupil of the telescope for each short exposure image. The additional information provided by the WFS is used to estimate the system's point spread function (PSF). The PSF is then used in conjunction with the ensemble of short exposure images to obtain an estimate of the object intensity distribution via deconvolution. With the addition of DWFS to the suite of possible post-detection image reconstruction techniques, it is natural to ask how DWFS compares to both traditional linear and speckle image reconstruction techniques. In the results presented here we make a direct comparison based on a frequency-domain signal-to-noise ratio performance metric, applied to each technique's image reconstruction estimator. We find that DWFS outperforms traditional linear techniques such as Wiener filtering. On the other hand, DWFS does not always outperform speckle imaging techniques, and in the cases where it does, the improvement is small.
We present a method to obtain diffraction-limited spectrograms of two-dimensional extended objects from a series of ground-based slit-spectrograms. The method is a combination of differential speckle interferometry and a rapid spectrograph scanning scheme. The slit of a spectrograph is scanned over the solar surface while simultaneous images of the reflective slit plate (slit-jaw images) and spectrograms are recorded with an exposure time that is short with respect to seeing-induced variations. A Knox-Thompson speckle reconstruction scheme is applied to the slit-jaw images. From the individual slit-jaw images and the speckle reconstruction, the instantaneous point spread function can be determined for any location along the slit. The recorded spectrograms can then be restored with the inverted linear operator that describes the formation of the spectrograms. The method has been applied to observations of the quiet solar granulation.
The method of phase diversity has been used in the context of incoherent imaging to estimate jointly an object that is being imaged and phase aberrations induced by atmospheric turbulence. The method requires a parametric model for the phase-aberration function. Typically, the parameters are coefficients of a finite set of basis functions. Care must be taken to select a parameterization that properly balances accuracy in representing the phase-aberration function with stability in the estimates. It is well known that overparameterization can result in unstable estimates; thus a certain amount of model mismatch is often desirable. We derive expressions that quantify the bias and variance in object and aberration estimates as a function of parameter dimension.
Optical wavefront errors can be determined from focused-defocused image pairs. We have developed an optical test facility at Lockheed for the purpose of investigating real-time wavefront control and post-processing algorithms using phase diversity techniques. Experimental results indicate that it is possible to control and correct at least 6 parameters of the wavefront at a bandwidth of about 10 Hz.
A theoretical description of phase diversity is developed in some detail to lay the foundation for the experimental effort. The phase diversity algorithm is formulated in the context of nonlinear programming, where a metric is developed and then minimized. This development shows how the Zernike coefficients can be solved for directly using nonlinear optimization techniques. Computer simulations are used to validate the algorithms and techniques, and their results are shown. Once confidence in the algorithms and techniques is established through the computer simulations, they are applied to actual laboratory data. The laboratory implementation is described in detail and the laboratory results are shown.
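The metric-minimization formulation referred to above can be sketched with the classical Gonsalves phase-diversity objective, in which the unknown object is eliminated analytically, leaving a metric over the aberration alone that a nonlinear optimizer then minimizes over Zernike coefficients. This sketch evaluates only the metric; array conventions, grid sizes, and function names are our assumptions, not the paper's implementation.

```python
import numpy as np

def psf_otf(pupil, phase):
    """OTF for a pupil with the given phase aberration (all arrays kept
    in FFT index order, used consistently throughout)."""
    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fft2(field)) ** 2
    return np.fft.fft2(psf / psf.sum())

def pd_metric(d1, d2, pupil, phase, diversity):
    """Gonsalves phase-diversity objective for a candidate phase.

    d1, d2: focused and diversity-defocused images; the metric is zero
    (noise-free) when `phase` matches the true aberration.
    """
    S1 = psf_otf(pupil, phase)
    S2 = psf_otf(pupil, phase + diversity)
    D1, D2 = np.fft.fft2(d1), np.fft.fft2(d2)
    num = np.abs(D1 * S2 - D2 * S1) ** 2
    den = np.abs(S1) ** 2 + np.abs(S2) ** 2 + 1e-12
    return float(np.sum(num / den))
```

Handing `pd_metric` to a general-purpose minimizer over a Zernike expansion of `phase` is the direct-solution route the formulation describes.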
We present the preliminary results of a laboratory experiment using phase diversity as a wavefront sensor. Computer simulations of this experiment were also performed. The phase diversity algorithm used an ordinary finite-difference method to solve the transport equation relating intensity and phase. This method of phase diversity retrieves the phase directly and may prove useful for low-light-level applications and for extended objects. This raises the possibility of using phase diversity as an on-line wavefront sensor for adaptive optics.
How cells move and navigate within a 3D tissue mass is of central importance in such diverse problems as embryonic development, wound healing and metastasis. This locomotion can now be visualized and quantified by using computational optical-sectioning microscopy. In this approach, a series of 2D images at different depths in a specimen is stacked to construct a 3D image, and then, with knowledge of the microscope's point-spread function, the actual distribution of fluorescent intensity in the specimen is estimated via computation. When coupled with wide-field optics and a cooled CCD camera, this approach permits non-destructive 3D imaging of living specimens over long time periods. With these techniques, we have observed a complex diversity of motile behaviors in a model embryonic system, the cellular slime mold Dictyostelium. To understand the mechanisms which control these various behaviors, we are examining motion in various Dictyostelium mutants with known defects in proteins thought to be essential for signal reception, cell-cell adhesion or locomotion. This application of computational techniques to analyze 3D cell locomotion raises several technical challenges. Image restoration techniques must be fast enough to process numerous 1 Gbyte time-lapse data sets (16 Mbytes per 3D image × 60 time points). Because some cells are weakly labeled and background intensity is often high due to unincorporated dye, the SNR in some of these images is poor. Currently, the images are processed by a regularized linear least-squares restoration method, and occasionally by a maximum-likelihood method. Also required for these studies are accurate automated-tracking procedures to generate both 3D trajectories for individual cells and 3D flows for a group of cells. Tracking is currently done independently for each cell, using a cell's image as a template to search for a similar image at the next time point.
Finally, sophisticated visualization techniques are needed to view the 3D movies of cell locomotion which are currently viewed simply by 2D projection along a given angle at each time point of the movie.
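The template-search step of the tracking procedure can be sketched in two dimensions (the paper tracks in 3D); the array sizes and names here are illustrative, and normalized cross-correlation is one standard choice of similarity score:

```python
import numpy as np

def track_cell(frame_a, frame_b, center, half=4, search=6):
    """Locate, in frame_b, the cell imaged near `center` in frame_a.

    A minimal 2D sketch of template matching: the cell's image in
    frame_a is the template, scored by normalized cross-correlation
    against shifted windows of frame_b within a small search range.
    """
    r, c = center
    tmpl = frame_a[r - half:r + half + 1, c - half:c + half + 1].astype(float)
    tmpl = tmpl - tmpl.mean()
    best, best_pos = -np.inf, center
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            win = frame_b[rr - half:rr + half + 1,
                          cc - half:cc + half + 1].astype(float)
            if win.shape != tmpl.shape:   # window ran off the frame edge
                continue
            win = win - win.mean()
            denom = np.sqrt((tmpl ** 2).sum() * (win ** 2).sum())
            score = (tmpl * win).sum() / denom if denom > 0 else -np.inf
            if score > best:
                best, best_pos = score, (rr, cc)
    return best_pos
```

In 3D the loops simply gain a depth offset; real data would also need the sub-pixel refinement and occlusion handling that this sketch omits.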
The way in which a reflection or transmission microscope image depends on the variations in complex refractive index in the object is considered. Reflection objects, including rough surfaces and stratified media, are discussed, and a generalized form for the scattering potential for reflection imaging in the Born approximation is proposed. Reconstruction of the refractive index variation in a stratified medium is considered. Transmission image formation based on the Born and Rytov approximations is discussed. These properties are important for the interpretation of the resulting images, and also as a basis for digital restoration methods.
Timothy J. Holmes, Santosh Bhattacharyya, Joshua A. Cooper, David K. Hanzel, Vijaykumar Krishnamurthi, Wen-Chieh Lin, Badrinath Roysam, Donald H. Szarowski, James N. Turner
Blind deconvolution algorithms are being developed for reconstructing (deblurring) 2D and 3D optically sectioned light micrographs, including widefield fluorescence, transmitted-light brightfield and confocal fluorescence micrographs. The blind deconvolution concurrently reconstructs the point spread function (PSF) along with the image data. This is important because it obviates the need to measure the PSF; such measurement is an esoteric and sometimes impossible task that hinders wide routine biological and clinical usage. The iterative algorithms are primarily based on a stochastic model of the physics of fluorescence quantum photons and the optimization criterion of maximum likelihood estimation (MLE), as extended from precursory nuclear medicine MLE algorithms. The algorithm design is mostly model based, although it contains some non-model-based components which have important practical benefits.
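As a rough illustration of concurrent image/PSF estimation, here is a generic blind Richardson-Lucy iteration in 1D, alternating multiplicative ML updates for the object and the PSF. This is a textbook variant using circular convolutions, not the authors' algorithm:

```python
import numpy as np

def blind_rl(d, psf0, n_iter=100, eps=1e-12):
    """Blind Richardson-Lucy sketch: alternate RL updates of the object
    f and the PSF h so that conv(f, h) approaches the data d.
    Circular (FFT) convolution is used throughout for simplicity."""
    f, h = d.copy(), psf0.copy()
    n = len(d)
    for _ in range(n_iter):
        # Object update: f <- f * corr(d / conv(f, h), h)
        H = np.fft.rfft(h)
        ratio = d / (np.fft.irfft(np.fft.rfft(f) * H, n) + eps)
        f = f * np.fft.irfft(np.fft.rfft(ratio) * np.conj(H), n)
        # PSF update: h <- h * corr(d / conv(f, h), f), then renormalize
        F = np.fft.rfft(f)
        ratio = d / (np.fft.irfft(F * np.fft.rfft(h), n) + eps)
        h = h * np.fft.irfft(np.fft.rfft(ratio) * np.conj(F), n)
        h = np.clip(h, 0, None)
        h = h / h.sum()                 # keep h a nonnegative, unit-sum PSF
    return f, h
```

The multiplicative form preserves nonnegativity of both estimates, which is the practical appeal of the MLE/Poisson framework the abstract refers to; a production algorithm would add the regularization and constraint machinery the paper describes.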
Computational optical sectioning microscopy with a non-confocal microscope is fundamentally limited because the optical transfer function (OTF), the Fourier transform of the point spread function (PSF), is exactly zero over a conic region of the spatial frequency domain. Because of this missing cone of optical information, images are potentially artifactual. To overcome this limitation, superresolution, in the sense of band extrapolation, is necessary. We present an analysis of how the expectation-maximization algorithm for maximum-likelihood image estimation achieves this superresolution. We also present experimental results showing this superresolving capability. In practice this capability is often compromised because the PSF has to be truncated to a limited spatial support for algorithm implementation. Therefore we also present an analysis of the sensitivity of superresolution to truncation of the PSF to a finite extent.
Three-dimensional (3-D) microscopy with a non-confocal microscope is fundamentally limited because the optical transfer function (OTF) is zero over a cone-shaped region of the spatial frequency domain, and thus the image has a missing cone of optical information that potentially results in artifacts. The strictly confocal scanning microscope (i.e. one with only a single, infinitesimally small pinhole aperture) does not suffer from this missing cone, and thus its images are not artifactual. However, the pinhole aperture has very low light-collection efficiency, and images have a low signal-to-noise ratio (SNR). Because of this low light-collection efficiency, the scanning microscope is commonly operated in a partially confocal regime, either by using a larger confocal aperture, multiple confocal apertures working in parallel, or a combination of both. With a larger aperture, more out-of-focus fluorescence is collected; with multiple apertures, more out-of-focus excitation is produced. In either case the optical-axis resolution is degraded relative to that of the ideal confocal microscope. Fortunately, neither approach suffers from a missing cone: frequency components are attenuated relative to the strictly confocal case, but they are not completely missing. We present results showing that, by applying image restoration methods to partially confocal images, it is possible to obtain artifact-free images with the same or better resolution than with a strictly confocal microscope.
The Air Force Phillips Laboratory is upgrading the surveillance capabilities of its AMOS facility with a coherent laser radar system. A notable feature of this laser radar system is its short (≈1 ns) pulse length, which allows high-resolution range data to be obtained. The usefulness of this range data for reflective tomographic reconstruction of images of space objects is discussed in this paper. A brief review of tomography is given. Then the capability of the laser radar system to provide adequate range-resolved data is analyzed, both in terms of system parameters and signal-to-noise issues. Sample image reconstructions are presented and discussed.
The propagation of noise in wavefront reconstructors has been investigated for the case of spatially uncorrelated noise. In some situations, the detection noise may be spatially correlated. Here we investigate the noise propagation for the case of spatially correlated noise. We compute the error in wavefront reconstruction as a function of correlation length and noise strength. Results are presented describing noise propagation in a non-iterative complex wavefront reconstructor.
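For any linear reconstructor, the propagated noise variance follows directly from the measurement covariance. The sketch below assumes an exponential correlation model, C_ij = σ² exp(-|x_i - x_j|/ℓ), which is one plausible choice of correlated-noise statistics and not necessarily the paper's; the reconstructor matrix R is likewise generic:

```python
import numpy as np

def noise_propagation(R, sigma2, ell, x):
    """Mean reconstructed-phase noise variance for a linear reconstructor.

    R      : (n_out, n_meas) reconstructor matrix (generic placeholder)
    sigma2 : noise strength (variance of each measurement)
    ell    : correlation length of the detection noise
    x      : (n_meas,) measurement positions

    The propagated error covariance is R C R^T, where C is the
    exponentially correlated noise covariance; we return its mean
    diagonal value.
    """
    C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / ell)
    return np.trace(R @ C @ R.T) / R.shape[0]
```

Sweeping `ell` in such a calculation gives exactly the kind of error-versus-correlation-length curve the abstract describes; in the uncorrelated limit (ℓ → 0) it reduces to the familiar σ²·tr(RRᵀ)/n noise-propagation coefficient.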
The Air Force Phillips Laboratory is developing a coherent laser radar system to upgrade its space surveillance capabilities. Because of the short pulse length of this laser system, range-resolved information can be obtained. This range information can be used to reconstruct images by reflective tomography. This paper presents results of simulations using four different transmission tomography algorithms to reconstruct images from reflective tomography data. The transmission tomography problem formulation is stated, a description of reflective tomography is given, and results of the simulations are presented.
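One classic transmission-tomography algorithm of the kind such simulations compare is filtered backprojection. The minimal sketch below (Ram-Lak ramp filter, nearest-bin backprojection, illustrative coordinate conventions) reconstructs an image from a sinogram:

```python
import numpy as np

def fbp(sinogram, thetas, n):
    """Filtered backprojection sketch.

    sinogram : (n_angles, n_bins) projections; detector bin k maps to
               signed coordinate s = k - n_bins // 2
    thetas   : projection angles in radians
    n        : side length of the square output grid
    """
    n_bins = sinogram.shape[1]
    # Ram-Lak (|frequency|) filter applied to each projection via the FFT
    ramp = np.abs(np.fft.fftfreq(n_bins))
    filt = np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1).real
    # Backproject: each pixel accumulates the filtered value at its
    # projected detector coordinate, for every angle
    xs = np.arange(n) - n // 2
    X, Y = np.meshgrid(xs, xs, indexing='xy')
    img = np.zeros((n, n))
    for th, p in zip(thetas, filt):
        s = np.rint(X * np.cos(th) + Y * np.sin(th)).astype(int) + n_bins // 2
        valid = (s >= 0) & (s < n_bins)
        img[valid] += p[s[valid]]
    return img
```

Reflective tomography feeds range-resolved return profiles into the same machinery in place of attenuation line integrals, which is precisely why the transmission algorithms are candidates here; the abstract's point is how well that substitution works in practice.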
The by now well-known matching pursuit method of S. Mallat, as well as the recently proposed orthogonal matching pursuit, works purely sequentially, choosing only a single atom at a time. Pursuing ideas related to modifications of the POCS method, we suggest a new type of orthogonalization procedure which allows one to operate in parallel at different 'levels'. More precisely, we assume a dictionary consisting of similar 'pages', i.e. collections of functions, each generated from a single function by translations along a subgroup which is the same for all pages. Based on a certain hierarchical structure (a preference ordering of the pages), we apply an appropriate Gram-Schmidt-type orthogonalization procedure which allows the matching pursuit problem to be treated in parallel at the different levels. After carrying out appropriate approximations at the different, now orthogonal, levels, one returns in a straightforward way to a representation based on the given family of atoms.
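For reference, the purely sequential baseline the paper starts from can be sketched as plain matching pursuit over a dictionary of unit-norm atoms:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Plain (sequential) matching pursuit.

    dictionary : columns are unit-norm atoms. At each step the single
    atom best correlated with the residual is chosen and its
    contribution subtracted -- the one-atom-at-a-time behavior the
    paper's parallel, page-wise procedure is designed to avoid.
    """
    r = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        c = dictionary.T @ r              # correlations with all atoms
        k = int(np.argmax(np.abs(c)))     # best single atom
        coeffs[k] += c[k]
        r = r - c[k] * dictionary[:, k]   # update residual
    return coeffs, r
```

In the paper's scheme, each orthogonalized 'level' can run an approximation of this kind independently, with the Gram-Schmidt step guaranteeing that the per-level contributions do not interfere.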
For superresolving image restoration, it is suggested to use an unsymmetrical matrix representation of the imaging system's pulse-response function in a Karhunen-Loeve basis. The basis functions of this representation are defined by the Karhunen-Loeve decompositions of the object and image ensembles. Finite-dimensional projection of the initial and distorted images onto their Karhunen-Loeve bases decreases the order of the inverse-problem matrix, minimizes noise, and ensures stable superresolved restoration of large-dimension images under high noise levels, where application of traditional superresolving procedures is impossible.
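The stabilizing effect of a finite-dimensional projection can be illustrated with a truncated-SVD restoration, which here stands in for the Karhunen-Loeve projection (the paper's bases come from object and image ensemble statistics, whereas the SVD below is derived from the system matrix itself):

```python
import numpy as np

def truncated_svd_restore(H, g, k):
    """Restore f from g = H f (+ noise) using only the k strongest
    singular directions of H. Truncating the weak directions reduces the
    order of the inverse problem and suppresses noise amplification, at
    the cost of discarding the components that live in the cut space."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    inv = np.where(np.arange(len(s)) < k, 1.0 / s, 0.0)  # truncated 1/s
    return Vt.T @ (inv * (U.T @ g))
```

With k equal to the full rank this is an exact inverse; shrinking k trades resolution for stability, which is the same trade the ensemble-based projection makes with statistically optimal axes.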
The effect of wavelength multiplexing on the quality of an image transferred through a fiber bundle system is discussed using the information capacity of the imaging system. Image transfer through a conventional fiber bundle can be thought of as a discrete sampling of the image illuminance at the entrance end of the bundle by each fiber element. This discrete sampling limits the bandwidth of the signals that can be transmitted, and the ends of the component fibers form an obtrusive pattern in the received image. In the dispersion fiber bundle system, because each fiber integrates the flux falling upon its entrance aperture, the entire picture format is reproduced during the dispersion scan with the frequency response characteristic of a uniform disk whose diameter equals that of the component fibers. The individual fiber ends are thus blurred out, and the obtrusive pattern they form is destroyed. It is shown, in theory, that the dispersion method makes it possible both to reduce the contrast of the light-transmission nonuniformity and to improve the resolution of the fiber bundle imaging system.
In modal compensation of atmospheric turbulence using Zernike polynomials, aliasing has been found to be serious for large sub-aperture configurations. To reduce the influence of aliasing on the residual error after modal correction, we have trained a neural network (NN) using simulated array images from a modified Hartmann-Shack wavefront sensor. The array images are derived from simulated atmospheric wavefronts following Kolmogorov turbulence. We find that the Zernike coefficients predicted by the NN are more accurate than those obtained by conventional methods. Using the first 28 Zernike modes, the residual error after modal-NN correction is nearly half that obtained with a least-squares solution. In addition, the computation time of the NN makes it well suited for real-time application.
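The conventional baseline against which the NN is compared can be sketched as a least-squares modal fit, where G is an assumed, precomputed geometry matrix mapping Zernike coefficients to stacked Hartmann-Shack slope measurements (building G from actual sub-aperture geometry is outside this sketch):

```python
import numpy as np

def modal_lstsq(slopes, G):
    """Conventional least-squares modal reconstruction: solve
    G a ~= slopes for the Zernike coefficient vector a. Aliasing enters
    because G only models the fitted modes, while the wavefront also
    contains higher-order modes that contaminate the slopes."""
    a, *_ = np.linalg.lstsq(G, slopes, rcond=None)
    return a
```

The NN replaces this pseudo-inverse with a learned nonlinear mapping from sensor images to coefficients, which is how it can partially undo the aliasing that the linear fit cannot distinguish from the modeled modes.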
Atmospheric wavefront residual errors for Zernike compensation are calculated using both the spatial-domain approach and the frequency-domain approach; the two approaches are found to give identical results. The results are used to examine numerical solutions of atmospheric Karhunen-Loeve functions through the spatial-domain approach (solving an integral equation) and the frequency-domain approach (diagonalization of the Noll matrix). The obtained Karhunen-Loeve eigenvalues and eigenfunctions are used to simulate atmospheric wavefronts, and the accuracy of wavefronts simulated by different methods is discussed. We find that, from a statistical point of view, the method described here is the best one for atmospheric wavefront simulation. The structure functions calculated from simulated wavefronts are used to obtain the Strehl ratio for partial correction, so that a validity range can be inferred for the Marechal approximation.
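The final step can be illustrated with Noll's tabulated residual variances Δ_J (Noll, 1976) and the Marechal approximation S ≈ exp(-σ²), which is exactly the approximation whose validity range the paper examines; the approximation is trustworthy only for small residual phase variance:

```python
import numpy as np

# Noll (1976) residual phase variances Delta_J after correcting the
# first J Zernike modes: sigma^2 = Delta_J * (D / r0)^(5/3), in rad^2
NOLL_DELTA = [1.0299, 0.582, 0.134, 0.111, 0.0880, 0.0648,
              0.0587, 0.0525, 0.0463, 0.0401, 0.0377]

def marechal_strehl(d_over_r0, n_modes):
    """Strehl ratio after correcting the first n_modes Zernike modes,
    via the Marechal approximation S ~= exp(-sigma^2)."""
    var = NOLL_DELTA[n_modes - 1] * d_over_r0 ** (5.0 / 3.0)
    return np.exp(-var)
```

Comparing such Marechal predictions against Strehl ratios computed from the structure functions of simulated wavefronts is how the validity range mentioned in the abstract can be mapped out.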