A binocular adaptive optics visual simulator with automatic convergence control and real-time aberration measurement and correction for both eyes simultaneously is presented.
Spatial Light Modulators (SLMs) are widely used in several fields of optics, such as adaptive optics and holographic displays. SLMs based on Liquid Crystal (LC) devices allow a dynamic and straightforward representation of two-dimensional phase maps. However, these devices have two main drawbacks, their elevated cost and large dimensions, which prevent their use in applications where compactness and low cost are a must. Here we present a more affordable and compact approach based on vertically aligned LC devices, whose phase-modulation characteristics are very similar to those of the widely used parallel-aligned LC devices. We study the maximum field of view for visual correction in a see-through system, where the displayed phase map is used to correct visual disorders, from refractive errors to high-order aberrations. To conclude, we discuss the potential of this SLM technology and approach as a key component in smart glasses, paving the way for the development of economical, compact, and reliable smart glasses for vision correction, among other applications.
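As a rough illustration of how such a corrective phase map could be prepared for display on an LC-SLM, the following Python sketch builds a wrapped paraxial defocus phase for a given refractive error. The wavelength, pixel pitch, pupil sampling, and 8-bit quantization are assumptions chosen for illustration, not parameters of the device described above.

import numpy as np

# Minimal sketch (not the authors' code): a wrapped phase map that a
# vertically aligned LC-SLM could display to correct a simple refractive
# error. All numerical values below are illustrative assumptions.

wavelength = 550e-9          # design wavelength [m] (assumed)
pixel_pitch = 8e-6           # SLM pixel pitch [m] (assumed)
n_pix = 512                  # SLM pixels covering the pupil (assumed)
power_diopters = -2.0        # refractive error to correct [D] (example)

# Pupil-plane coordinates in meters, centred on the used SLM area.
coords = (np.arange(n_pix) - n_pix / 2) * pixel_pitch
x, y = np.meshgrid(coords, coords)
r2 = x**2 + y**2

# Paraxial thin-lens phase for power P: phi = -pi * P * r^2 / lambda  [rad]
phi = -np.pi * power_diopters * r2 / wavelength

# The LC device modulates phase over roughly 2*pi, so wrap the map
# (Fresnel-lens style) before sending it to the display.
phi_wrapped = np.mod(phi, 2 * np.pi)

# Quantise to the 8-bit grey levels typically accepted by an SLM driver.
gray_levels = np.round(phi_wrapped / (2 * np.pi) * 255).astype(np.uint8)
print(gray_levels.shape, gray_levels.min(), gray_levels.max())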
We have developed a hybrid adaptive optics visual simulator combining two different phase-manipulation technologies: an optically addressed liquid crystal phase modulator, with a relatively slow temporal response but capable of producing abrupt or discontinuous phase profiles with high fidelity; and a membrane deformable mirror, restricted to smooth profiles but with a temporal response that allows closed-loop compensation of the eye's aberration fluctuations. As a proof of concept, objective results as a function of defocus are presented for a phase element structured as discontinuous radial sectors, generated with the liquid crystal modulator while the deformable mirror was used to correct the system aberrations and, further, to introduce the aberrations of two real subjects. The hybrid adaptive optics visual simulator is especially intended as a tool for developing new ophthalmic optics elements, since it opens the possibility of exploring designs with irregularities and/or discontinuities.
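To make the idea of a phase element structured as discontinuous radial sectors concrete, the sketch below builds a pupil phase map divided into alternating angular sectors carrying two different defocus levels. The sector count, defocus values, and wavelength are illustrative assumptions, not the element actually tested in the work.

import numpy as np

# Illustrative sketch only: a pupil phase map made of discontinuous angular
# sectors, alternating between two defocus levels. Sector count, defocus
# values and wavelength are assumptions.

n_pix = 512
n_sectors = 8                      # number of angular sectors (assumed)
defocus_rms = (0.0, 0.5e-6)        # RMS defocus of alternating sectors [m] (assumed)
wavelength = 550e-9

# Unit-pupil coordinates.
coords = np.linspace(-1.0, 1.0, n_pix)
xx, yy = np.meshgrid(coords, coords)
rho = np.hypot(xx, yy)
theta = np.arctan2(yy, xx)
pupil = rho <= 1.0

# Assign each pixel to an angular sector and alternate the defocus coefficient.
sector = np.floor((theta + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
coeff = np.where(sector % 2 == 0, defocus_rms[0], defocus_rms[1])

# Zernike defocus Z(2,0) = sqrt(3) * (2*rho^2 - 1); convert metres of RMS
# wavefront to phase in radians at the chosen wavelength.
z_defocus = np.sqrt(3) * (2 * rho**2 - 1)
phase = 2 * np.pi * coeff * z_defocus / wavelength
phase[~pupil] = 0.0

# Wrap to the 0..2*pi range handled by the liquid crystal modulator.
phase_wrapped = np.mod(phase, 2 * np.pi)
print(phase_wrapped.shape)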
A novel adaptive optics system for the study of vision is presented. The apparatus is capable of binocular operation: the binocular adaptive optics visual simulator permits measuring and manipulating the ocular aberrations of the two eyes simultaneously. Aberrations can be corrected, or modified, while the subject performs visual testing under binocular vision. One of the most remarkable features of the apparatus is the use of a single correcting device and a single wavefront sensor (Hartmann-Shack). Both the operation and the total cost of the instrument benefit greatly from this attribute. The correcting device is a liquid-crystal-on-silicon (LCOS) spatial light modulator. The basic operation of the visual simulator consists in the simultaneous projection of the two eyes' pupils onto both the corrector and the sensor. Examples of the potential of the apparatus for studying the impact of the aberrations under binocular vision are presented. Measurements of contrast sensitivity with modified combinations of spherical aberration through focus are shown. Special attention was paid to the simulation of monovision, where one eye is corrected for far vision while the other is focused at a near distance. The results suggest complex binocular interactions. The apparatus can be dedicated to a better understanding of the mechanisms of vision, which might have an important impact on developing new protocols and treatments for presbyopia. The technique and the instrument might contribute to the search for optimized ophthalmic corrections.
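The single-corrector arrangement can be pictured with the following sketch, in which one SLM frame carries an independent wrapped defocus map for each eye, here in a monovision-like setting with an assumed near addition on one side. The SLM resolution, pixel pitch, pupil radius, and pupil positions are illustrative assumptions, not the parameters of the instrument described above.

import numpy as np

# Rough sketch of the single-corrector idea: the two eyes' pupils are imaged
# side by side onto one LCOS SLM, so a single frame carries an independent
# phase map per eye. All numerical values are assumed, for illustration only.

wavelength = 550e-9
pixel_pitch = 8e-6
slm_h, slm_w = 600, 800            # SLM resolution in pixels (assumed)
pupil_pix = 180                    # pupil radius on the SLM, in pixels (assumed)
near_add_D = 1.5                   # defocus added to the near-focused eye [D] (assumed)

def defocus_phase(power_D, radius_pix):
    """Wrapped paraxial defocus phase over a circular pupil."""
    coords = (np.arange(2 * radius_pix) - radius_pix) * pixel_pitch
    x, y = np.meshgrid(coords, coords)
    phi = -np.pi * power_D * (x**2 + y**2) / wavelength
    phi[np.hypot(x, y) > radius_pix * pixel_pitch] = 0.0
    return np.mod(phi, 2 * np.pi)

left = defocus_phase(0.0, pupil_pix)          # eye corrected for far vision
right = defocus_phase(near_add_D, pupil_pix)  # eye focused at near distance

# Place each pupil on its own half of the SLM frame (positions illustrative).
frame = np.zeros((slm_h, slm_w))
frame[100:100 + 2 * pupil_pix, 40:40 + 2 * pupil_pix] = left
frame[100:100 + 2 * pupil_pix, slm_w - 40 - 2 * pupil_pix:slm_w - 40] = right
print(frame.shape)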
Three-dimensional ultrahigh-resolution optical coherence tomography (UHR OCT) and adaptive optics (AO) are combined using a liquid crystal programmable phase modulator (PPM) as the correcting device for the first time. AO is required to correct ocular aberrations at moderate and large pupils in order to achieve high-resolution retinal images. The capabilities of the PPM are studied using polychromatic light. Volumetric UHR OCT images of the living retina with AO, obtained at up to 25,000 A-scans/s and high resolution (~5×5×3 μm; transverse (x) × transverse (y) × axial), are recorded, enabling visualization of interesting intraretinal morphological structures. Cellular retinal features, which might correspond to groups of terminal bars of photoreceptors at the level of the external limiting membrane, are resolved. Benefits and limitations of the presented technique are finally discussed.
The spatio-temporal Fourier transform is usually applied to determine the velocity of an object from a series of standard light-intensity frames. In this paper the technique is extended to also determine the object's acceleration. Although the technique is useful under standard illumination conditions, we have applied it to experimental low-light-level images, which require shorter processing times.
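A minimal one-dimensional sketch of the underlying velocity measurement (not the acceleration extension) is given below: for an object translating at constant velocity, the energy of the spatio-temporal Fourier transform concentrates on the line f_t = -v * f_x, so the velocity can be read off as a slope. The object, frame count, and velocity are simulated assumptions.

import numpy as np

# 1-D sketch of velocity estimation from the spatio-temporal Fourier
# transform. All parameters below are illustrative.

rng = np.random.default_rng(0)
N, T, v = 256, 64, 4                      # pixels, frames, velocity [pixels/frame]

x = np.arange(N)
obj = np.exp(-0.5 * ((x - N / 4) / 5.0) ** 2)    # simple 1-D object

# Stack of frames: the object shifted by v*m pixels in frame m (circular shift).
frames = np.stack([np.roll(obj, v * m) for m in range(T)])   # shape (T, N)

# Spatio-temporal Fourier transform (axis 0 = time, axis 1 = space).
F = np.fft.fft2(frames)
f_t = np.fft.fftfreq(T)                   # temporal frequencies [cycles/frame]
f_x = np.fft.fftfreq(N)                   # spatial frequencies  [cycles/pixel]

# For each low spatial frequency, locate the temporal frequency of maximum
# energy; the slope of f_t versus f_x estimates -v.
ks = np.arange(1, 9)
ft_peak = np.array([f_t[np.argmax(np.abs(F[:, k]))] for k in ks])
slope = np.polyfit(f_x[ks], ft_peak, 1)[0]
print("estimated velocity:", -slope, "pixels/frame")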
Two methods to obtain the autocorrelation central value from clipped photon-counting data are presented. They are based on a statistical analysis of the total number of counts in one frame. One of the methods requires a smaller number of frames to provide good statistical accuracy, while the other has a wider intensity range of applicability. These are the first techniques that overcome the problem of the autocorrelation central hole due to clipping in photon-counting detection.
We propose a technique to obtain, directly from a series of photon-limited frames, the spectrum of an object moving with constant velocity. Instead of averaging statistical functions such as the auto- or triple-correlations or their Fourier transforms, our method averages the series of frame spectra once the phase factor due to the movement has been removed. Two different procedures to obtain this phase factor are studied: the temporal derivative of the logarithm of the spectrum, and the temporal Fourier transform of the series of spatial spectra. The latter method involves a larger number of calculations, but it produces much better results, especially when only a small number of frames is available. Finally, the recovery technique is checked in a simulated experiment in which a one-dimensional object is reconstructed from a short series of photon-limited frames.
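The second procedure can be sketched as follows under simplifying assumptions (a one-dimensional object, integer constant velocity, and simulated photon-limited frames): for each spatial frequency, the temporal Fourier transform of the series of spatial spectra yields the rate of the movement phase factor, which is then removed before averaging. The object, velocity, frame count, and photon budget are illustrative, not the values used in the paper.

import numpy as np

# Sketch of the temporal-Fourier-transform procedure under simplifying
# assumptions; all numerical parameters are illustrative.

rng = np.random.default_rng(1)
N, T, v, photons_per_frame = 256, 128, 2, 200

x = np.arange(N)
obj = np.exp(-0.5 * ((x - 60) / 6.0) ** 2) + 0.7 * np.exp(-0.5 * ((x - 120) / 4.0) ** 2)
prob = obj / obj.sum()

# Photon-limited frames: a few hundred photon counts drawn from the shifted object.
frames = np.zeros((T, N))
for m in range(T):
    p = np.roll(prob, v * m)
    hits = rng.choice(N, size=photons_per_frame, p=p)
    np.add.at(frames[m], hits, 1.0)

spectra = np.fft.fft(frames, axis=1)          # spatial spectrum of each frame

# The movement contributes a phase that is linear in time at each spatial
# frequency; its rate shows up as a peak in the temporal Fourier transform.
G = np.fft.fft(spectra, axis=0)
f_t = np.fft.fftfreq(T)
peak_freq = f_t[np.argmax(np.abs(G), axis=0)]  # one temporal frequency per k

# Remove the estimated phase factor and average the compensated spectra.
t = np.arange(T)[:, None]
compensated = spectra * np.exp(-2j * np.pi * peak_freq[None, :] * t)
recovered = compensated.mean(axis=0)

# Recovered magnitudes should be close to the true object spectrum at the low
# frequencies where the object has power.
true_spec = np.fft.fft(prob * photons_per_frame)
for k in (1, 2, 3, 4):
    print(k, round(abs(recovered[k]), 2), round(abs(true_spec[k]), 2))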
A previous paper showed that the spatial photocount average performed over a photon-limited image by a photon-counting mask, followed by a histogram manipulation, produces images that can be easily recognized by an observer. The results obtained were shown to be fairly good for images in which more than 0.8% of the pixels are illuminated. The aim of this paper is to determine the minimum number of illuminated pixels required for a reasonably good scene reconstruction.
We propose a series of procedures to construct an image as similar as possible to one detected under good illumination conditions (the standard image), starting from a low-light-level (L3) image. In L3 conditions, only a small number of photopulses are detected over the whole image area. An image taken under these conditions appears as a few isolated light points over a dark background, which makes it nearly impossible to recognize an object represented in it. We have developed a method based on the L3 image statistics to estimate the intensity received by each pixel. This method consists of a spatial average performed by a photon-counting mask and can be used to construct a standard image from only one L3 image. As a second step, we have studied some histogram operations to eliminate the strong statistical dependence that remains in the post-mask image. The best results correspond to histogram specification, but performing it requires knowledge of the standard-image histogram. The last step of our work is the development of a fitting method to obtain this standard-image histogram. The fitting is based on the statistical behavior of the L3 image and can be done using only a post-mask histogram as data.
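The first two steps can be sketched as follows with illustrative parameters: a spatial average of the binary L3 image with a square photon-counting mask, followed by histogram specification by CDF matching. In this sketch the reference histogram is taken directly from a synthetic standard image, whereas the full method fits that histogram from the L3 statistics.

import numpy as np

# Sketch of (1) the photon-counting-mask spatial average and (2) histogram
# specification. The test scene, mask size and photon budget are assumptions.

rng = np.random.default_rng(2)

# A synthetic "standard" image and an L3 realisation of it (~0.5% pixels lit).
H, W = 128, 128
yy, xx = np.mgrid[0:H, 0:W]
standard = np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / (2 * 20.0 ** 2))
prob = standard / standard.sum()
n_photons = int(0.005 * H * W)
hits = rng.choice(H * W, size=n_photons, p=prob.ravel())
l3 = np.zeros(H * W)
np.add.at(l3, hits, 1.0)
l3 = l3.reshape(H, W)

# Step 1: photon-counting mask = local average over a 9x9 window (circular
# convolution via FFT keeps the example numpy-only).
mask = np.zeros((H, W))
mask[:9, :9] = 1.0 / 81.0
post_mask = np.real(np.fft.ifft2(np.fft.fft2(l3) * np.fft.fft2(mask)))

# Step 2: histogram specification by CDF (quantile) matching.
def match_histogram(source, reference):
    s_vals, s_idx = np.unique(source.ravel(), return_inverse=True)
    s_cdf = np.cumsum(np.bincount(s_idx)) / source.size
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    r_cdf = np.cumsum(r_counts) / reference.size
    return np.interp(s_cdf, r_cdf, r_vals)[s_idx].reshape(source.shape)

restored = match_histogram(post_mask, standard)
print(restored.shape, float(restored.min()), float(restored.max()))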
Detection at low light levels results in images with only a small number of photopulses: only the pixels at which a photopulse arrives have an intensity value different from zero. This work presents a simple procedure for simulating low-light-level images, taking a standard, well-illuminated image as a reference. The images so obtained are composed of a few illuminated pixels on a dark background. The number of illuminated pixels is less than 1% of the total number of pixels, and hence it is difficult to recognize the original object. A procedure for enhancing and recovering the original image is described and applied to the previously simulated low-light-level images. The result is a visual experiment, easy to perform (using a personal computer and a frame grabber), which illustrates the statistical nature of light.
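A minimal sketch of the simulation step, under the assumption that the normalized grey level of the standard image acts as the photopulse-detection probability at each pixel, is given below. The test scene and photon budget are illustrative.

import numpy as np

# Simulate a low-light-level image from a "standard" image: so few photopulses
# are drawn that well under 1% of the pixels end up illuminated. The scene and
# the photon count are assumptions for illustration.

rng = np.random.default_rng(3)

H, W = 256, 256
yy, xx = np.mgrid[0:H, 0:W]
standard = np.cos(xx / 12.0) ** 2 * np.exp(-((yy - 128) / 80.0) ** 2)  # test scene

prob = standard / standard.sum()                 # photopulse-arrival probability map
n_photons = int(0.005 * H * W)                   # aim at roughly 0.5% illuminated pixels

hits = rng.choice(H * W, size=n_photons, p=prob.ravel())
low_light = np.zeros(H * W)
low_light[hits] = 1.0                            # photopulse detected (clipped to 1)
low_light = low_light.reshape(H, W)

frac = low_light.mean()
print(f"illuminated pixels: {100 * frac:.2f}% of the image")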