On-product overlay (OPO), with its continually shrinking budget, remains a constraint on increasing device yield. OPO performance consists of both scanner and process-related contributors, and both groups must be addressed and optimized to minimize overlay and keep up with Moore's law. Examples of process-related overlay contributors are wafer distortions due to patterned stressed thin films and/or etch. Masks can never be made identical, since they represent different layers of the device. It has been shown that shape measurements of the wafer can help correct for most process-induced wafer distortions up to the 3rd order. However, another contributor to overlay challenges is photomask flatness: wafer overlay errors due to non-flatness and thickness variations of a mask need to be minimized. Overlay metrology capability lags the need for improved overlay control, especially for multi-patterning applications. In this paper we introduce Wave Front Phase Imaging (WFPI), a new metrology method that generates a very high-resolution shape map of an entire optical photomask optimized for DUV lithography by measuring the wave front phase change of the light reflected from both the front and backside of a quartz photomask. The technique generates the shape map from the local slope and collects 810 thousand (810K) data points on an 86.4mm × 86.4mm area with a spatial resolution of 96μm.
In this study we have developed a compact and versatile phase camera functioning as a wavefront sensor for macroscopy or microscopy applications. The device records two intensity images at different focal points and, thanks to an integrated electrically tunable lens (ETL), operates in real time. Working with intensity images allows resolutions close to the native CCD/CMOS sensor resolution. Here we show the application of the camera in two very different scenarios: a macroscopic application, where the camera was coupled with a simple lens relay to study the behavior of a deformable mirror (DM) and to characterize defocus and astigmatism in optical lenses; and a microscopic one, where the camera was attached directly to a microscope with a simple C-mount to follow human blood moving in real time.
In this study we have designed, assembled, and characterized a wavefront sensor that works with defocused intensity images and the wavefront phase imaging (WFPI) algorithm. This approach can potentially utilize the entire sensor surface, enabling high-resolution operation. The sensor, equipped with an electrically tunable lens (ETL), performs focus movements at more than 60 Hz, enough for real-time applications. We have developed numerical tools, packaged as a practical software environment with a graphical user interface (GUI), that make the camera a versatile instrument easily adaptable to different experimental setups without drastic changes to the optical configuration. These tools allow the wavefront to be analysed in real time to extract the desired metrics and results.
Wafer overlay errors due to non-flatness and thickness variations of a mask need to be minimized to achieve very accurate on-product overlay (OPO). Due to the impact of overlay errors inherent in all reflective lithography systems, EUV reticles will need to adhere to flatness specifications below 10nm, a metric that is not achievable with the current tooling infrastructure, which delivers peak-to-valley (PV) flatness of around 60nm. In this paper, we present a new method that generates a very high-resolution shape measurement of an entire optical photomask used in DUV lithography, measured from both the front side and the backside, based on detecting the wave front phase of the light reflected from a quartz photomask. We introduce Wave Front Phase Imaging (WFPI), a new method for measuring flatness that generates a shape map based on local slope. It collects 810 thousand (810K) data points on an 86.4mm × 86.4mm area with a spatial resolution of 96μm.
Wave Front Phase Imaging (WFPI), a new wafer geometry technique, is presented that acquires 16.3 million data points in 12 seconds on a full 300mm wafer, providing a lateral resolution of 65μm while holding the wafer vertically. The flatness of the silicon wafers used to manufacture integrated circuits (IC) is controlled to tight tolerances to help ensure that the full wafer is sufficiently flat for lithographic processing. Advanced lithographic patterning processes require a detailed map of the free, non-gravitational wafer shape to avoid overlay errors caused by depth-of-focus issues. For a wafer shape system to perform in a high-volume manufacturing environment, repeatability is a critical measure that needs to be tested. We present WFPI as a new technique with high resolution and high data count acquired at very high speed, using a system where the wafer is free from the effects of gravity, and with very high repeatability as measured according to SEMI standard M49.
The optical characteristics of holographic optical elements digitally recorded by the wavefront recording method are measured with a Shack-Hartmann wavefront sensor, and their performance as optical elements is compared with that of a spherical mirror and an analog holographic optical element using wavefronts reconstructed with Zernike polynomials. The comparison shows that the digitally recorded holographic optical elements can work as a spherical mirror/lens, but they introduce considerably more wavefront aberration than the mirror and the analog holographic optical element.
On-product overlay (OPO), with its continually shrinking overlay budget, remains a constraint in the continued effort to increase device yield. Overlay metrology capability currently lags the need for improved overlay control, especially for multi-patterning applications. The free-form shape of the silicon wafer is critical for process monitoring and is usually controlled through bow and warp measurements during the process flow. As the OPO budget shrinks, non-lithography process-induced stress causing in-plane distortions (IPD) becomes a more dominant contributor. To estimate the process-induced IPD parameters after chucking the wafer inside the lithographic scanner, a high-resolution measurement of the free-form shape of the unclamped wafer is needed. The free-form wafer shape can then be used in a feed-forward prediction algorithm to predict both intra-field and intra-die distortions, as has been published by ASML, minimizing the need for alignment marks on the die and wafer and allowing overlay to be performed at any lithography layer. Up until now, the semiconductor industry has been using Coherent Gradient Sensing (CGS) interferometry or Fizeau interferometry to generate the wave front phase of the light reflected from the wafer surface. The wave front phase is then used to calculate the slope, which in turn generates a shape map of the silicon wafer. However, these techniques have only been available for 300mm wafers. In this paper we introduce Wave Front Phase Imaging (WFPI), a new technique that can measure the free-form shape of a patterned silicon wafer using only the intensity of the reflected light. In the WFPI system, the wafer is held vertically to avoid the effects of gravity during measurements. The wave front phase is measured by acquiring only the 2-dimensional intensity distribution of the reflected non-coherent light at two or more distances along the optical path using a standard, low-noise CMOS sensor. This method allows for very high data acquisition speed, equal to the camera's shutter time, and a number of data points equal to the number of pixels in the digital imaging sensor. In the measurements presented in this paper, we acquired 7.3 million data points on a full 200mm patterned silicon wafer with a lateral resolution of 65μm. The same system can also acquire data on a 300mm silicon wafer, in which case 16.3 million data points were collected at the same 65μm spatial resolution.
On-product overlay (OPO) is one of the most critical parameters for continued scaling according to Moore's law. Without good overlay between the mask and the silicon wafer inside the lithography tool, yield will suffer1. As the OPO budget shrinks, non-lithography process-induced stress causing in-plane distortions (IPD) becomes a more dominant contributor2. To estimate the process-induced in-plane wafer distortion after chucking the wafer onto the scanner chuck, a high-resolution measurement of the free-form shape of the unclamped wafer, with the gravity effect removed, is needed. By measuring both intra- and inter-die wafer distortions, a feed-forward prediction algorithm, as has been published by ASML, minimizes the need for alignment marks on the die and wafer and can be performed at any lithography layer3. Up until now, the semiconductor industry has been using Coherent Gradient Sensing (CGS) interferometry or Fizeau interferometry to generate the wave front phase of the light reflected from the wafer surface in order to measure the free-form wafer shape3,4,5. In this paper, we present Wave Front Phase Imaging (WFPI), a new method for generating a very high-resolution wave front phase map of the light reflected off the patterned silicon wafer surface. The wafer is held vertically so that the free wafer shape can be measured without being impacted by gravity. We show data using a WFPI patterned wafer geometry tool that acquires 16.3 million data points on a 300mm patterned silicon wafer with 65μm spatial resolution in a total data acquisition time of 14 seconds.
Wave Front Phase Imaging (WFPI) is a new technique for measuring the free shape of a silicon wafer. To avoid gravity affecting the wafer shape, the silicon wafer is held vertically during measurement using a custom-made three-point wafer holder. The wave front phase is measured using a non-coherent light source that is collimated and then reflected off the silicon wafer surface, with a unique new method that only needs to record the intensity of the reflected light at two or more distances along the optical path. Since only intensity images are used to generate the phase, commercially available CMOS sensors with very high pixel counts can be used, which enables a very high number of data points to be collected in the time required by the camera's shutter when using a dual-camera setup with simultaneous image acquisition. In the current lab system, a single camera on a linear translation stage acquires 16.3 million data points in 12 seconds, including the stage motion, on a full 300mm wafer, providing a lateral pixel resolution of 65μm. The flatness of the silicon wafers used to manufacture integrated circuits (IC) is controlled to tight tolerances to help ensure that the full wafer is sufficiently flat for lithographic processing. Advanced lithographic patterning processes require a detailed map of the free, non-gravitational wafer shape to avoid overlay errors caused by depth-of-focus issues. We present WFPI as a new technique for measuring the free shape of a silicon wafer with high resolution and high data count acquired at very high speed, using a system where the wafer is held vertically and free from the effects of gravity.
Wave Front Phase Imaging (WFPI) is a new wafer shape measurement technique that acquires millions of data points in seconds or less on a full 300mm silicon wafer. This provides lateral resolution well below 100μm, with the possibility of reaching the lens' optical resolution limit of 3-4μm. The system has high repeatability, with a root-mean-square standard deviation (σRMS) in the single-digit nm range for the global wafer shape geometry and in the sub-ångström (Å = 10⁻¹⁰ m) range for nanotopography. WFPI can collect data on the entire wafer to within a single pixel of the wafer edge roll-off1. The flatness of the silicon wafers used to manufacture integrated circuits (IC) is controlled to tight tolerances to help ensure that the full wafer is sufficiently flat for lithographic processing. Advanced lithographic patterning processes require a detailed map of the wafer shape to avoid overlay errors caused by depth-of-focus issues2. In this paper we present a detailed theoretical explanation of how the wave front phase sensor works.
On-product overlay (OPO) is one of the most critical parameters for continued scaling according to Moore's law. Without good overlay between the mask and the silicon wafer inside the lithography tool, yield will suffer. As the OPO budget shrinks, non-lithography process-induced stress causing in-plane distortions (IPD) becomes a more dominant contributor. To estimate the process-induced in-plane wafer distortion after chucking the wafer onto the scanner chuck, a high-resolution measurement of the free-form shape of the unclamped wafer, with the gravity effect removed, is needed. By measuring both intra- and inter-die wafer distortions, a feed-forward prediction algorithm, as has been published by ASML, minimizes the need for alignment marks on the die and wafer and can be performed at any lithography layer. Up until now, the semiconductor industry has been using Coherent Gradient Sensing (CGS) interferometry or Fizeau interferometry to generate the wave front phase of the light reflected from the wafer surface in order to measure the free-form wafer shape. In this paper, we present a new method to generate a very high-resolution wave front phase map of the light reflected from a patterned silicon wafer surface, from which the free-form wafer shape can be computed. We show data using a WFPI patterned wafer geometry tool that acquires 3.4 million data points on a 200mm patterned silicon wafer with 96µm spatial resolution in a data acquisition time of 5 seconds.
KEYWORDS: Sensors, Radon transform, Digital signal processing, Mobile devices, Detection and tracking algorithms, Image processing, Image segmentation, System on a chip, Lanthanum, Hough transforms
We propose a local bar-shaped structure detector that works in real time on high-resolution images.
It is based on the Radon transform, specifically on the multiscale variant, which is especially fast because it works in integer arithmetic and does not use interpolation.
The Radon transform conventionally operates on the whole image, not locally. In this paper we describe how, by stopping at the early stages of the multiscale Radon transform, we are able to locate structures locally.
We also evaluate the performance of the algorithm running on the CPU, GPU, and DSP of mobile devices, processing the images coming from the device's camera at acquisition time.
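To make the multiscale mechanism concrete, here is a minimal Python sketch (ours, not the authors' implementation) of one quadrant of a Götz-Druckmüller-style fast discrete Radon transform. The intermediate stages produced by the merge loop are sums over short, local line segments, which is exactly what early stopping exploits for local detection.

```python
import numpy as np

def fast_drt_quadrant(img):
    """One quadrant of the multiscale discrete Radon transform.
    img: N x N array, N a power of two. Returns R[h, s], the sum over
    the discrete line entering row h at the left edge and rising s rows
    across the image width. Complexity is O(N^2 log N)."""
    n_rows, n_cols = img.shape
    assert n_rows == n_cols and (n_cols & (n_cols - 1)) == 0

    # Stage 0: width-1 strips, each with the single slope s = 0.
    strips = [img[:, [c]] for c in range(n_cols)]
    width = 1
    while width < n_cols:
        merged = []
        for i in range(0, len(strips), 2):
            left, right = strips[i], strips[i + 1]
            out = np.zeros((n_rows, 2 * width))
            for s in range(2 * width):
                sl, shift = s // 2, (s + 1) // 2
                # The right segment starts 'shift' rows below the left
                # one; rows shifted past the edge contribute zero.
                r = np.zeros(n_rows)
                r[:n_rows - shift] = right[shift:, sl]
                out[:, s] = left[:, sl] + r
            merged.append(out)
        strips = merged
        width *= 2
    return strips[0]
```

Stopping the `while` loop after a few doublings leaves `strips` holding sums over narrow vertical bands, i.e., the local line integrals a bar detector needs.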
We present a formal inversion of the multiscale discrete Radon transform, valid both in 2D and 3D. With the transformed data from just one of the four quadrants of the direct 2D Radon transform, or from one of the twelve dodecants in the 3D case, we can invert the whole domain exactly and directly, with no iterations. The computational complexity of the proposed algorithms is O(N log N), with N the total size of the problem, either square or cubic. However, these inverse transforms are extremely ill-conditioned, so the presence of noise in the transformed domain renders them useless. We nevertheless present both algorithms and characterize their weakness against noise.
The flatness of the silicon wafers used to manufacture integrated circuits (IC) is controlled to tight tolerances to help ensure that the full wafer is sufficiently flat for lithographic processing1. Chemical-Mechanical Planarization (CMP) is one of many processes outside the lithographic sector that influence wafer flatness across each lithographic exposure field and across the wafer2. Advanced lithographic patterning processes require a detailed map of the wafer shape to avoid overlay errors caused by depth-of-focus issues1. In recent years, a metrology tool named PWG5™ (Patterned Wafer Geometry, 5th generation), based on double Fizeau interferometry to extract phase changes from the interferometric pattern of the reflective surface, has been used to generate wafer geometry maps to correct for process-induced focus issues as well as overlay problems2. In this paper we present Wave Front Phase Imaging (WFPI), a new patterned wafer geometry technique that measures the wave front phase using two intensity images of the light reflected off the patterned wafer. We show that the 300mm machine acquires 7.65 million data points in 5 seconds on a full 300mm patterned wafer with a lateral resolution of 96μm.
Wave Front Phase Imaging (WFPI), a new wafer geometry technique, is presented that acquires 7.65 million data points in 5 seconds on a full 300mm wafer, providing a lateral resolution of 96µm. The system has high repeatability, with a root-mean-square standard deviation (σRMS) in the single-digit nm range for the global wafer geometry and in the sub-ångström (Å = 10⁻¹⁰ m) range for the full-wafer nanotopography, for both 200mm and 300mm blank silicon wafers. WFPI can collect data on the entire wafer to within a single pixel, in our case 96µm, of the wafer edge roll-off. The flatness of the silicon wafers used to manufacture integrated circuits (IC) is controlled to tight tolerances to help ensure that the full wafer is sufficiently flat for lithographic processing. Advanced lithographic patterning processes require a detailed map of the wafer shape to avoid overlay errors caused by depth-of-focus issues. We present WFPI as a new technique with high resolution and high data count acquired at very high speed.
Wave Front Phase Imaging (WFPI) is used to measure the striae in an artificial transparent plate made of Schott N-BK7® glass by accurately measuring the Optical Path Difference (OPD) map. WFPI is a new technique capable of reconstructing an accurate high-resolution wave front phase map from two intensity images captured at different propagation distances. An incoherent light source generated by a light-emitting diode (LED) is collimated and transmitted through the sample, and the resulting beam carries the wave front information produced by the refractive index changes inside the sample1. From this information, WFPI solves the Transport of Intensity Equation (TIE) to obtain the wave front phase map. The topography of reflective surfaces can also be studied with a different arrangement in which the collimated beam is reflected and carries a wave front phase proportional to the surface topography. Three Schott N-BK7® glass block samples were measured, each marked at the location where the wave front phase measurement was to be performed2. Although the WFPI output is an OPD map, knowing the refractive index of the material at the measurement wavelength also yields the thickness variations of the plate.
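For readers unfamiliar with the TIE step mentioned above, a minimal sketch follows. It assumes nearly uniform illumination, under which the TIE reduces to a Poisson equation solvable with FFTs; function and parameter names are ours, and an actual WFPI implementation will differ.

```python
import numpy as np

def tie_phase(i1, i2, dz, wavelength, pixel):
    """Recover the wavefront phase (radians) from two intensity images
    i1, i2 captured dz apart, via the uniform-intensity TIE:
    laplacian(phi) = -(k / I0) * dI/dz, solved with an FFT Poisson solver."""
    k = 2.0 * np.pi / wavelength
    i0 = 0.5 * (i1 + i2).mean()          # mean intensity (uniformity assumed)
    g = -k * (i2 - i1) / dz / i0         # Poisson right-hand side
    ny, nx = i1.shape
    fx = np.fft.fftfreq(nx, d=pixel)
    fy = np.fft.fftfreq(ny, d=pixel)
    f2 = fx[None, :]**2 + fy[:, None]**2
    f2[0, 0] = np.inf                    # drop the undefined piston term
    return np.real(np.fft.ifft2(np.fft.fft2(g) / (-4.0 * np.pi**2 * f2)))
```

In transmission the phase converts to an OPD map via OPD = phase·wavelength/2π; in the reflective arrangement a further factor accounts for the double pass.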
The flatness of the silicon wafers used to manufacture integrated circuits (IC) is controlled to tight tolerances to help ensure that the full wafer is sufficiently flat for lithographic processing. Advanced lithographic patterning processes require a detailed map of the wafer shape to avoid overlay errors caused by depth-of-focus issues. A large variety of new materials is being introduced in the Back-End of Line (BEOL) to enable innovative architectures for new applications. The standard in-line control plan for the BEOL layer deposition steps is based on film thickness and global stress measurements, which can be performed on blanket wafers to check process equipment performance. However, the challenge remains to ensure high-performance metrology control of process equipment during high-volume manufacturing. With product tolerances getting tighter and architectures becoming more complex, there is an increasing demand for knowledge of the wafer shape. In this paper we present Wave Front Phase Imaging (WFPI), a new wafer geometry technique with which 7.65 million data points were acquired in 5 seconds on a full 300mm wafer, enabling a lateral resolution of 96μm.
Two algorithms are introduced for the computation of discrete integral transforms with a multiscale approach operating in discrete three-dimensional (3-D) volumes while considering its real-time implementation. The first algorithm, referred to as 3-D discrete Radon transform of planes, will compute the summation set of values lying in discrete planes in a cube that imitates, in discrete data, the integrals on two-dimensional planes in a 3-D volume similar to the continuous Radon transform. The normals of these planes, equispaced in ascents, cover a quadrilateralized hemisphere and comprise 12 dodecants. The second proposed algorithm, referred to as the 3-D discrete John transform of lines, will sum elements lying on discrete 3-D lines while imitating the behavior of the John or x-ray continuous transform on 3-D volumes. These discrete integral transforms do not perform interpolation on input or intermediate data, and they can be computed using only integer arithmetic with linearithmic complexity, thus outperforming the methods based on the Fourier slice-projection theorem for real-time applications. We briefly prove that these transforms have fast inversion algorithms that are exact for discrete inputs.
Point-of-view generation creates virtual views between two or more cameras observing a scene. This field receives attention from multimedia markets because sufficiently realistic view generation would allow free navigation between otherwise fixed points of observation. The new views must be interpolated from sampled data, aided by geometric information relating the real camera poses, the objects in the scene, and the desired point of view. Normally several steps are involved, globally known as the Structure from Motion (SfM) pipeline. Our study focuses on the last stage: image interpolation based on the disparities between known cameras. In this paper, a new method is proposed that uses depth maps generated by a single camera, named SEBI, allowing more efficient filling in the presence of occlusions. Occlusions are considered during interpolation by creating an occlusion map and an uncertainty map from the depth information that SEBI cameras provide.
In this paper we introduce a new metrology technique for measuring wafer geometry on silicon wafers. Wave Front Phase Imaging (WFPI) has high lateral resolution and is sensitive enough to measure roughness on a silicon wafer from a single image snapshot of the entire wafer. WFPI works by measuring the reflected intensity of monochromatic incoherent light at two different planes along the optical path with the same field of view. We show that the lateral resolution of the current system is 24μm, though it can be pushed below 5μm simply by adding more pixels to the image sensor, and that the amplitude resolution limit is 0.3nm. A 2-inch wafer was measured while lying on a flat sample holder, and the roughness was revealed by applying a double Gaussian high-pass filter to the global topography data. The same 2-inch wafer was also placed on a simulated robotic handler arm, and we show that even though gravity added extra bow to the wafer, the same roughness was still revealed at the same resolution after the high-pass filter was applied to the global wafer geometry data.
In this paper we show that Wave Front Phase Imaging (WFPI) has high lateral resolution and high sensitivity, enabling it to measure nanotopography and roughness on a silicon wafer from a single image of the entire wafer. WFPI works by measuring the reflected intensity of monochromatic incoherent light at two different planes along the optical path with the same field of view. We show that the lateral resolution of the system used for these experiments is 24μm, but that it can be pushed below 5μm simply by adding more pixels to the image sensor, and that the amplitude resolution limit is 0.3nm. Three 2-inch unpatterned silicon wafers were measured, and the nanotopography and roughness were revealed by applying a double Gaussian high-pass filter to the global topography data.
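As an illustration of the filtering step, here is a rough sketch. The exact double Gaussian filter is defined by SEMI standards; we simply approximate it as a cascaded Gaussian low-pass whose output is subtracted from the raw topography, and the cutoff-to-sigma conversion below is our assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def double_gaussian_highpass(height_map, cutoff_mm, pixel_mm):
    """Approximate double Gaussian high-pass: smooth the global
    topography with two cascaded Gaussian low-pass filters, then keep
    the short-wavelength residual (nanotopography/roughness)."""
    # Assumed FWHM-style conversion from spatial cutoff to sigma in pixels.
    sigma_px = cutoff_mm / pixel_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    lowpass = gaussian_filter(gaussian_filter(height_map, sigma_px), sigma_px)
    return height_map - lowpass
```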
We present a new wave front sensing technique based on detecting the propagation of light waves. It allows the user to acquire millions of data points within the pupil of the human eye, a resolution several orders of magnitude higher than current industry-standard ophthalmic devices. A first instrument was built and tested using standard calibration surfaces as well as an artificial eye. The paper then presents the first characterization of the optics of a real human eye measured with the newly developed high-resolution wave front phase sensing technique, showing the complexity of the human eye's ocular optics.
We present our latest advances in the design and implementation of a tunable automultiscopic display based on the tensor display model. A design comprising a three-layer display was introduced, in which the front and rear layers can be controlled with six degrees of freedom relative to the central layer of the system. A calibration method consisting of displaying a checkerboard pattern on each layer was proposed: by computing the homography of these patterns with respect to the reference plane, the needed adjustments can be estimated. An implementation based on this design was carried out and calibrated following the aforementioned technique. The obtained results demonstrate the feasibility of the implementation.
The discrete Radon transform is a technique for detecting lines in images. It is much lighter to compute than Radon transforms based on the Fourier slice theorem, which use the FFT as their basic computing block. Even so, it is not readily amenable to optimal fine-grained parallelization, due to the need to run four passes over mirrored and flipped versions of the input in order to compute the four quadrants, of 45 degrees each, that arise from the decomposition of discrete lines in slope-intercept form. A new method is proposed that solves the four quadrants simultaneously, allowing more efficient parallelization. In higher dimensions the Radon transform needs even more runs of the basic algorithm; e.g., in 3 dimensions there are 12 dodecants to be solved instead of 4 quadrants. The proposed method can be extended to alleviate the problem in those higher dimensions as well, achieving an even greater gain.
In this work we present a novel wave front phase sensing technique developed by Wooptix. This new wave front phase sensor uses only a standard imaging sensor and does not need any specialized optical hardware to sample the optical field. In addition, the wave front phase recovery is zonal; thus, the obtained wave front phase map provides as many height data points as there are pixels in the imaging sensor. We develop the mathematical foundations of this instrument as well as its theoretical and practical limits. Finally, we present the application of this sensor to silicon wafer metrology and comparisons against industry-standard metrology instruments.
In this work we have presented a brief insight into the capabilities of multilayer displays to selectively display information depending on the observer. We labeled the views of a light field as blocked and non-blocked, and a predefined text was assigned accordingly, modified to achieve a privacy criterion in the blocked case. Two ways to define the private views were presented. An evaluation of the output of both techniques was carried out in simulation, in both the spatial and frequency domains. Results showed that privacy was achievable and that each technique has an optimal operating point when taking into account the time-multiplexing capabilities of the multilayer display. A trade-off between the quality of the blocked and non-blocked views was also found.
The performance of the "weighted Fourier phase slope" centroiding algorithm at the subpupil image of a Shack–Hartmann wavefront sensor for point-like astronomical guiding sources is explored. This algorithm estimates the image's displacement in the Fourier domain by directly computing the phase slope at several spatial frequencies, without the intermediate step of computing the phase; it then applies optimized weights to the phase slopes at each spatial frequency, obtained by a Bayesian estimation method. The idea was inspired by cepstrum deconvolution techniques, and this relationship is illustrated. The algorithm's tilt estimation performance is characterized and contrasted with other known centroiding algorithms, such as thresholded centre of gravity (TCoG) and cross-correlation (CC), first through numerical simulations at the subpupil level, then at the pupil level, and finally on the laboratory test bench. Results show a sensitivity similar to that of the CC algorithm, which is superior to that of the TCoG algorithm when large fields of view are necessary, i.e., in an open-loop configured adaptive optics system, thereby increasing the guide star limiting magnitude by 0.6 to 0.7 mag. On the other hand, its advantage over the CC algorithm is its lower computational cost, by approximately an order of magnitude.
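To make the idea concrete, here is a toy one-dimensional version of a Fourier phase slope shift estimator (our sketch, with placeholder weights; the paper derives Bayesian-optimal weights and operates on 2D subpupil images). A shift between image and reference appears as a linear phase ramp in their cross-spectrum, so each low spatial frequency provides one independent displacement estimate.

```python
import numpy as np

def fourier_phase_slope_shift(img, ref, weights=(0.5, 0.3, 0.2)):
    """Estimate the x-displacement of img relative to ref (pixels).
    Each spatial frequency k gives delta = -angle(cross[k]) * n / (2*pi*k);
    the estimates are combined with fixed weights (placeholders here)."""
    n = img.shape[1]
    # Collapse to 1D x-profiles and form the cross-spectrum.
    cross = np.fft.fft(img.sum(axis=0)) * np.conj(np.fft.fft(ref.sum(axis=0)))
    weighted = 0.0
    for k, w in enumerate(weights, start=1):
        delta_k = -np.angle(cross[k]) * n / (2.0 * np.pi * k)
        weighted += w * delta_k
    return weighted / sum(weights)
```

Note that each frequency k only measures shifts unambiguously up to n/(2k) pixels, which is one reason the low frequencies dominate the weighting.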
KEYWORDS: 3D displays, LCDs, Lanthanum, Optical engineering, Reconstruction algorithms, Signal to noise ratio, Multiplexing, 3D image processing, Translucency, Display technology
Tensor displays are an option in glasses-free three-dimensional (3-D) display technology. An initial solution has to be set to decompose the light-field information to be represented by the system. We have analyzed the impact of the initial guess on the multiplicative update rules in terms of peak signal-to-noise ratio, and we propose a method based on depth-map estimation from the input light field. Results from simulations were obtained and compared with previous literature. In our sample, the initial values have a large influence on the results and on convergence to a local minimum. The quality of the output stabilizes after a certain number of iterations, suggesting that a limit on that number should be imposed. We show that the proposed methods outperform pre-existing ones.
The discrete Radon transform (DRT) calculates, with linearithmic complexity, the sum of pixels along a set of discrete lines covering all possible slopes and intercepts in an image. In 2006, a method was proposed to compute the inverse DRT that remains exact and fast, in spite of being iterative. In this work the DRT pair is used to build Ridgelet and Curvelet transforms that perform focus measurement of an image. The shape-from-focus approach based on the DRT pair is then applied to a focal stack to create a depth map of a scene.
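The final shape-from-focus step can be summarized with a generic sketch (ours): score each plane of the focal stack with a local focus measure and take the per-pixel argmax over planes as the depth index. Here a simple Laplacian-energy measure stands in for the paper's DRT-based ridgelet/curvelet measure.

```python
import numpy as np
from scipy.ndimage import laplace

def depth_from_focal_stack(stack):
    """stack: array of refocused planes, shape (n_planes, h, w).
    Returns an (h, w) map of the plane index where each pixel is
    sharpest, i.e., where the local focus measure peaks."""
    scores = np.stack([np.abs(laplace(plane)) for plane in stack])
    return scores.argmax(axis=0)
```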
The Adaptive Optics Lucky Imager, AOLI, is an instrument developed to deliver the highest spatial resolution ever obtained in the visible, 20 mas, from ground-based telescopes. In AOLI a new philosophy of instrumental prototyping has been applied, based on the modularization of the subsystems. This modular concept offers maximum flexibility regarding the instrument, telescope or the addition of future developments.
The combination of Lucky Imaging with a low-order adaptive optics system was demonstrated very successfully on the Palomar 5m telescope nearly 10 years ago. It is still the only approach to have delivered such high-resolution images of faint astronomical targets in the visible or near infrared on ground-based telescopes. This paper describes the development of AOLI for deployment, initially on the WHT 4.2m telescope in La Palma, Canary Islands. In particular, we look at the design and status of our low-order curvature wavefront sensor, which has been somewhat simplified to make it more efficient, ensuring coverage over much of the sky with natural guide stars as reference objects. AOLI uses optically butted electron-multiplying CCDs to give an imaging array of 2000 x 2000 pixels.
In this paper, we use information from the light field to obtain a distribution map of the wavefront phase. This distribution is associated with the refractive index changes that are relevant to the propagation of light through a heterogeneous or turbulent medium. By measuring the wavefront phase from a single shot, it is possible to deconvolve blurred images affected by turbulence. If this deconvolution is applied to light fields obtained by plenoptic acquisition, the original optical resolution associated with the objective lens is restored; in effect, we are using a kind of superresolution technique that works properly even in the presence of turbulence. The wavefront phase can also be estimated from the defocused images associated with the light field: we present here preliminary results using this approach.
Refocusing a plenoptic image by digital means after the exposure has been thoroughly studied in recent years, but few efforts have been made toward real-time implementation in a constrained environment such as that provided by current mobile phones and tablets. In this work we address that challenge, demonstrating that a complete focal stack, comprising 31 refocused planes from a (256×16)² plenoptic image, can be computed within seconds on a current SoC mobile phone platform. The choice of an appropriate algorithm is the key to success. In a previous work we developed an algorithm, the fast approximate 4D:3D discrete Radon transform, that performs this task with linear time complexity where others obtain quadratic or linearithmic time complexity. Moreover, that algorithm requires no complex-number transforms, no trigonometric calculus, and not even multiplications or floating-point numbers. The algorithm has been ported to a multi-core ARM chip on an off-the-shelf tablet running Android. A careful implementation exploiting parallelism at several levels has been necessary; the final implementation takes advantage of multi-threading in native code and NEON SIMD instructions. As a result, our current implementation completes the refocusing task within seconds for a 16-megapixel image, much faster than previous attempts running on powerful PC platforms or dedicated hardware. The times consumed by the different stages of the digital refocusing are given, and the strategies used to achieve this result are discussed. Timing results are given for a variety of environments within the Android ecosystem, from the weaker/cheaper SoCs to the top of the line for 2013.
Modern astronomic telescopes take advantage of multi-conjugate adaptive optics, in which wavefront sensors play a key role. A single sensor capable of measuring wavefront phases at any angle of observation would be helpful when improving atmospheric tomographic reconstruction. A new sensor combining both geometric and plenoptic arrangements is proposed, and a simulation demonstrating its working principle is also shown. Results show that this sensor is feasible, and also that single extended objects can be used to perform tomography of atmospheric turbulence.
The plenoptic camera was originally created to allow the capture of the light field, a four-variable volume representation of all rays and their directions, that allows the creation by synthesis of a 3D image of the observed object. This method has several advantages over 3D capture systems based on stereo cameras, since it does not need frame synchronization or geometric and color calibration, and it has many applications, from 3DTV to medical imaging. A plenoptic camera uses a microlens array to measure the radiance and direction of all the light rays in a scene. The array is placed at the focal plane of the objective lens, and the sensor is at the focal plane of the microlenses. In this paper we study the application of our super-resolution algorithm to mobile phone cameras. With a commercial camera, it is already possible to obtain images of good resolution and a sufficient number of refocused planes just by placing a microlens array in front of the detector.
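For concreteness, the following sketch (ours, assuming an idealized camera with an ml×ml pixel patch behind each microlens and no calibration issues) shows how a raw plenoptic image rearranges into the 4D light field of subaperture views that refocusing and super-resolution algorithms consume.

```python
import numpy as np

def views_from_plenoptic(raw, ml):
    """Rearrange a toy plenoptic raw image into a 4D light field.
    raw: (ny*ml, nx*ml) mosaic; pixel (u, v) of every microlens patch
    belongs to the same perspective view, so a reshape + transpose
    yields lf[u, v, y, x]."""
    ny, nx = raw.shape[0] // ml, raw.shape[1] // ml
    lf = raw.reshape(ny, ml, nx, ml).transpose(1, 3, 0, 2)
    return lf  # lf[u, v] is one ny x nx subaperture view
```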
We present a geometric sensor to restore the local tip-tilt in a segmented surface using the Van Dam and Lane algorithm [M. A. van Dam and R. G. Lane, Appl. Opt.41(26), 5497–5502 (2002)]. The paper also presents an implementation of this algorithm using graphical processing units as specialized hardware. This compute unified device architecture implementation achieves real-time results inside the stability time of the atmosphere for resolutions of up to 1024×1024 pixels.
The plenoptic camera has been proposed as an alternative wavefront sensor adequate for extended objects within the context of the design of the European Solar Telescope (EST), but it can also be used with point sources. Originating in the field of electronic photography, the plenoptic camera directly samples the light field function, the four-dimensional representation of all the light entering a camera. Image formation can then be seen as the result of the photography operator applied to this function, and many other features of the light field can be exploited to extract information about the scene, such as depth computation for 3D imaging or, as specifically addressed in this paper, wavefront sensing.
The underlying concept of the plenoptic camera can be adapted to the case of a telescope by placing a lenslet array of the same f-number at the focal plane, thus obtaining at the detector a set of pupil images corresponding to every sampled point of view. This approach generalizes the Shack-Hartmann, curvature, and pyramid wavefront sensors, in the sense that all of them can be considered particular cases of the plenoptic wavefront sensor, because the information needed as the starting point for those sensors can be derived from the plenoptic image.
Laboratory results obtained with extended objects, phase plates, and commercial interferometers, and even telescope observations using stars and the Moon as an extended object, are presented, clearly showing the capability of the plenoptic camera to behave as a wavefront sensor.
We develop a new algorithm that extends the bidimensional fast digital Radon transform from Götz and Druckmüller (1996) to digitally simulate the refocusing of a 4-D lightfield into a 3-D volume of photographic planes, as previously done by Ng et al. (2005), but with the minimum number of operations. This new algorithm does not require multiplications, just sums, and its computational complexity is O(N⁴) to achieve a volume consisting of 2N photographic planes focused at different depths from an N⁴ plenoptic image. This reduced complexity allows for the acquisition and processing of a plenoptic sequence with the purpose of estimating 3-D shape at video rate. Examples are given of implementations on GPU and CPU platforms. Finally, a modified version of the algorithm to deal with domains of sizes other than a power of two is proposed.
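As a reference point for what the fast transform computes, here is the straightforward shift-and-add refocusing baseline (our sketch with integer shifts; the paper's contribution is producing the whole stack far more cheaply via the extended digital Radon transform).

```python
import numpy as np

def naive_focal_stack(lf, slopes):
    """Digital refocusing by shift-and-add, Ng (2005) style.
    lf: light field indexed [u, v, y, x]. For each slope alpha, every
    (u, v) view is shifted in proportion to its pupil coordinate and
    the views are averaged, synthesizing one focal plane."""
    nu, nv, ny, nx = lf.shape
    stack = []
    for alpha in slopes:
        acc = np.zeros((ny, nx))
        for u in range(nu):
            for v in range(nv):
                dy = int(round(alpha * (u - nu // 2)))
                dx = int(round(alpha * (v - nv // 2)))
                acc += np.roll(np.roll(lf[u, v], dy, axis=0), dx, axis=1)
        stack.append(acc / (nu * nv))
    return np.stack(stack)
```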
Plenoptic cameras have been developed in recent years as a passive method for 3D scanning, allowing focal stack capture from a single shot. But the data recorded by this kind of sensor can also be used to extract the wavefront phases associated with atmospheric turbulence in an astronomical observation.
The terrestrial atmosphere degrades telescope images due to the refractive index changes associated with turbulence. Artificial sodium laser guide stars (Na-LGS, 90km high) must be used to obtain the reference wavefront phase and the optical transfer function of the system, but they are affected by defocus because of their finite distance to the telescope.
Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically, taking advantage of the two principal characteristics of plenoptic sensors at the same time: 3D scanning and wavefront sensing. Plenoptic sensors can therefore be studied and used as an alternative wavefront sensor for adaptive optics, particularly relevant now that Extremely Large Telescope projects are being undertaken.
In this paper, we present the first observational wavefront phases extracted from real astronomical observations, using point-like and extended objects, and we show that the restored wavefronts match Kolmogorov atmospheric turbulence.
Plenoptic cameras have been developed over recent years as a passive method for 3D scanning. Several superresolution algorithms have been proposed to counter the resolution decrease associated with lightfield acquisition through a microlens array, and a number of multiview stereo algorithms have been applied to extract depth information from plenoptic frames. Real-time systems have been implemented using specialized hardware such as Graphical Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs).
In this paper, we present our own implementations of the aforementioned aspects, together with two new developments: a portable plenoptic objective that transforms any conventional 2D camera into a 3D CAFADIS plenoptic camera, and the novel use of a plenoptic camera as a wavefront phase sensor for adaptive optics (AO).
The terrestrial atmosphere degrades telescope images due to the refractive index changes associated with turbulence. These changes require high-speed processing that justifies the use of GPUs and FPGAs. Artificial sodium laser guide stars (Na-LGS, 90km high) must be used to obtain the reference wavefront phase and the optical transfer function of the system, but they are affected by defocus because of their finite distance to the telescope. Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically.
These advances significantly increase the versatility of the plenoptic camera and provide a new contribution relating the wave optics and computer vision fields, as many authors have advocated.
In this work we develop a new algorithm that extends the bidimensional fast digital Radon transform from Götz and Druckmüller (1996) to digitally simulate the refocusing of a 4D light field into a 3D volume of photographic planes, as previously done by Ren Ng et al. (2005), but with the minimum number of operations. This new algorithm does not require multiplications, just sums, and its computational complexity is O(N⁴) to achieve a volume consisting of 2N photographic planes focused at different depths from an N⁴ plenoptic image. This reduced complexity allows for the acquisition and processing of a plenoptic sequence with the purpose of estimating 3D shape at video rate. Examples are given of implementations on GPU and CPU platforms. Finally, a modified version of the algorithm to deal with domains of sizes other than a power of two is proposed.
Multi-Conjugate Adaptive Optics (MCAO) will play a key role in future astronomy. Every Extremely Large Telescope (ELT) is being designed with its MCAO module, and most of their instruments will rely on that kind of correction for optimum performance. Many technical challenges have to be solved in order to develop MCAO systems. One of them, related to use on ELTs, is finding fast algorithms to perform the reconstruction at the required speed. For that reason we have been studying the application of the Fourier Transform Reconstructor (FTR) to MCAO. We use the Fourier slice theorem to reconstruct the atmospheric volume. The process consists of reconstructing "slices" of atmosphere, taking 1D FFTs of the different projections to build a 2D Fourier space that is inverse-transformed to build the reconstructed slice. The advantage of using the FTR is that the algorithm directly provides the Fourier transform of the projections, speeding up the process. To do a good reconstruction it is necessary to know the height at which the laser guide star is focused, and we propose to use a plenoptic camera to obtain this information, which we use together with the available information about the atmosphere being reconstructed, Cn², to weight the inverse transforms and obtain a better estimate. The height is obtained in real time, a very important advantage for the reconstruction. We present the preliminary results of our MCAO simulations and the configuration of the plenoptic camera that could be applied to an ELT.
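A toy illustration of the Fourier-slice step follows (our sketch, with crude nearest-neighbour gridding and none of the Cn²-based weighting described above): the 1D FFT of each projection fills one radial line of the 2D spectrum, and an inverse 2D FFT yields the reconstructed slice.

```python
import numpy as np

def direct_fourier_slice(projections, angles, n):
    """Direct-Fourier reconstruction of one n x n slice from 1D
    projections taken at the given angles (radians). Each projection's
    FFT is one radial slice of the slice's 2D spectrum (Fourier slice
    theorem); a Cartesian grid is filled by nearest neighbour."""
    F = np.zeros((n, n), dtype=complex)
    freqs = np.fft.fftfreq(n)                  # cycles per sample
    for proj, theta in zip(projections, angles):
        line = np.fft.fft(proj)
        for i, f in enumerate(freqs):
            u = int(round(f * np.cos(theta) * n)) % n
            v = int(round(f * np.sin(theta) * n)) % n
            F[v, u] = line[i]
    return np.real(np.fft.ifft2(F))
```

Proper gridding/interpolation and density compensation matter a great deal in practice; this sketch only shows where the speed-up comes from, since the FTR already provides the projections' Fourier transforms.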
The use of AO in Extremely Large Telescopes, hitherto employed to improve performance in smaller telescopes, now becomes mandatory to achieve diffraction-limited images with such large apertures. On the other hand, the new dimensions push the specifications of AO systems to new frontiers, where the orders of magnitude in computation power, time response, and the required number of actuators impose new challenges on the technology. In some respects, implementation methods used in the past are no longer applicable. This paper examines the real dimension of the problem posed by ELTs and shows the results obtained in the laboratory for a real modal wavefront recovery algorithm (Hudgin) implemented on FPGAs. Some approximations are studied, and the performance in terms of configuration parameters is compared. A preferred configuration is also justified.
The plenoptic wavefront sensor combines measurements at the pupil and image planes in order to obtain wavefront information from different points of view simultaneously, being capable of sampling the volume above the telescope to extract tomographic information about the atmospheric turbulence. After describing the working principle, a laboratory setup is used to verify the capability of measuring the pupil-plane wavefront. A comparative discussion with respect to other wavefront sensors is also included.
A procedure has been developed to compute static aberrations from the PSF measured with the lucky imaging technique, using a nearby star as the point source to probe the optical system. This PSF is iteratively turned into a phase map at the pupil using the Gerchberg-Saxton algorithm, and then converted into the appropriate actuation information for a deformable mirror having a low actuator count but large stroke capability.
The main advantage of this procedure is its capability of correcting static aberrations in the specific pointing direction and without the need for a wavefront sensor.
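The Gerchberg-Saxton loop described above is simple enough to sketch (our minimal version; real use needs the centering, sampling, and amplitude normalization we gloss over): alternate between focal and pupil planes, imposing the measured PSF amplitude in one and the known aperture in the other, while keeping the retrieved phase.

```python
import numpy as np

def gerchberg_saxton(psf_amplitude, pupil_mask, n_iter=100):
    """Iterative phase retrieval: find the pupil phase whose focal-plane
    image matches the measured PSF amplitude (sqrt of intensity).
    pupil_mask is 1 inside the aperture, 0 outside."""
    field = pupil_mask.astype(complex)                        # flat start
    for _ in range(n_iter):
        focal = np.fft.fft2(field)
        focal = psf_amplitude * np.exp(1j * np.angle(focal))  # impose data
        field = np.fft.ifft2(focal)
        field = pupil_mask * np.exp(1j * np.angle(field))     # impose aperture
    return np.angle(field) * pupil_mask   # estimated static aberration map
```

The resulting phase map is what would then be projected onto the deformable mirror's (few, large-stroke) actuators.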
ELT laser guide star wavefront sensors are planned to have specifically developed sensor chips, which will probably include readout logic and D/A conversion, followed by a powerful FPGA slope computer located very close to the sensor, but not inside it, for flexibility and simplicity reasons. This paper presents the architecture of an FPGA-based wavefront slope computer capable of handling the sensor output stream in a massively parallel approach. It features dark and flat field correction, the flexibility needed to accommodate complex processing schemes, the capability of undertaking at maximum speed all computations expected to be performed, even those not strictly related to the calculation of the slopes, and the necessary housekeeping controls to properly command it and evaluate its behaviour. Feasibility using today's technology is evaluated, clearly showing its viability, together with an analysis of the amount of external memory, power consumption, and printed circuit board space needed.
The CAFADIS camera is a new sensor patented by Universidad de La Laguna (Canary Islands, Spain): international patent PCT/ES2007/000046 (WIPO publication number WO/2007/082975). It can measure the wavefront phase and the distance to the light source at the same time, in a real-time process. This can be really useful when using Adaptive Optics with Laser Guide Stars, in order to know the LGS height variations during the observation, or even the 3D LGS profile at the Na layer.
The CAFADIS camera has been designed using specialized hardware: Graphical Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). These two kinds of electronic hardware present architectures capable of handling the sensor output stream in a massively parallel approach. Previous papers have shown their suitability for AO in ELTs.
CAFADIS essentially consists of a microlens array at the telescope image space, sampling the image instead of the telescope pupil. Conceptually, when only 2x2 microlenses are present, it is very similar to the pyramid sensor. But in fact, this optical design can be used to measure distances in the object space using a variety of techniques.
Our paper shows a simulation of an observation using Na-LGS and Rayleigh-LGS at the same time, where both LGS heights are accurately measured. The employed techniques are presented and future applications are introduced.
The CAFADIS camera is a new sensor patented by Universidad de La Laguna (Canary Islands, Spain): international patent PCT/ES2007/000046 (WIPO publication number WO/2007/082975). It can measure the wavefront phase and the distance to the light source at the same time, in a real-time process. It uses specialized hardware: Graphical Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). These two kinds of electronic hardware present architectures capable of handling the sensor output stream in a massively parallel approach. FPGAs are faster than GPUs, which is why it is worth using FPGA integer arithmetic instead of GPU floating-point arithmetic.
GPUs must not be forgotten: as we have shown in previous papers, they are efficient enough to resolve several problems for AO in Extremely Large Telescopes (ELTs) in terms of processing-time requirements; in addition, GPUs show a widening gap in computing speed relative to CPUs, and they are much more powerful for implementing AO simulations than common software packages running on CPUs.
Our paper shows an FPGA implementation of the wavefront phase recovery algorithm using the CAFADIS camera. This is done in two steps: the estimation of the telescope pupil gradients from the telescope focus image, and then the novel 2D-FFT on the FPGA. Processing-time results are compared to our GPU implementation. In effect, we are comparing the two kinds of arithmetic mentioned above, thereby helping to answer the question of the viability of FPGAs for AO in ELTs.
ELT laser guide star wavefront sensors are planned to handle an expected amount of data that is overwhelmingly large (1600×1600 pixels at 700 fps). Given the calculations involved, the solutions must consider running on specialized hardware such as Graphical Processing Units (GPUs) or Field Programmable Gate Arrays (FPGAs), among others.
If a Shack-Hartmann wavefront sensor is finally selected, the wavefront slopes can be computed using centroid or correlation algorithms. Most developments are designed using centroid algorithms, but precision ought to be taken into account too, and there correlation algorithms are really competitive.
This paper presents an FPGA-based wavefront slope implementation, capable of handling the sensor output stream in a massively parallel approach, using a correlation algorithm previously tested and compared against the centroid algorithm. Processing-time results are shown, and they demonstrate the suitability of FPGA integer arithmetic for the resolution of AO problems.
The selected architecture is based on today's commercially available FPGAs, which have a very limited amount of internal memory. This limits the dimensions used in our implementation, but it also means that there is a lot of margin to move real-time algorithms from conventional processors to future FPGAs, obtaining benefits from their flexibility, speed, and intrinsically parallel architecture.
The forthcoming Extremely Large Telescopes and the new generation of Extreme Adaptive Optics systems bring a boost in the number of actuators that makes real-time correction of the atmospheric aberration computationally challenging. It is necessary to study new algorithms for performing adaptive optics at the required speed. Among the latest generation of algorithms being studied, the Fourier Transform Reconstructor (FTR) appears as a promising candidate. Its feasibility for Single-Conjugate Adaptive Optics has been extensively proved by Poyneer et al.[1] As part of the activities supported by the ELT Design Study (European Community's Framework Programme 6), we have studied the performance of this algorithm applied to the case of the European ELT in two different configurations, single-conjugate and ground-layer adaptive optics, and we are studying different approaches to apply it to the more complex multi-conjugate case. The algorithm has been tested on ESO's OCTOPUS software, which simulates the atmosphere, the deformable mirror, the sensor, and the closed-loop control. Its performance has been compared with other algorithms, as has its response in the presence of noise and under various atmospheric conditions. The good results on performance and robustness, and the possibility of parallelizing the algorithm (shown by Rodríguez-Ramos and Marichal-Hernández), make it an excellent alternative to the typically used matrix-vector multiply algorithm.
Large degree-of-freedom, real-time adaptive optics control requires reconstruction algorithms that are computationally efficient and readily parallelized for hardware implementation. Poyneer et al. [J. Opt. Soc. Am. A 19, 2100–2111 (2002)] have shown that the wavefront reconstruction with the use of the fast Fourier transform (FFT) and spatial filtering is computationally tractable and sufficiently accurate for its use in large Shack–Hartmann-based adaptive optics systems (up to 10,000 actuators). We show here that by the use of graphical processing units (GPUs), a specialized hardware capable of performing FFTs on big sequences almost 5 times faster than a high-end CPU, a problem of up to 50,000 actuators can already be done within a 6-ms limit. We give the method to adapt the FFT in an efficient way for the underlying architecture of GPUs.
Large degree-of-freedom real-time adaptive optics control requires reconstruction algorithms that are computationally efficient and readily parallelized for hardware implementation. Lisa Poyneer (2002) has shown that wavefront reconstruction with the use of the fast Fourier transform (FFT) and spatial filtering is computationally tractable and sufficiently accurate for use in large Shack-Hartmann-based adaptive optics systems (up to 10,000 actuators). We show here that by use of Graphical Processing Units (GPUs), specialized hardware capable of performing FFTs on big sequences almost 7 times faster than a high-end CPU, a problem of up to 50,000 actuators can already be handled within a 6 ms limit. The method to adapt the FFT in an efficient way to the underlying architecture of GPUs is given.
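For reference, the core least-squares Fourier reconstruction filter referred to in both versions of this abstract can be sketched in a few lines (our Python version, ignoring the boundary-condition and aliasing handling that Poyneer et al. analyze in detail):

```python
import numpy as np

def fft_reconstruct(sx, sy, pixel=1.0):
    """Least-squares integration of x/y wavefront slope maps in the
    frequency domain: with S_x = k_x * Phi and S_y = k_y * Phi, the
    estimate is Phi = (conj(k_x) S_x + conj(k_y) S_y) / (|k_x|^2 + |k_y|^2)."""
    ny, nx = sx.shape
    kx = 2j * np.pi * np.fft.fftfreq(nx, d=pixel)[None, :]
    ky = 2j * np.pi * np.fft.fftfreq(ny, d=pixel)[:, None]
    denom = np.abs(kx)**2 + np.abs(ky)**2
    denom[0, 0] = np.inf                 # piston is unobservable from slopes
    num = np.conj(kx) * np.fft.fft2(sx) + np.conj(ky) * np.fft.fft2(sy)
    return np.real(np.fft.ifft2(num / denom))
```

The whole reconstruction is three FFTs plus pointwise arithmetic, which is what makes it map so well onto GPU hardware.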
We have developed a Shack-Hartmann sensor simulation, propagating the complex amplitude of the electromagnetic field using fast Fourier transforms. The Shack-Hartmann sensor takes as input the atmospheric wavefront frames generated by the Roddier algorithm and provides, as output, the subpupil images. The centroids and the wavefront phase maps are computed combining GPU and CPU.
The algorithms on the GPU are written in nVidia's C for Graphics (Cg) language and run on a CineFX graphics engine. Such a graphics engine provides computational power several times greater than the usual CPU-FPU combination, at a reduced cost. Any algorithm implemented on these engines must first be adapted from its original form to fit the pipeline capabilities. To assess performance, we compare the results of the same algorithms implemented on GPU and on CPU.
We present here, for the first time, preliminary results on wavefront phase recovery using a GPU. We chose a zonal algorithm that fits better with the stream paradigm of GPUs. The results show a 10x speedup in the GPU centroid algorithm implementation and a 2x speedup in the phase recovery one, compared with the same algorithms on CPU.
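The per-subaperture centroid step that dominates this workload is, in serial Python form (our sketch; the threshold fraction is an assumption), the kind of independent, data-parallel kernel that maps naturally onto a GPU:

```python
import numpy as np

def thresholded_cog(spot, threshold_frac=0.1):
    """Thresholded centre-of-gravity for one Shack-Hartmann subpupil
    image: clip the background, then return the intensity-weighted
    mean (x, y) position of the spot."""
    s = spot - threshold_frac * spot.max()
    s[s < 0] = 0.0
    ys, xs = np.mgrid[0:spot.shape[0], 0:spot.shape[1]]
    total = s.sum()
    return (xs * s).sum() / total, (ys * s).sum() / total
```

On a GPU, one such kernel runs per subaperture, with all subapertures processed simultaneously.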
The PSF of a segmented mirror telescope is strongly affected by segment alignment, which can negate the performance of adaptive optics systems. The piston and tilt of each segment must be uniformly adjusted in relation to the rest of the segments. Furthermore, direct detection of the alignment error with natural stars would be desirable in order to monitor the errors during the astronomical observation. We have studied the piston-error information that is lost in curvature measurements in the presence of atmospheric turbulence, and we present a new algorithm to obtain the local piston using the curvature sensor. A phase-wrapping effect is shown to be responsible for the loss of the curvature information needed to recover the piston error map well enough; this happens not only in the presence of atmospheric turbulence, but also without it.
Segment alignment is one of the fundamental factors affecting the shape of the PSF of an optical segmented surface (composed of mirrors or lenses). The tilt and piston of each segment must be very well and uniformly adjusted in relation to the rest of the segments. The Shack-Hartmann sensor is very efficient at detecting the local wavefront tilt, and the curvature sensor is sensitive enough to detect local piston errors in segmented mirrors (1). We show, with numerical simulations, how these two sensors work in the presence of tilt and piston aberrations. We then propose a combined sensor that simultaneously senses the wavefront by Shack-Hartmann and curvature techniques. An iterative process should improve the measurements.
Segment alignment is one of the fundamental factors affecting the shape of the PSF of a segmented mirror telescope. The tilt and piston of each segment must be very well and uniformly adjusted in relation to the rest of the segments. The Hartmann-Shack sensor is very efficient at detecting the local wavefront tilt, but measuring piston with this sensor is troublesome, especially when the wavefronts are affected by atmospheric turbulence. We show, with numerical simulations, that curvature sensing is sensitive enough to detect piston errors in segmented mirrors, even in the presence of atmospheric turbulence. This would permit piston measurements with natural stars during observation time.
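The curvature signal these piston studies rely on can be written in one line (our sketch; inside the pupil it is proportional to the local wavefront Laplacian, boundary terms aside):

```python
import numpy as np

def curvature_signal(i_intra, i_extra):
    """Normalized difference of intra- and extra-focal intensity images;
    proportional to the local wavefront curvature inside the pupil,
    which is what makes segment piston steps visible to the sensor."""
    return (i_intra - i_extra) / (i_intra + i_extra)
```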