Although head-up displays (HUDs) have already been installed in some commercial vehicles, their application to augmented reality (AR) has been limited by narrow fields of view (FoVs) and fixed virtual-image distances. Matching the depth of AR information to real objects across a wide FoV is a key requirement for AR HUDs to provide a safe driving experience. Meanwhile, current approaches based on two-plane virtual images and computer-generated holography suffer from partial depth control and high computational complexity, respectively, which makes them unsuitable for fast-moving vehicles. To bridge this gap, we propose a light-field-based 3D display technology with eye tracking. We begin by matching the HUD optics with the view formation of the light-field display. First, we design mirrors that deliver high-quality virtual images with an FoV of 10 × 5° over a total eyebox of 140 × 120 mm and compensate for the curved windshield shape. Next, we define the procedure that translates the driver's eye position, obtained via eye tracking, to the plane of the light-field display views. We further implement a lenticular-lens design and the corresponding sub-pixel-allocation-based rendering, for which we construct a simplified model that substitutes for the freeform mirror optics. Finally, we present a prototype that affords the desired image quality, 3D image depth of up to 100 m, and a crosstalk level below 1.5%. Our findings indicate that such 3D HUDs can become the mainstream technology for AR HUDs.
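The eye-position-to-view translation described above can be sketched as a simple quantization over the eyebox. This is a minimal illustrative sketch: only the 140 mm eyebox width comes from the text, while the view count, the 1D horizontal mapping, and all names are assumptions.

```python
def eye_to_view_index(eye_x_mm, eyebox_width_mm=140.0, num_views=16):
    """Map a tracked horizontal eye position (mm, measured from the eyebox
    edge) to the nearest light-field view index. The 16-view count is a
    hypothetical parameter, not the paper's value."""
    # Clamp the tracked position into the eyebox, then quantize to a view.
    t = min(max(eye_x_mm / eyebox_width_mm, 0.0), 1.0)
    return min(int(t * num_views), num_views - 1)
```

In a real system the mapping would also account for the freeform-mirror magnification between the eyebox plane and the lenticular view plane, which the paper handles with its simplified optics model.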
A 10.1-inch 2D/3D switchable display using an integrated single light-guide plate (LGP) with a trapezoidal light-extraction (TLE) film was designed and fabricated. The integrated single LGP consists of inverted trapezoidal line structures, formed by attaching a TLE film to its top surface, and cylindrical lens structures on its bottom surface. The top surface of the TLE film is also bonded to the bottom surface of an LCD panel to maintain 3D image quality, which can be seriously degraded by gap variations between the LCD panel and the LGP. The inverted trapezoidal line structures act as the slit apertures of parallax barriers in 3D mode. Light beams from LED light sources placed along the left and right edges of the LGP bounce between the top and bottom surfaces of the LGP and, when they strike the inclined surfaces of the inverted trapezoidal structures, are emitted toward the LCD panel. In 2D mode, light beams from LED light sources arranged on the top and bottom edges of the LGP are emitted toward the lower surface when they strike the cylindrical lens structures and are reflected to the front surface by a reflective film. Applying the integrated single LGP with the TLE film, we constructed a 2D/3D switchable display prototype with a 10.1-inch tablet panel of WUXGA resolution (1,200 × 1,920). The prototype shows light-field 3D and 2D images without interference artifacts between the two modes and achieves luminance uniformity of over 80%. The display generates both 2D and 3D images without increasing the thickness or power consumption of the display device.
Nonideal stereo videos do not hinder the viewing experience on stereoscopic displays. On autostereoscopic displays, however, nonideal stereo videos are the main cause of reduced three-dimensional quality, producing calibration artifacts and multiview-synthesis artifacts. We propose an efficient multiview rendering algorithm for autostereoscopic displays that takes uncalibrated stereo as input. First, the epipolar geometry of the multiple viewpoints of a multiview display is analyzed. The uncalibrated camera poses for the display viewpoints are then estimated by algebraic approximation; the multiview images rendered from these approximated poses contain no projection or warping distortion. Finally, by exploiting the rectification homographies and the disparities of the rectified stereo pair, the multiview images can be determined from the estimated camera poses. Experimental results show that the multiview synthesis algorithm produces results that are temporally consistent and well calibrated, without warping distortion.
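On rectified stereo, intermediate views can be synthesized by shifting pixels along scanlines by a fraction of the disparity. The sketch below is a minimal forward-warping stand-in for the paper's homography-based pipeline, assuming already-rectified input; it does no hole filling or occlusion handling, and all names are illustrative.

```python
import numpy as np

def synthesize_view(left, disparity, alpha):
    """Warp a rectified left image toward the right view by the fraction
    `alpha` of the per-pixel disparity (simplified forward warping;
    occluded and uncovered pixels are left as zeros)."""
    h, w = left.shape[:2]
    out = np.zeros_like(left)
    xs = np.arange(w)
    for y in range(h):
        # Target columns after shifting by the scaled disparity.
        tx = np.clip(np.round(xs - alpha * disparity[y]).astype(int), 0, w - 1)
        out[y, tx] = left[y, xs]
    return out
```

Sweeping `alpha` from 0 to 1 yields the in-between viewpoints a multiview display needs from a single rectified stereo pair.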
To commercialize glasses-free 3D displays more widely, the display device should also be able to present 2D images without degradation in image quality. Moreover, the thickness of the display panel, including the backlight unit (BLU), and the power consumption should not increase significantly, especially for mobile applications. In this paper, we present a 10.1-inch 2D/3D switchable display using an integrated single light-guide plate (LGP) that adds neither thickness nor power consumption. The integrated single LGP, which has a wedge shape, is composed of prismatic line patterns on its top surface and straight bump patterns on its bottom surface. The prismatic line patterns, made up of micro prisms with a light aperture on one side, act as the slit apertures of parallax barriers in 3D mode. In 2D mode, the linear bump patterns arranged along the vertical direction scatter the light uniformly together with a reflective film placed under the LGP. LED light sources are arranged edge-lit on the left and right sides of the LGP for 2D mode and on the thicker top edge of the LGP for 3D mode, so the display modes can be switched simply by alternately turning the LED light sources on and off. Applying the integrated single LGP, we realized a 2D/3D switchable display prototype with a 10.1-inch tablet panel of WQXGA resolution (2,560 × 1,600) that shows light-field 3D images with 27-ray mapping as well as 2D images. The prototype achieves brightness uniformity of over 70% in both 2D and 3D modes.
Light-field displays are strong candidates among glasses-free 3D displays because they show real 3D images without reducing image resolution. A light-field display can create light rays using a large number of projectors to express natural 3D images. In multi-projector light-field displays, however, compensation is critical because each projector differs in its characteristics and mounting position. In this paper, we present an enhanced 55-inch, 100-Mpixel multi-projection 3D display consisting of 96 micro projectors for immersive, natural 3D viewing in medical and educational applications. To achieve enhanced image quality, color- and brightness-uniformity compensation methods are applied along with an improved projector configuration and a real-time calibration process for projector alignment. For color-uniformity compensation, the images projected by each projector are captured by a camera placed in front of the screen, the pixel counts of the RGB color intensities of each captured image are analyzed, and the RGB intensity distributions are adjusted using their respective maximum values. For brightness-uniformity compensation, each light-field ray emitted from a screen pixel is modeled by a radial basis function; compensating weights for each screen pixel are calculated and transferred to the projection images through the mapping between screen and projector coordinates. Finally, brightness-compensated images are rendered for each projector. The resulting display shows improved color and brightness uniformity and consistently high 3D image quality.
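The brightness-compensation idea above, namely model per-pixel luminance and derive multiplicative weights that flatten it, can be sketched as follows. The Gaussian radial basis and the flatten-to-dimmest-pixel rule are illustrative assumptions, not the paper's exact fitting procedure.

```python
import numpy as np

def gaussian_rbf(dist, sigma):
    # Radial basis value for a light-field ray as a function of the
    # distance from its center on the screen (illustrative kernel choice).
    return np.exp(-(dist ** 2) / (2.0 * sigma ** 2))

def brightness_weights(luminance, floor=1e-6):
    """Per-pixel multiplicative weights that scale every screen pixel
    toward the dimmest pixel's level, so the compensated map is uniform.
    Weights stay in (0, 1] and would then be mapped into each projector's
    image via the screen-to-projector coordinate mapping."""
    target = luminance.min()
    return target / np.maximum(luminance, floor)
```

Flattening to the minimum sacrifices peak brightness for uniformity; a deployed system might instead target a percentile to trade a little residual nonuniformity for luminance.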
In this paper, we propose a GPU-based parallel-processing method for a two-step wave-field projection method. In the first step, the 2D projection of the wave field of a 3D object is calculated at a reference depth by the radial symmetric interpolation (RSI) method; in the second step, it is translated along the depth direction by a Fresnel transform. In each step, the object points are divided into small groups that are processed in parallel on CUDA cores. Experimental results show that the proposed method is 5,901 times faster than the Rayleigh–Sommerfeld method for one million object points at full-HD SLM resolution.
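The depth-direction translation in the second step can be sketched with a standard single-FFT Fresnel transfer-function propagation. This is a textbook formulation, not necessarily the paper's exact implementation, and the sampling parameters are illustrative.

```python
import numpy as np

def fresnel_propagate(field, wavelength, pitch, z):
    """Propagate a sampled 2D complex wave field over distance z using the
    Fresnel transfer function H = exp(-i*pi*lambda*z*(fx^2 + fy^2)).
    `pitch` is the spatial sampling interval in meters."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    H = np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because |H| = 1 everywhere, the step conserves energy on the sampled grid, and propagating by -z undoes it exactly, which makes the function easy to sanity-check.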
In this paper, we present a fast hologram-pattern generation method that overcomes the accumulation problem of point-source-based methods. The proposed method consists of two steps. In the first step, the 2D projections of the wave field of a 3D object are calculated at multiple reference depth planes by the radial symmetric interpolation (RSI) method. In the second step, each 2D wave field is translated to the SLM plane by an FFT-based algorithm, and the final hologram pattern is obtained by summing them. The effectiveness of the method is demonstrated by computer simulation and optical experiment. Experimental results show that the proposed method is 3,878 times faster than the analytic method and 226 times faster than the RSI method.
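The radial-symmetric-interpolation idea that both steps build on, evaluating the point hologram's Fresnel phase once along a 1D radius and duplicating it over the 2D grid by table lookup, can be sketched as below. The grid size, bin count, and nearest-bin lookup are illustrative assumptions.

```python
import numpy as np

def point_hologram_rsi(n, pitch, wavelength, z):
    """Fresnel zone pattern of a single on-axis point at depth z, built by
    radial symmetric interpolation: a 1D radial phase table is computed
    once and spread over the n x n grid by nearest-bin lookup, avoiding a
    per-pixel evaluation of the complex exponential."""
    c = (n - 1) / 2.0
    y, x = np.mgrid[0:n, 0:n]
    r = np.hypot((x - c) * pitch, (y - c) * pitch)
    # 1D table: one complex sample per radius bin (oversampled 4x here).
    n_bins = 4 * n
    radii = np.linspace(0.0, r.max(), n_bins)
    table = np.exp(1j * np.pi * radii ** 2 / (wavelength * z))
    idx = np.minimum(np.round(r / r.max() * (n_bins - 1)).astype(int),
                     n_bins - 1)
    return table[idx]
```

The saving comes from replacing n² transcendental evaluations with a table of a few multiples of n entries plus cheap index lookups, which is also what makes the computation regular enough to parallelize well.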
In this paper, we present a fast hologram-pattern generation method based on radial symmetric interpolation. In the spatial domain, the concentric redundancy of each point hologram is removed by replacing the wave-propagation calculation with interpolation and duplication. In the temporal domain, a background mask representing stationary points is used to remove temporal redundancy in hologram video: frames are grouped within a predefined time interval, each group shares the background information, and the hologram pattern at each time step is updated only for the foreground part. The effectiveness of the proposed algorithm is demonstrated by simulation and experiment.
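The background-mask step can be sketched as a per-point stationarity test over a group of frames; points flagged as background share one precomputed hologram contribution, and only the remaining foreground points are recomputed per frame. The (T, N, 3) array layout and the tolerance are illustrative assumptions.

```python
import numpy as np

def background_mask(frames, tol=1e-9):
    """Given object-point positions for a group of T frames, shaped
    (T, N, 3), return a boolean mask over the N points marking those that
    stay stationary (within `tol`) throughout the group."""
    ref = frames[0]
    # A point is background only if it matches the first frame in every
    # frame and every coordinate.
    return np.all(np.abs(frames - ref) <= tol, axis=(0, 2))
```

Per frame, the hologram would then be the shared background pattern plus the contributions of the points where the mask is False.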
In this paper, an inversion-free subpixel rendering method that uses eye tracking in a multiview display is proposed. A multiview display exhibits an inversion problem when one eye of the user falls on the main viewing region while the other eye falls on a side region. In the proposed method, the subpixel values are rendered adaptively according to the user's eye position to solve the inversion problem. In addition, to enhance the 3D resolution without color artifacts, a subpixel rendering algorithm using subpixel-area weighting is proposed instead of rendering with pixel values. In the experiments, 36-view images were displayed using active subpixel rendering with the eye-tracking system on a four-view display.
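One way to picture the inversion-free reassignment is to snap each subpixel's nominal view index into the non-inverted span between the tracked eye views. This simplified, non-wrapping sketch is an interpretation of the idea, not the paper's exact rule, and the view indices are illustrative.

```python
def assign_view(nominal_view, left_eye_view, right_eye_view):
    """If a subpixel's nominal view index lies outside the span covered by
    the tracked left/right eye views, snap it to the nearer boundary so
    neither eye ever receives a reversed (pseudoscopic) stereo pair."""
    lo, hi = sorted((left_eye_view, right_eye_view))
    if lo <= nominal_view <= hi:
        return nominal_view
    return lo if abs(nominal_view - lo) < abs(nominal_view - hi) else hi
```

A full implementation would apply this per R/G/B subpixel and blend neighboring view images with area weights rather than assigning whole pixels.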