In this paper, we expand the eyebox of a lensless holographic near-eye display (NED) using a passive eyebox-replication technique that combines a spatial light modulator (SLM) with a holographic optical element (HOE). In holographic NEDs, the space-bandwidth product (SBP) of the SLM determines the exit-pupil dimensions and the corresponding eyebox size. The base eyebox is replicated in the horizontal direction by using the horizontal high-order diffraction terms of the SLM under spherical-wave illumination together with a multiplexed HOE combiner. The HOE combiner serves as a see-through reflective screen for the projected holographic virtual image and is fabricated under a recording condition based on two divergent spherical waves. When a digital blazed grating and a digital lens phase are added to the computed phase hologram sent to the SLM, two spatially separated horizontal high-order diffraction terms with identical intensity and information can be used for eyebox expansion. Expanding the eyebox does not sacrifice the field of view (FOV): the divergent spherical-wave illumination removes the usual tradeoff between FOV and eyebox size. The astigmatism introduced during HOE fabrication was compensated by pre-correcting the target image in the computer-generated hologram computation algorithm. The experimental results confirm that the proposed prototype is a simple and effective way to achieve distortion-free reconstruction of 3D virtual images and eyebox extension in a lensless holographic NED.
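As a concrete illustration of this phase-addition step, the following minimal Python sketch (my illustration, not the authors' code) adds a digital blazed grating and a digital lens phase to a computed hologram; the SLM resolution, pixel pitch, wavelength, grating period, and focal length are assumed values.

```python
import numpy as np

W, H = 1920, 1080           # assumed SLM resolution
pitch = 8e-6                # assumed SLM pixel pitch [m]
wavelength = 532e-9         # assumed laser wavelength [m]
f_lens = 0.15               # assumed digital-lens focal length [m]
grating_period = 4 * pitch  # assumed blazed-grating period [m]

x = (np.arange(W) - W / 2) * pitch
y = (np.arange(H) - H / 2) * pitch
X, Y = np.meshgrid(x, y)

# Random phase stands in for the actual computed hologram of the target image.
phi_cgh = np.random.uniform(0, 2 * np.pi, (H, W))

# Digital blazed grating: linear phase ramp that steers energy into a
# horizontally shifted diffraction order.
phi_grating = 2 * np.pi * X / grating_period

# Digital lens: quadratic phase that refocuses the replicated orders
# (sign convention depends on the optical setup).
phi_dlens = -np.pi * (X**2 + Y**2) / (wavelength * f_lens)

# Phase pattern sent to the SLM, wrapped to [0, 2*pi).
phi_slm = np.mod(phi_cgh + phi_grating + phi_dlens, 2 * np.pi)
```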
We propose an advanced holographic see-through display system with 3D/2D switchable modes based on a liquid-crystalline lens array and a one-shot learning model. The liquid-crystalline lens array switches its role, acting as either a lens array or plain glass, according to the state of the electrical polarizer. When the electrical polarizer is switched to the on-state, the camera captures an image of a real-world object, and the one-shot learning model estimates depth data from the captured image. The 3D model is then regenerated from the color and depth images, the elemental image array is generated and displayed on the microdisplay, and the liquid-crystalline lens array reconstructs it as a 3D image. When the electrical polarizer is in the off-state, the captured image of the real-world object is displayed directly on the microdisplay, while the liquid-crystalline lens array simply transmits it to the holographic combiner. The experimental results confirmed that the proposed system is an advantageous way to implement a 3D/2D switchable holographic see-through system.
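A minimal sketch of the polarizer-driven mode switch described above; `depth_net`, `make_eia`, and `microdisplay` are hypothetical stand-ins for the one-shot learning model, the elemental-image generator, and the display driver, none of which are APIs from the paper.

```python
import torch

def display_frame(color: torch.Tensor, polarizer_on: bool,
                  depth_net, make_eia, microdisplay):
    if polarizer_on:
        # LC lens array acts as a lens array: 3D mode.
        with torch.no_grad():
            depth = depth_net(color)        # one-shot model estimates depth
        frame = make_eia(color, depth)      # regenerate 3D model -> elemental images
    else:
        # LC lens array acts as plain glass: 2D mode.
        frame = color                       # captured image shown directly
    microdisplay.show(frame)                # hypothetical display call
```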
KEYWORDS: Cameras, Image acquisition, 3D modeling, 3D acquisition, 3D displays, Integral imaging, Image processing, Deep learning, 3D image processing, Digital cameras
In this report, we propose an advanced integral imaging 3D display system using a simplified high-resolution light field image acquisition method. The simplified acquisition method uses a minimized number of cameras (three cameras placed along the vertical axis) to acquire high-resolution perspectives of a full-parallax light field image. Because the number of cameras is minimized, the number of captured perspectives (3×N) does not match the specifications of the 3D integral imaging display unit (N×N elemental lenses). An additional intermediate-view elemental image generation method could be applied along the vertical axis; however, generating as many vertical viewpoints as there are elemental lenses is a complex process that requires heavy computation and long processing time. Therefore, we use a custom-trained deep learning model to generate the intermediate information between the vertical viewpoints: the corrected perspectives are input to the model, which analyzes them and renders the remaining intermediate viewpoints along the vertical axis, expanding 3×N to N×N. The elemental image array is generated from the newly generated N×N perspectives via the pixel rearrangement method (sketched below); finally, a full-parallax, natural-view 3D visualization of the real-world object is displayed on the integral imaging 3D display unit.
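A minimal sketch of the pixel-rearrangement step, assuming one common mapping convention (elemental image (i, j) collects pixel (i, j) from every viewpoint); the exact index flipping depends on the optical setup.

```python
import numpy as np

def views_to_eia(views: np.ndarray) -> np.ndarray:
    """Rearrange perspective views into an elemental image array.

    views: array of shape (N, N, h, w, 3) -- N x N viewpoints of h x w pixels.
    Returns an EIA of shape (h * N, w * N, 3), where elemental image (i, j)
    is the N x N block of pixel (i, j) taken from every view.
    """
    N, _, h, w, c = views.shape
    # Reorder axes to (i, u, j, v, c) so that output[i*N+u, j*N+v] = views[u, v, i, j].
    eia = views.transpose(2, 0, 3, 1, 4).reshape(h * N, w * N, c)
    return eia
```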
A waveguide-type full-color 3D augmented-reality (AR) display system based on the integral imaging technique and a holographic mirror array is proposed. In the experiment, the AR capability was successfully verified: the real-world scene and the reconstructed virtual full-color 3D image were observed simultaneously.
This report proposes a three-dimensional/two-dimensional switchable augmented-reality display system using a liquid-crystalline lens array and an electrical polarizer. A depth camera connected to the proposed augmented-reality display system acquires the three-dimensional or two-dimensional information of real objects. The dual-function liquid-crystalline lens array switches its function according to the polarizing direction of the electrical polarizer. The overall procedure of the proposed system is as follows: the depth camera captures either depth and color images or only a color image, depending on the polarizer switch, and the three-dimensional or two-dimensional image is displayed accordingly on the augmented-reality display system. This allows the three-dimensional and two-dimensional modes to be switched automatically. In the two-dimensional mode, the captured color image of the real object is displayed directly. In the three-dimensional mode, the elemental image array is generated from the depth and color images and reconstructed as a three-dimensional image by the liquid-crystalline microlens array of the proposed augmented-reality display system. Although the proposed system cannot achieve real-time display in the three-dimensional mode, the direction-inversed computation method generates the elemental image arrays of the real object in a reasonably short time.
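A minimal sketch of the direction-inversed idea, under a simple pinhole-lens model with assumed pitch, gap, and pixel values; each colored 3D point recovered from the depth/color images is projected through every elemental lens onto the display plane.

```python
import numpy as np

N = 30          # assumed lenses per side
P = 1.0e-3      # assumed lens pitch [m]
G = 3.0e-3      # assumed gap between lens array and display [m]
PX = 0.1e-3     # assumed display pixel pitch [m]

def splat_point(X, Y, Z, color, eia):
    """Write one colored 3D point (lens-array coords, Z > 0 in front)
    into the EIA, a (300, 300, 3) array covering the N*P-wide panel."""
    for i in range(N):
        for j in range(N):
            lx = (i - N / 2 + 0.5) * P          # lens center
            ly = (j - N / 2 + 0.5) * P
            # Ray from the point through the lens center, extended to the display.
            dx = G * (lx - X) / Z
            dy = G * (ly - Y) / Z
            if abs(dx) < P / 2 and abs(dy) < P / 2:  # stays behind its own lens
                u = int((lx + dx + N * P / 2) / PX)
                v = int((ly + dy + N * P / 2) / PX)
                eia[v, u] = color
```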
An improved holographic waveguide-type two-dimensional/three-dimensional (2D/3D) convertible augmented-reality (AR) display system using a liquid-crystalline polymer microlens array (LCP-MA) with an electro-switching polarizer is proposed. The LCP-MA offers a small focal ratio, high fill factor, low driving voltage, and fast switching speed; it utilizes a well-aligned reactive mesogen on the imprinted reverse shape of the lens together with a polarization-switching layer. In the holographic waveguide, two holographic optical element (HOE) films are located at the input and output parts of the waveguide. These two HOEs function as mirrors and magnifiers, reflecting the light beams transmitted through the waveguide to the observer's eye as the reconstructed images. The proposed system retains the common advantages of holographic AR displays, such as light weight and thin size, and the observer can see the 2D/3D convertible images, selected according to the direction of the electro-switching polarizer, together with the real-world scene at the same time. In the experiment, the AR capability was successfully verified: the real-world scene and the reconstructed 2D/3D images were observed simultaneously.
A holographic stereogram printing system is a valuable method for producing natural-view holographic three-dimensional images. The 3D information of the object, such as parallax and depth, is encoded into elemental holograms (hogels) and recorded onto the holographic material via laser illumination during the holographic printing process. However, because of the low resolution of the hogels, the quality of the printed image is reduced. Therefore, in this paper, we propose a real-object-based, fully automatic, high-resolution light field image acquisition system using a one-directional moving camera array and a smart motor-driven stage. The proposed acquisition system includes multiple interconnected cameras in a one-dimensional configuration, a multi-functional smart motor and controller, and computer-based integration between the cameras and the smart motor. After the user inputs the main parameters, such as the number of perspectives and the distance/rotation between neighboring perspectives, the multiple cameras automatically capture high-resolution perspectives of the real object by shifting and rotating on the smart motor-driven stage, and the captured images are used for hogel generation in the holographic stereogram printing system. Finally, a natural-view holographic three-dimensional visualization of the real object is output on the holographic material through the holographic stereogram printing system. The proposed method was verified through optical experiments, and the results confirmed that the proposed one-directional moving camera array-based light field system can be an effective way to acquire light field images for holographic stereogram printing.
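A minimal sketch of the automated capture loop; the stage and camera calls (`move_to`, `wait_until_settled`, `capture`) are hypothetical stand-ins for the actual motor and camera SDKs, which the paper does not name, and the parameter values are assumed.

```python
import time

NUM_PERSPECTIVES = 100   # user-input number of stops along the track (assumed)
STEP_MM = 2.0            # user-input shift between neighboring perspectives (assumed)
STEP_DEG = 0.2           # user-input rotation between neighboring perspectives (assumed)

def capture_light_field(stage, cameras):
    """Shift/rotate the smart motor stage and trigger every camera at each stop."""
    views = []
    for k in range(NUM_PERSPECTIVES):
        stage.move_to(shift_mm=k * STEP_MM, angle_deg=k * STEP_DEG)  # hypothetical call
        stage.wait_until_settled()                                   # hypothetical call
        time.sleep(0.05)                 # small settle margin before exposure
        views.append([cam.capture() for cam in cameras])  # one perspective column per stop
    return views
```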
We propose a full-color three-dimensional holographic waveguide-type augmented-reality display system based on integral imaging using a holographic optical element-mirror array (HOE-MA). As in a conventional holographic waveguide, two holographic optical elements are utilized as in- and out-couplers located at the input and output parts of the waveguide. The main roles of these films are to reflect the light beams coming from the microdisplay into the waveguide and to reflect the three-dimensional image, reconstructed by the HOE-MA, toward the observer's eye while transmitting the real-world scene. In the experiment, the augmented-reality capability was successfully verified: the real-world scene and the reconstructed virtual three-dimensional image were observed simultaneously.
We propose an effective method of digital content generation for a holographic printer using the integral imaging technique. To print three-dimensional (3D) holographic visualizations of a given object, a printed hologram consisting of an array of sub-holograms (hogels) must be generated before the hardware system of the holographic printer is run. Digital content generation consists mainly of three parts. The first part is the acquisition of the 3D point-cloud object; the second part encodes the directional information extracted from the 3D object. In the third part, the array of hogels is generated by direction-inversed computer-generated integral imaging plus phase modulation, which improves the content generation; each hogel is displayed on a reflective phase-only spatial light modulator (SLM) and recorded onto the holographic material one by one in sequence while a motorized X-Y translation stage shifts the material. Thus, a full-parallax holographic stereogram (HS) is printed on the holographic material, and a 3D visualization of the object is successfully observed. Numerical simulations and optical reconstructions verified the effective computation and the image quality, respectively.
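A minimal sketch of turning one hogel's directional content into a phase-only SLM pattern via a random diffuser phase and an inverse FFT, a common hogel-encoding approach; it illustrates the general idea rather than the paper's exact direction-inversed CGII plus phase-modulation pipeline.

```python
import numpy as np

def hogel_phase(hogel_intensity: np.ndarray, rng=np.random.default_rng(0)):
    """hogel_intensity: 2D array of directional ray intensities for one hogel.
    Returns the phase-only pattern to display on the reflective SLM."""
    amp = np.sqrt(hogel_intensity)
    # Random diffuser phase spreads the energy so a phase-only pattern suffices.
    diffuser = np.exp(1j * 2 * np.pi * rng.random(amp.shape))
    # Inverse FFT maps the angular spectrum back to the hogel plane.
    field = np.fft.ifft2(np.fft.ifftshift(amp * diffuser))
    return np.angle(field)
```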
We propose a three-dimensional (3D) holographic waveguide-type augmented-reality (AR) system based on integral imaging using a mirror array. As in a conventional holographic waveguide, two holographic optical element (HOE) films are utilized as in- and out-couplers located at the input and output parts of the waveguide. The in-coupler HOE reflects the light beams coming from the microdisplay into the waveguide, and the out-coupler reflects the light beams transmitted through the waveguide to the observer's eye. On the basis of the main advantages of the conventional holographic waveguide structure, such as light weight and thin size, the proposed system offers an additional critical advantage: the observer can see realistic 3D visualizations reconstructed by the out-coupler HOE-mirror array (HOE-MA), instead of simple two-dimensional images, together with the real-world scene at the same time. In the experiment, the AR capability was successfully verified: the real-world scene and the reconstructed virtual 3D image were observed simultaneously.
In this paper, we propose a method for effectively enhancing the resolution of the reconstructed image in a mobile three-dimensional integral imaging display system. A mobile 3D integral imaging display system is a valuable way to acquire the 3D information of real objects and display realistic 3D visualizations of them on a mobile display. Here, the 3D color and depth information are acquired by a 3D scanner, and the elemental image array (EIA) is generated virtually from the acquired 3D information. However, the resolution of the EIA is quite low because of the low resolution of the acquired depth information, and this limits the resolution of the final reconstructed image. To enhance the resolution of the reconstructed images, the EIA resolution should be improved by increasing the number of elemental images, because the resolution of the reconstructed image depends on the number of elemental images. For comfortable observation, the interpolation process should be iterated two or three times; however, if it is iterated more than twice, the reconstructed image is damaged and the quality degrades considerably. To improve the resolution of the reconstructed images while maintaining image quality, we apply a convolutional super-resolution algorithm instead of the interpolation process. Finally, 3D visualizations with higher resolution and fine quality are displayed on the mobile display.
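A minimal SRCNN-style sketch of replacing repeated interpolation with a learned super-resolution pass; the architecture and the `upscale_eia` helper are illustrative assumptions, not the paper's trained network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.feat = nn.Conv2d(3, 64, kernel_size=9, padding=4)  # feature extraction
        self.map = nn.Conv2d(64, 32, kernel_size=1)             # nonlinear mapping
        self.rec = nn.Conv2d(32, 3, kernel_size=5, padding=2)   # reconstruction

    def forward(self, x):
        x = F.relu(self.feat(x))
        x = F.relu(self.map(x))
        return self.rec(x)

def upscale_eia(eia: torch.Tensor, model: SRCNN, scale: int = 2) -> torch.Tensor:
    """eia: (1, 3, H, W) tensor in [0, 1]. One learned pass replaces one
    interpolation iteration; the network restores detail the resize blurs."""
    coarse = F.interpolate(eia, scale_factor=scale, mode="bicubic",
                           align_corners=False)
    return model(coarse).clamp(0, 1)
```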
KEYWORDS: 3D image reconstruction, 3D image processing, Integral imaging, 3D displays, 3D acquisition, 3D modeling, 3D scanning, Cameras, Image quality, Mobile devices
In this paper, we focus on improving the reconstructed image quality of a mobile three-dimensional display using computer-generated integral imaging. A three-dimensional scanning method is applied instead of capturing a depth image in the acquisition step, so much more accurate three-dimensional view information (parallax and depth) can be acquired than with the previous mobile three-dimensional integral imaging display, and the proposed system can reconstruct clearer three-dimensional visualizations of real-world objects. The three-dimensional scanner, operated by the user, acquires the three-dimensional parallax and depth information of the real-world object. The acquired data are then organized, a virtual three-dimensional model is generated from them, and the elemental image array (EIA) is generated from the virtual model. Additionally, to enhance the resolution of the elemental image array, an intermediate-view elemental image generation method is applied: five intermediate-view elemental images are generated among each set of four original neighboring elemental images according to the pixel information, so the resolution of the generated elemental image array is enhanced to almost four times the original (sketched below). When the three-dimensional visualizations of real objects are reconstructed from the resolution-enhanced elemental image array, the quality is improved considerably compared with the previous mobile three-dimensional imaging system. The proposed method is verified by optical experiment.
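A minimal sketch of the intermediate-view expansion, using a simple neighbor blend as a stand-in for the paper's pixel-information-based interpolation: each 2×2 block of originals gains five new elemental images, growing an N×N grid to (2N-1)×(2N-1), roughly four times as many images.

```python
import numpy as np

def expand_eia(views: np.ndarray) -> np.ndarray:
    """views: (N, N, h, w, 3) float array of elemental images."""
    N = views.shape[0]
    M = 2 * N - 1
    out = np.zeros((M, M) + views.shape[2:], dtype=views.dtype)
    out[::2, ::2] = views                                   # originals
    out[1::2, ::2] = 0.5 * (views[:-1] + views[1:])         # vertical midpoints
    out[::2, 1::2] = 0.5 * (views[:, :-1] + views[:, 1:])   # horizontal midpoints
    out[1::2, 1::2] = 0.25 * (views[:-1, :-1] + views[:-1, 1:]
                              + views[1:, :-1] + views[1:, 1:])  # diagonal midpoints
    return out
```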
A point light source (PLS) display with enhanced viewing angle (VA) is proposed. The maximum VA of a conventional PLS display is equal to the propagation angle of the PLS, so a light-source array (3×3) was used to enlarge the propagation angle of the PLS in the horizontal and vertical directions. The number of converging elemental image points increases due to the large propagation angle of the PLS; thus, the VA of the integrated point was enhanced. From the experimental results, the VA of the proposed method was 2.6 times larger than the maximum VA of a conventional PLS display.
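A hedged reading of the geometry behind this limit: if each point light source forms at the focal plane of an elemental lens of pitch $p$ and focal length $g$, its propagation (divergence) angle is bounded by the lens aperture,

$$\theta_{\mathrm{PLS}} \approx 2\arctan\!\left(\frac{p}{2g}\right),$$

so adding a 3×3 array of illumination sources creates laterally shifted point sources behind each lens, and the union of their divergence cones widens the achievable viewing angle (2.6× in the reported experiment).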
KEYWORDS: 3D modeling, 3D image reconstruction, Cameras, 3D image processing, Integral imaging, Data modeling, Clouds, Imaging systems, Image quality, 3D displays
An integral imaging system using a polygon model for a real object is proposed. After depth and color data of the real object are acquired by a depth camera, the grid of the polygon model is converted from the initially reconstructed point cloud model. The elemental image array is generated from the polygon model and directly reconstructed. The polygon model eliminates the failed picking areas between the points of a point cloud model, so the quality of the reconstructed 3-D image is significantly improved. The theory is verified experimentally, and higher-quality images are obtained.
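A minimal sketch of one way to obtain a grid-connected polygon model from a depth map (my illustration; the paper converts the reconstructed point cloud itself): pixels become vertices and each pixel quad becomes two triangles, so the gaps between point samples are covered by faces. The intrinsics `fx, fy, cx, cy` are assumed calibration values.

```python
import numpy as np

def depth_to_mesh(depth: np.ndarray, fx: float, fy: float, cx: float, cy: float):
    """depth: (H, W) array in meters. Returns (vertices, faces) of a
    grid-connected triangle mesh."""
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W]
    # Back-project every pixel to 3D with the pinhole model.
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    vertices = np.stack([X, Y, depth], axis=-1).reshape(-1, 3)

    idx = np.arange(H * W).reshape(H, W)
    a, b = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    c, d = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    # Two triangles per pixel quad (quads with invalid depth could be masked out).
    faces = np.concatenate([np.stack([a, b, c], 1), np.stack([b, d, c], 1)])
    return vertices, faces
```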
KEYWORDS: Clouds, 3D image processing, Cameras, 3D displays, Adaptive optics, 3D modeling, Image quality, Digital micromirror devices, Image resolution, Mirrors
A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays; unlike previously presented 360-degree displays, however, the proposed system displays a 3D image generated from a real object in the 360-degree viewing zone. To display the real object over 360 degrees, multiple depth cameras are utilized to acquire depth information around the object. The 3D point-cloud representations of the real object are then reconstructed from the acquired depth information. Using a special point-cloud registration method, the multiple virtual 3D point-cloud representations captured by the individual depth cameras are combined into a single synthetic 3D point-cloud model, and the elemental image arrays are generated from the newly synthesized model according to the given anamorphic optic system's angular step. The theory has been verified experimentally, and the results show that the proposed 360-degree integral-floating display can be an excellent way to display a real object in the 360-degree viewing zone.
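A minimal sketch of merging the per-camera point clouds into one model, assuming the rigid camera-to-world transforms are already known (the paper's special registration method estimates the alignment; here it is taken as given from calibration).

```python
import numpy as np

def merge_clouds(clouds, extrinsics):
    """clouds: list of (Ni, 3) arrays, one per depth camera.
    extrinsics: list of 4x4 camera-to-world matrices (assumed calibrated)."""
    merged = []
    for pts, T in zip(clouds, extrinsics):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
        merged.append((homo @ T.T)[:, :3])               # into the common frame
    return np.vstack(merged)
```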
A viewing-angle-enhanced integral imaging (II) system using multi-directional projections and an elemental image (EI) resizing method is proposed. In this method, each elemental lens of the micro-lens array collects the multi-directional illuminations of multiple EI sets and produces multiple point light sources (PLSs) at different positions in the focal plane, and the positions of the PLSs can be controlled by the projection angles. The viewing zone consists of multiple diverging ray bundles and is wider than that of the conventional method, thanks to the multi-directional projections of multiple EI sets, whereas a conventional system produces its viewing zone from only a single projected EI set. Hence, the viewing angle of the reconstructed image is enhanced.
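A hedged geometric relation behind this control (assuming a collimated projection at incidence angle $\theta$ onto an elemental lens of focal length $g$): the PLS forms in the focal plane at a lateral offset

$$\Delta x = g\,\tan\theta$$

from the on-axis focus, so each projection direction contributes a shifted PLS and the union of their diverging ray bundles widens the viewing zone.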
KEYWORDS: Integral imaging, Displays, LCDs, Cameras, Parallel processing, Parallel computing, Image processing, 3D image processing, 3D image reconstruction, 3D displays
A depth camera has been used to capture the depth and color data of real-world objects. As integral imaging display systems become widely used, the elemental image array for the captured data needs to be generated and displayed on a liquid crystal display. We propose a real-time integral imaging display system that uses image processing to simplify the optical arrangement and graphics processing unit (GPU) parallel processing to reduce computation time. The proposed system generates elemental images at a rate of more than 30 fps with a resolution of 1204×1204 pixels, where the size of each display-panel pixel is 0.1245 mm, for an array of 30×30 lenses, each 5×5 mm.
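A minimal GPU sketch of the parallel idea, written with Numba CUDA as an assumed stand-in for the paper's GPU code: one thread computes one elemental-image pixel by tracing its ray through the lens center to a single reference object plane. The pixel and lens sizes come from the abstract; the gap and object-plane depth are assumptions.

```python
import numpy as np
from numba import cuda

PX = 0.1245e-3    # display pixel size [m] (from the abstract)
P = 5.0e-3        # lens pitch [m] (from the abstract)
G = 5.0e-3        # assumed gap between panel and lens array [m]
D = 50.0e-3       # assumed central object-plane depth [m]
EI = int(P / PX)  # ~40 pixels per elemental image

@cuda.jit
def eia_kernel(color, eia):
    x, y = cuda.grid(2)
    if x >= eia.shape[1] or y >= eia.shape[0]:
        return
    lens_cx = (x // EI + 0.5) * P        # center of this pixel's lens [m]
    lens_cy = (y // EI + 0.5) * P
    dx = (x + 0.5) * PX - lens_cx        # pixel offset from lens center
    dy = (y + 0.5) * PX - lens_cy
    # The ray through the lens center reaches the object plane scaled by D/G.
    ox = lens_cx - dx * D / G
    oy = lens_cy - dy * D / G
    u = min(max(int(ox / PX), 0), color.shape[1] - 1)
    v = min(max(int(oy / PX), 0), color.shape[0] - 1)
    for c in range(3):
        eia[y, x, c] = color[v, u, c]

# Host-side launch (sizes from the abstract; color_img is a uint8 HxWx3 array):
# eia = cuda.device_array((1204, 1204, 3), dtype=np.uint8)
# threads = (16, 16); blocks = (1204 // 16 + 1, 1204 // 16 + 1)
# eia_kernel[blocks, threads](cuda.to_device(color_img), eia)
```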
KEYWORDS: 3D image processing, 3D displays, Mirrors, Integral imaging, Projection systems, Fresnel lenses, Digital micromirror devices, 3D vision, Diffusers, Image resolution
We propose a full-parallax integral imaging display with a 360-degree horizontal viewing angle. Two-dimensional (2D) elemental images are projected by a high-speed DMD projector and integrated into a three-dimensional (3D) image by a lens array. An anamorphic optic system tailors the horizontal and vertical viewing angles of the integrated 3D images to obtain high angular ray density in the horizontal direction and a large viewing angle in the vertical direction. Finally, a mirror screen that rotates in synchronization with the DMD projector presents the integrated 3D images in the desired directions. Full-parallax 3D images with a 360-degree horizontal viewing angle, providing both monocular and binocular depth cues, can be achieved by the proposed method.
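A hedged worked example of the synchronization requirement (with illustrative numbers, not values from the paper): if the system presents one set of elemental images per angular step $\Delta\theta$ while the mirror spins at $R$ revolutions per second, the DMD must sustain

$$f_{\mathrm{DMD}} = R\cdot\frac{360^{\circ}}{\Delta\theta}$$

frames per second; for example, $\Delta\theta = 1^{\circ}$ at $R = 20\ \mathrm{rev/s}$ already demands 7200 fps, which is why a high-speed DMD projector is required.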