A real-time method for rendering integral photography (IP) that uses the extended fractional view technique is described.
To create an IP image with CG technology, hundreds of still pictures must be rendered from different camera positions and then synthesized with separate software, because CG applications lack the special rendering mode that the extended fractional view approach requires when the directions of the rays are not uniform. Considerable processing time is therefore needed to synthesize a single IP image, making the method unsuitable for real-time applications such as games. To address this, new high-speed rendering software was written in C++.
It runs on a typical Windows PC. Its main function is to trace the path of each ray, which is emitted from each subpixel
of a liquid crystal display (LCD) and refracted by a fly's eye lens. A subpixel is used instead of a pixel because a pixel
on a color LCD is made up of three subpixels, one each for red, green and blue, and their positions are different. If there
an object lies along the extension of the ray, the coordinates of the intersection are calculated, and visibility
is determined by z-buffering. If the intersection is visible, its color is sampled and written to that subpixel of the LCD. I
confirmed that a simple 3D moving object consisting of several polygons could be rendered at more than two
frames per second, and that a full-parallax moving image could be obtained by using IP.
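The per-subpixel ray-tracing step described above can be sketched as follows. All geometry, pitch values, and function names are illustrative assumptions, not the authors' implementation; the scene is reduced to a single flat plane in one dimension, so the z-buffer test degenerates to a single plane intersection.

```python
# Illustrative sketch of per-subpixel ray tracing for an IP display.
# Pitches and geometry are hypothetical, not the paper's values.

SUBPIXEL_PITCH = 0.1   # mm, pitch of one RGB subpixel on the LCD
LENS_PITCH = 1.0       # mm, pitch of the fly's eye lenslets
GAP = 3.0              # mm, distance from the LCD plane to the lens plane

def trace_subpixel(x_sub, channel_offset):
    """Return (origin_x, dx, dz) for the ray emitted by a subpixel.

    The ray leaves the centre of the nearest lenslet; its direction is
    set by the offset between the (channel-shifted) subpixel and that
    lenslet centre, modelling the refraction by the lens.
    """
    x = x_sub + channel_offset                  # R/G/B subpixels are shifted
    lens_x = round(x / LENS_PITCH) * LENS_PITCH # centre of nearest lenslet
    return lens_x, lens_x - x, GAP

def render_subpixel(x_sub, channel_offset, scene_z, scene_color):
    """Intersect the subpixel's ray with a plane scene_z mm in front of
    the lens and sample its colour.  With only one surface, z-buffering
    reduces to this single intersection test."""
    origin_x, dx, dz = trace_subpixel(x_sub, channel_offset)
    t = scene_z / dz                            # ray parameter at the plane
    return scene_color(origin_x + t * dx)

# Example: sample a 1-D checkerboard scene 50 mm in front of the lens.
color = render_subpixel(0.25, 0.0, 50.0,
                        lambda x: 255 if int(x) % 2 == 0 else 0)
```

In the full method, this loop runs once per subpixel of the panel; the sketch above shows only the per-ray geometry.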
We have developed a new integral photography (IP) system that incorporates a hexagonal fly's eye lens sheet to create a
fractional view. In a fractional view, the ratio between the lens and pixel pitches of the IP image is intentionally chosen
to be a non-integer so that the directions of all the rays emitted from each pixel on the LCD panel located behind the
sheet become quasi-random. Creating a fractional view simultaneously increases the effective number of individual
views and the resolution of each view. Furthermore, initial production costs can be decreased because the fractional view
can be created using inexpensive off-the-shelf lens sheets together with a variety of common flat panel displays that have
different pixel pitches. The difference in pitch is compensated for using computer software. The problem is that
fractional views were originally only used with lenticular-lens based displays that have a horizontal parallax; therefore,
some extension is necessary if fractional views are to be used with displays that have a full parallax. Furthermore, a
typical flat panel display, such as an LCD, consists of RGB subpixels that are in positions that are slightly shifted relative
to each other. We have developed a way of extending existing fractional views in order to cope with the full parallax
obtained by a fly's eye lens sheet and with the pixel shift. We demonstrated that good binocular vision can be obtained
using two hexagonal fly's eye lens sheets that were manufactured independently of any particular LCD.
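The effect of choosing a non-integer lens-to-pixel pitch ratio can be illustrated with a short sketch. The pitch values below are hypothetical, not those of the actual display; integer micrometres are used to keep the arithmetic exact.

```python
# Why a non-integer lens/pixel pitch ratio yields quasi-random ray
# directions.  Pitch values are hypothetical.

PIXEL_PITCH = 300          # um
LENS_PITCH_INT = 1200      # exactly 4 pixels per lens (integer ratio)
LENS_PITCH_FRAC = 1270     # ~4.233 pixels per lens (fractional ratio)

def phases(lens_pitch_um, n_lenses):
    """Offset of each lens centre from the nearest pixel boundary, as a
    fraction of the pixel pitch.  This offset determines the direction
    of the rays emitted under that lens."""
    return [((i * lens_pitch_um) % PIXEL_PITCH) / PIXEL_PITCH
            for i in range(n_lenses)]

print(phases(LENS_PITCH_INT, 5))   # identical for every lens: few views
print(phases(LENS_PITCH_FRAC, 5))  # varies lens to lens: many views
```

With the integer ratio every lens sees the same subpixel layout, so the same small set of view directions repeats; with the fractional ratio the phase drifts from lens to lens, which is what increases the effective number of views.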
We developed an integral photography (IP) system that is suitable for small-lot production and is, in principle, applicable
to animation. IP is an ideal 3D display method because users can see stereoscopic 3D images from arbitrary directions.
However, IP is less popular than lenticular displays, which use only horizontal parallax, probably because the initial cost of
designing and producing a fly's eye lens is very high. We used two technologies to solve this problem. First, we used two
mutually perpendicular lenticular sheets instead of a fly's eye lens sheet. A lenticular sheet is much less expensive than a
fly's eye lens because it is easier to produce. Second, we used the fractional view method, in which the ratio of lens pitch
to pixel pitch is not limited to simple integer ratios, which means that no custom-made lenticular lens is necessary.
However, the original fractional view method is applicable only to horizontal parallax. We made it applicable to both
horizontal and vertical parallaxes by using two mutually perpendicular lenticular sheets. In addition, we developed a
simple technique for generating dedicated synthesized images for IP.
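Under the crossed-lenticular arrangement described above, each axis carries its own fractional-view phase. The following sketch, with hypothetical pitches, shows how a 2-D lens cell formed by the two perpendicular sheets maps to a pair of per-axis offsets:

```python
# Two perpendicular lenticular sheets act like a fly's eye array:
# each axis gets an independent fractional-view phase.
# Pitch values are hypothetical.

PIXEL_PITCH = 300     # um
H_LENS_PITCH = 1270   # um, vertical lenticular (gives horizontal parallax)
V_LENS_PITCH = 1330   # um, horizontal lenticular (gives vertical parallax)

def cell_phase(ix, iy):
    """Fractional offsets (horizontal, vertical) of lens cell (ix, iy)
    relative to the pixel grid; each offset sets the emitted ray angle
    on its own axis."""
    return (((ix * H_LENS_PITCH) % PIXEL_PITCH) / PIXEL_PITCH,
            ((iy * V_LENS_PITCH) % PIXEL_PITCH) / PIXEL_PITCH)
```

Because the two pitches need not match each other or the pixel grid, the horizontal and vertical phases drift independently, extending the one-axis fractional view to full parallax.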
KEYWORDS: 3D image processing, 3D displays, Cameras, Video, Image quality, Stereoscopic displays, Internet, Image processing, 3D video streaming, Local area networks
This paper presents a technique to display real-time 3-D images captured by web cameras on the stereoscopic display of a personal computer (PC) using screen pixel access. Images captured by two side-by-side web cameras are sent through the Internet to a PC and displayed in two conventional viewers for moving images. These processes are carried out independently for the two cameras. The image data displayed in the viewers reside in the video memory of the PC. Our method uses this video-memory data: the two web-camera images are read from the video memory, composed into a 3-D image, and the result is written back to the video memory. A 3-D image can be seen if the PC being used has a 3-D display. We developed an experimental system to evaluate the feasibility of this technique. The web cameras captured images at up to 640 × 480 pixels, compressed them with motion JPEG, and sent them over a LAN. Using this system, we confirmed that, over a broadband network such as ADSL, the 3-D image had almost the same quality as a conventional TV image.
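The compositing step can be sketched as follows. Plain Python lists stand in for the video-memory reads, and the column-interleaving pattern is an assumption about the display format, not the paper's specification:

```python
# Hedged sketch of the compositing step: two camera frames are read
# back (plain arrays stand in for video memory here) and interleaved
# column by column into one stereoscopic frame.  The real interleaving
# pattern depends on the particular 3-D display.

def compose_stereo(left, right):
    """Column-interleave two equal-size frames (lists of pixel rows):
    even columns come from the left view, odd columns from the right."""
    assert len(left) == len(right)
    out = []
    for lrow, rrow in zip(left, right):
        out.append([lrow[x] if x % 2 == 0 else rrow[x]
                    for x in range(len(lrow))])
    return out

# Tiny stand-ins for the two 640 x 480 camera frames.
left  = [["L"] * 4 for _ in range(2)]
right = [["R"] * 4 for _ in range(2)]
frame = compose_stereo(left, right)   # rows alternate L R L R
```

In the described system this composed frame would then be written back into video memory in place of the two viewer windows.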