The current limitation in pixel count of a single spatial light modulator (SLM) is one of the technological hurdles that must be overcome to produce a holographic 3-D display with a large image size. A conventional approach is to tile subholograms that are predivided from a reconfigurable computer-generated hologram (CGH) with a high pixel count. We develop a new approach to achieve a 50 Mpixel display by tiling reconstructed subholograms computed from a predivided 3-D object. The tiling is done using a two-axis scanning mirror device with a new tiling sequence. A shutterless system design is also implemented to enable effective tiling of the subholograms. A high-speed digital micromirror device (DMD) with 1920×1080 pixels, operating at 6 kHz, is used to reconstruct the subholograms. Our current system shows the potential to tile up to 120 subholograms, which corresponds to about 240 Mpixels. The approach we demonstrate provides a scalable path toward a gigapixel-level display in the future.
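To make the scaling concrete, the figures quoted above can be checked with a rough arithmetic sketch. This is an illustration only: it assumes one DMD frame per subhologram tile and that the full tiled image refreshes once per pass of the scanning mirror, neither of which is specified here.

```python
# Back-of-the-envelope tiling arithmetic for the quoted figures.
# Assumptions (not from the described system): one DMD frame per
# subhologram tile, and the full tiled image refreshes once per
# pass of the two-axis scanning mirror.

DMD_PIXELS = 1920 * 1080        # pixels per subhologram (one DMD frame)
DMD_FRAME_RATE_HZ = 6000        # frame rate of the high-speed DMD

def tiled_megapixels(num_tiles: int) -> float:
    """Total pixel count of the tiled display, in megapixels."""
    return num_tiles * DMD_PIXELS / 1e6

def refresh_rate_hz(num_tiles: int) -> float:
    """Full-image refresh rate if tiles are shown sequentially."""
    return DMD_FRAME_RATE_HZ / num_tiles

for n in (24, 120):
    print(f"{n:3d} tiles -> {tiled_megapixels(n):6.1f} Mpixels, "
          f"~{refresh_rate_hz(n):.0f} Hz full-image refresh")

# 24 tiles  -> ~49.8 Mpixels, consistent with the ~50 Mpixel display
# 120 tiles -> ~248.8 Mpixels (the ~240 Mpixel potential), ~50 Hz refresh
```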
This work outlines a system in which a stereo camera may effectively track a user's face and hands in three dimensions.
Given this information, a method for controlling objects in three dimensions is also described. The system begins by
finding faces. If more than one face is found in the image, the algorithm uses depth information to isolate the face that is
closest to the camera. The algorithm then gathers information about the user's skin tone by examining the detected face region. For much of the processing, only the hue and saturation components are used, after applying an RGB to HSV transformation to the camera output. The skin tone information, in tandem with depth, is then used to isolate the user's hands and track them in three dimensions. To serve as an effective interface, the system uses the positions of the two hands relative to the user's face. To move an object up in three dimensions, for example, the user simply positions both hands above his or her face. Similar commands allow the user to apply translations in three dimensions, as well as yaw and roll rotations when desired.
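The abstract does not include code; a minimal OpenCV sketch of the skin-tone step it describes (build a hue/saturation histogram from the detected face region, then back-project it to highlight skin-colored regions such as the hands) might look like the following. The cascade file, histogram bin counts, and the use of the largest detected face in place of the depth-selected closest face are assumptions for illustration, and the stereo depth gating is omitted.

```python
import cv2
import numpy as np

# Illustrative sketch (not the authors' code): model skin tone from the
# detected face region using hue and saturation only, then back-project
# that model onto the frame to highlight skin-colored regions (the hands).

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def skin_probability(frame_bgr: np.ndarray) -> np.ndarray:
    """Return a per-pixel skin-likelihood map built from the detected face."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return np.zeros(frame_bgr.shape[:2], dtype=np.uint8)

    # The described system selects the closest face using stereo depth;
    # here the largest detection is used as a stand-in.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])

    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)   # camera output is RGB/BGR
    face_hsv = hsv[y:y + h, x:x + w]

    # Hue/saturation histogram of the face acts as the skin-tone model.
    hist = cv2.calcHist([face_hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

    # Back-projection marks pixels elsewhere in the frame, such as the
    # hands, that match the face's hue/saturation distribution.
    return cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)
```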
This work describes the process of developing a 3D Virtual Reality (VR) DJ simulation game designed for a stereoscopic display. Using a DLP projector and shutter glasses, the user of the system plays a game in which he or she is a DJ in a night club. The night club's music is playing, and the DJ is "scratching" in correspondence with this music. Much in the flavor of Guitar Hero or Dance Dance Revolution, a virtual turntable presents information about how the user should perform. The user needs only a small set of hand gestures, corresponding to turntable scratch movements, to play the game. As the music plays, a series of moving arrows approaching the DJ's turntable instructs the user when and how to perform the scratches.
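The abstract does not detail the scoring logic, but the arrow mechanic it describes resembles a standard rhythm-game timing check: a scratch gesture counts as a hit if it lands within a small window around an arrow's target beat. The sketch below is a hypothetical illustration of that idea; the gesture names and window width are assumptions, not details from the work.

```python
from dataclasses import dataclass

# Hypothetical rhythm-game timing check: each arrow has a target time and
# a required scratch gesture, and a gesture counts as a hit if it arrives
# within a small window of that time. Names and tolerances are assumptions.

HIT_WINDOW_S = 0.15  # assumed tolerance around the arrow's target beat

@dataclass
class Arrow:
    target_time: float   # seconds into the track when the scratch is due
    gesture: str         # e.g. "scratch_forward" or "scratch_back"
    hit: bool = False

def register_gesture(arrows: list[Arrow], gesture: str, now: float) -> bool:
    """Mark the nearest matching, unhit arrow as hit if within the window."""
    candidates = [a for a in arrows
                  if not a.hit and a.gesture == gesture
                  and abs(a.target_time - now) <= HIT_WINDOW_S]
    if not candidates:
        return False  # miss: wrong gesture or outside the timing window
    nearest = min(candidates, key=lambda a: abs(a.target_time - now))
    nearest.hit = True
    return True

# Example: a short arrow chart and two player gestures.
chart = [Arrow(1.0, "scratch_forward"), Arrow(1.5, "scratch_back")]
print(register_gesture(chart, "scratch_forward", 1.08))  # True  (within window)
print(register_gesture(chart, "scratch_back", 1.80))     # False (too late)
```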