A normally developing child achieves emmetropia in youth and maintains it; the cornea, lens, and axial length of the eye grow in an astonishingly coordinated manner. In recent years, research has shown that this coordinated growth process is a visually controlled closed loop. The mechanism has been studied particularly in animals, where it was found that the growth of the axial length of the eyeball is controlled by image-focus information from the retina. It was also shown that this visually guided growth-control mechanism can become maladjusted, resulting in ametropia. It has thereby been proven that short-sightedness, for example, is not caused by heredity alone but can be acquired under certain visual conditions. These conditions are shown to be similar to those of viewing stereoscopic displays, where the normal accommodation-convergence coupling is broken. The potential of viewing stereoscopic displays to damage the eyes is evaluated, and different viewing methods for stereoscopic displays are compared in this respect. Moreover, guidelines are given on how environment and display conditions should be set, and which users should be chosen, to minimize the risk of eye damage.
Despite the many benefits that stereoscopic displays are known to have, there is evidence that they can cause discomfort to the viewer. The experiment reported in this paper was motivated by the need to quantify the potential subjective discomfort of viewing stereoscopic TV images. Observers provided direct subjective ratings of eye strain and quality in response to stereoscopic still images that varied in camera separation, convergence distance, and focal length. Display duration of the images was varied between 1 and 15 seconds. Before and after the experiment, observers filled out a symptom checklist to assess any subjective discomfort resulting from the experiment as a whole. Reported eye strain was on average around 'perceptible, but not annoying' for natural disparities. As disparity values increased, reported eye strain increased to 'very annoying' while quality ratings leveled off and eventually dropped. This effect was most pronounced for the stereoscopic images that were produced using a short convergence distance, and may be attributed to an increase in keystone distortion in this condition. No significant effect of display duration was found. The results of the symptom checklist showed a slight increase in reported negative side effects, with most observers reporting only mild symptoms of discomfort. Finally, our results showed that subjective stereoscopic image quality can be described as a function of reported eye strain and perceived depth.
This communication describes an augmented reality application that merges real traffic scenes with their virtual models, then records a driver's actions as he drives through the resulting realistic simulation with an HMD and a simulated car cockpit. Two calibrated cameras record stereoscopic video from a vehicle driving on a real road. The velocity and the relative and absolute position of the car are recorded synchronously with the corresponding video frame using DGPS and an odometer. The virtual world is built from a known topographic map of the traffic scenario with a resolution of better than 10 cm. The information collected by the sensors, together with a computer-vision feature-location process on the image frame, establishes the geometric correspondence between the two worlds. The paper shows that camera calibration and data filtering allow a car moving at 70 km/h to be positioned in the virtual world with a resolution of better than 10 cm. The application focuses on recording the behavior of a driver who sees situations that could eventually occur in a road lane. Important behaviors include how a driver perceives the position and orientation of traffic signals, how much attention he pays to them, and his reaction time when a traffic light changes at a critical instant or a car suddenly brakes in front of him.
The bandwidth required to transmit stereoscopic video signals is nominally twice that required for standard, monoscopic images. An effective method of reducing the required bandwidth is to code the two video streams asymmetrically. We assessed the impact of this bandwidth-reduction technique on image quality and overall sensation of depth. Images from the right-eye stream were spatially filtered to half and quarter resolution. Subsequently, the images were processed using an MPEG-2 codec at bit rates of 6, 2, and 1 Mbit/s. Subjects assessed image quality and depth using a double-stimulus, continuous-quality-scale method. It was found that perceived depth was relatively robust to spatial filtering and bit-rate reduction. Image quality was affected more by bit-rate reduction than by spatial filtering and, at the lower bit rates, ratings were much higher for stereoscopic than for non-stereoscopic sequences. The results indicate that asymmetrical coding of stereoscopic sequences can be an effective means of reducing bandwidth for storage and transmission.
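As a rough illustration of the filtering step (the study's exact filter is not specified here), the sketch below low-passes one view by block averaging and re-expands it to display size: factor=2 gives half resolution, factor=4 quarter resolution. The MPEG-2 encoding stage would then run on the filtered stream.

```python
import numpy as np

def filter_view(view, factor=2):
    """Low-pass one eye's view to 1/factor resolution and re-expand it,
    as in asymmetric coding where one stream carries less spatial detail.
    `view` is a 2-D grayscale array; the box filter is an assumption."""
    h, w = view.shape
    h2, w2 = h - h % factor, w - w % factor
    blocks = view[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    low = blocks.mean(axis=(1, 3))                  # filter + decimate
    return np.kron(low, np.ones((factor, factor)))  # expand back for display
```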
During vergence eye movements, the effective separation between the two eyes varies because the nodal point of each eye is offset from the center of rotation. As a result, the projected distance of a binocularly presented virtual object changes as the observer converges and diverges. A model of eye and stimulus position illustrates that if an observer converges toward a binocular virtual stimulus that is fixed on the display, the projected stimulus will shift outward, away from the observer. Conversely, if the observer diverges toward a binocular virtual stimulus that is fixed on the display, the projected stimulus will shift inward. For example, if an observer diverges from 25 cm to 300 cm, a binocular virtual stimulus projected at 300 cm will shift inward to 241 cm. Accurate depiction of a fixed stimulus distance in a binocular display requires that the stimulus position on the display surface be adjusted in real time to compensate for the observer's eye movements.
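The geometry lends itself to a short numerical sketch. Everything below is an assumption for illustration (6.4 cm interocular distance, a 0.6 cm nodal-point offset in front of the center of rotation, a 50 cm screen), so the numbers will not reproduce the paper's 241 cm figure exactly; the direction of the effect, however, comes out the same: half-images drawn for a near-converged observer project inward of the intended distance once the observer diverges.

```python
import numpy as np

# Illustrative parameters (assumed, not the paper's): interocular distance
# and the offset of the nodal point in front of the center of rotation, cm.
IPD, NODAL = 6.4, 0.6

def nodal_point(fix_dist, side):
    """Nodal point of one eye (side = +1 right, -1 left) while fixating a
    midline target fix_dist cm away; the eye rotates about its center."""
    cx = side * IPD / 2
    gaze = np.array([-cx, fix_dist])
    gaze /= np.linalg.norm(gaze)
    return np.array([cx, 0.0]) + NODAL * gaze

def projected_distance(screen, target, draw_fix, view_fix):
    """Place half-images on the screen so rays from the nodal points at
    fixation draw_fix intersect at distance target, then return where the
    fixed half-images project once the observer re-fixates at view_fix."""
    nd = nodal_point(draw_fix, +1)
    s = (screen - nd[1]) / (target - nd[1])   # ray parameter at the screen
    x_img = nd[0] * (1 - s)                   # right half-image position
    nv = nodal_point(view_fix, +1)            # nodal point after vergence
    t = nv[0] / (nv[0] - x_img)               # midline crossing of new ray
    return nv[1] + t * (screen - nv[1])

# Diverging from 25 cm toward a 300 cm target makes the fixed half-images
# project nearer than intended (exact value depends on assumed geometry).
print(projected_distance(screen=50, target=300, draw_fix=25, view_fix=300))
```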
A set of fast algorithms for image adjustment and stereo matching is discussed, directed at improving image quality, quantitative measurement of retinal topography, and automatic detection of different pathologies. Automatic evaluations of the retina are usually performed with scanning laser systems, which use multiple laser scans to render 3D volumes and extract depth information for the retinal features. Similar results can be achieved with regular fundus cameras. The proposed method has three steps. The most important is the adjustment of color, brightness, and contrast of the stereo images: the mean and variance of each color component are calculated and equalized along the longitudes of the eyeball. The next step is the epipolar-line adjustment of the two images; the algorithm is based on fast estimation of the epipolar geometry in the left and right parts of the images and a further nonlinear line-to-line adjustment of the stereo pair. In the third step, occlusion errors are eliminated and a disparity map is calculated, by combining classical correlation matching along the adjusted epipolar lines with a new technique based on a double search in the occlusion areas.
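The paper equalizes mean and variance along the longitudes of the eyeball; the sketch below conveys the idea with a simpler global per-channel match of the right image to the left (all names are hypothetical).

```python
import numpy as np

def equalize_pair(left, right):
    """Match each color channel of the right image to the left image's
    mean and variance; a global stand-in for the paper's longitude-wise
    equalization. Inputs are H x W x 3 uint8 arrays."""
    out = np.empty_like(right, dtype=float)
    for c in range(3):
        l = left[..., c].astype(float)
        r = right[..., c].astype(float)
        out[..., c] = (r - r.mean()) / (r.std() + 1e-6) * l.std() + l.mean()
    return np.clip(out, 0, 255).astype(np.uint8)
```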
A 3D display system for medical images using computer-generated integral photography (IP) has been developed. A new, fast 3D-rendering algorithm overcomes the difficulties that have prevented practical application of computer-generated IP, namely the cost of computation and the pseudoscopic-image problem. The display system requires only a personal computer, a liquid crystal display (LCD), and a fly's-eye lens (FEL). Each point in 3D space is reconstructed by the convergence of rays from many pixels on the LCD through the FEL. As the number of such points is limited by the low resolution of the LCD, the algorithm computes the coordinate of the best point for each pixel of the LCD. This reduces computation, performs hidden-surface removal, and solves the pseudoscopic-image problem. In tests of the system, the locations of images projected 10-40 mm from the display were in error by less than 2.5 mm. Both stationary and moving IP images of a colored skull, generated from 3D computed tomography, were projected and could be observed with motion parallax within 10 degrees, both horizontally and vertically, from the front of the display. It can be concluded that the simplicity of design and the geometrical accuracy of projection give this system significant advantages over other 3D display methods.
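A minimal sketch of the per-pixel ray assignment follows; lens pitch, gap, and pixel counts are assumed values, not the system's. Each LCD pixel gets the ray from the pixel through its lenslet's optical center, and the renderer would then color the pixel with the nearest scene point along that ray, which is what performs the hidden-surface removal.

```python
import numpy as np

# Assumed geometry: lenslet pitch, LCD-to-lens gap, pixels per lenslet.
LENS_PITCH = 1.0       # mm
GAP = 3.0              # mm
PIXELS_PER_LENS = 10

def pixel_ray(i_lens, j_lens, u, v):
    """Ray for LCD pixel (u, v) under lenslet (i_lens, j_lens): it starts
    at the pixel, passes through the lenslet's optical center, and enters
    display space; rendering picks the nearest scene point on this ray."""
    pitch_px = LENS_PITCH / PIXELS_PER_LENS
    cx = (i_lens + 0.5) * LENS_PITCH                   # lenslet center, z = 0
    cy = (j_lens + 0.5) * LENS_PITCH
    px = i_lens * LENS_PITCH + (u + 0.5) * pitch_px    # pixel, z = -GAP
    py = j_lens * LENS_PITCH + (v + 0.5) * pitch_px
    origin = np.array([cx, cy, 0.0])
    direction = np.array([cx - px, cy - py, GAP])      # through the center
    return origin, direction / np.linalg.norm(direction)
```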
While many tools for preoperative planning and simulation of surgical interventions are available, the surgical procedure itself still lacks computer-based assistance. In this paper we present an approach to closing this gap using Augmented Reality techniques. The idea is to use a see-through head-mounted display to superimpose virtual data on the patient. This technique enables surgeons to visualize and re-use preoperatively calculated data directly in the operating field. An experimental hardware setup for this intraoperative presentation of image data has been built at the Institute for Process Control and Robotics (IPR) at Universitaet Karlsruhe (TH); the application poses hard accuracy challenges. The main technical steps are calibration, tracking, and registration, and we present our solutions for these machine-vision-related tasks. Afterwards we describe how the data is supplied and prepared for superimposition, as well as the presentation process itself. The paper closes with a discussion of clinical evaluation and future work.
To deal with the correspondence problem in stereo imaging, a new approach is presented that finds disparity information on a newly defined dissimilarity map (DSMP). Based on an image-formation model of stereo images and some statistical observations, two constraints and four assumptions are adopted. In addition, a few heuristic criteria are developed to define a unique solution. All of these constraints, assumptions, and criteria are applied to the DSMP to find the correspondence. First, the Epipolar Constraint, the Valid Pairing Constraint, and the Lambertian Surface Assumption are applied to the DSMP to locate the Low Dissimilarity Zones (LDZs). Then the Opaque Assumption and the Minimum Occlusion Assumption are applied to the LDZs to obtain the admissible LDZ sets. Finally, the Depth Smoothness Assumption and some other criteria are applied to the admissible LDZ sets to produce the final answer. The focus of this paper is to identify the constraints and assumptions in the stereo correspondence problem and properly convert them into executable procedures on the DSMP. In addition to its ability to estimate occlusion accurately, this approach works well even when the commonly used monotonic ordering assumption is violated. Simulation results show that occlusions can be properly handled and that the disparity map can be calculated with a fairly high degree of accuracy.
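A minimal per-scanline construction of such a dissimilarity map might look as follows; absolute intensity difference is an assumed metric, and the paper's exact definition may differ. The Low Dissimilarity Zones then appear as valleys in the resulting (x, d) array.

```python
import numpy as np

def dissimilarity_map(left_row, right_row, max_disp):
    """DSMP[x, d] compares left pixel x with right pixel x - d; the
    epipolar constraint keeps the search one-dimensional per scanline."""
    n = left_row.size
    dsmp = np.full((n, max_disp + 1), np.inf)
    for d in range(max_disp + 1):
        dsmp[d:, d] = np.abs(left_row[d:].astype(float)
                             - right_row[:n - d].astype(float))
    return dsmp  # Low Dissimilarity Zones show up as valleys
```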
Stereoscopic disparity plays an important role in the processing and compression of 3D imagery; for example, dense disparity fields are used to reconstruct intermediate images. Although for small camera baselines dense disparity can be reliably estimated using gradient-based methods, this is not the case for large baselines, where the underlying assumptions are violated. Block-matching algorithms work better, but they are likely to get trapped in a local minimum due to the increased search space. An appropriate way to estimate large disparities is to use feature points; however, since feature points are unique, they are also sparse. In this paper, we propose a disparity estimation method that combines the reliability of feature-based correspondence methods with the resolution of dense approaches. In the first step, we find feature points in the left and right images using the Harris operator. In the second step, we select those feature points that allow one-to-one left-right correspondence based on a cross-correlation measure. In the third step, we use the computed correspondence points to control the computation of dense disparity via regularized block matching that minimizes matching and disparity-smoothness errors. The approach has been tested on several large-baseline stereo pairs with encouraging initial results.
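The first two steps might look roughly like the sketch below, which uses OpenCV's Harris detector and a mutual-best normalized cross-correlation test for the one-to-one selection. Patch size, thresholds, and point counts are assumptions, and the third, regularized block-matching stage is omitted.

```python
import cv2
import numpy as np

HALF = 7  # patch half-size (assumed)

def harris_points(gray, n=300):
    """Local maxima of the Harris response, kept away from the borders."""
    r = cv2.cornerHarris(np.float32(gray), blockSize=3, ksize=3, k=0.04)
    peaks = (r == cv2.dilate(r, np.ones((5, 5), np.uint8))) & (r > 0.01 * r.max())
    ys, xs = np.nonzero(peaks)
    keep = ((ys >= HALF) & (ys < gray.shape[0] - HALF)
            & (xs >= HALF) & (xs < gray.shape[1] - HALF))
    ys, xs = ys[keep], xs[keep]
    order = np.argsort(r[ys, xs])[::-1][:n]
    return list(zip(ys[order], xs[order]))

def ncc(a, b):
    a, b = a - a.mean(), b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else -1.0

def mutual_matches(gl, gr):
    """One-to-one correspondences: keep a pair only if each point is the
    other's best-scoring partner under patch cross-correlation."""
    patch = lambda g, p: np.float32(g[p[0]-HALF:p[0]+HALF+1,
                                      p[1]-HALF:p[1]+HALF+1])
    pl, pr = harris_points(gl), harris_points(gr)
    score = np.array([[ncc(patch(gl, a), patch(gr, b)) for b in pr] for a in pl])
    lr, rl = score.argmax(1), score.argmax(0)
    return [(pl[i], pr[lr[i]]) for i in range(len(pl))
            if rl[lr[i]] == i and score[i, lr[i]] > 0.8]
```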
This paper presents a new technique for generating multiview video from a two-view video sequence. For each stereo frame in the two-view sequence, our system first estimates the corresponding point of each pixel by template matching, and then constructs the disparity maps required for view interpolation. To generate accurate disparity maps, we use adaptive-template matching, where the template size depends on the local variation of image intensity and on knowledge of object boundaries. Both the disparity maps and the original stereo videos are then compressed to reduce storage size and transfer time. Based on the disparity, our system can generate, in real time, a stereo video of the desired perspective view by interpolation or extrapolation from the original views, in response to the head movement of the user. Compared to the traditional method of capturing multiple perspective videos directly, view interpolation eliminates the problems caused by the need to synchronize multiple video inputs and by the large amount of video data that must be stored and transferred.
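The interpolation step can be sketched as a forward warp of one view by a scaled disparity field, where alpha is the normalized position of the virtual viewpoint (0 = left view, 1 = right view; values outside this range extrapolate). This simplified version leaves holes at occlusions, which a production system would fill from the other view.

```python
import numpy as np

def interpolate_view(left, disparity, alpha):
    """Forward-warp the left view by alpha * disparity to synthesize an
    intermediate (or extrapolated) viewpoint; `disparity` is per-pixel
    and `left` is a grayscale array in this sketch."""
    h, w = left.shape
    out = np.zeros_like(left)
    filled = np.zeros((h, w), dtype=bool)
    xs = np.arange(w)
    for y in range(h):
        xt = np.clip(np.round(xs + alpha * disparity[y]).astype(int), 0, w - 1)
        out[y, xt] = left[y]
        filled[y, xt] = True
    return out, filled  # unfilled pixels mark occlusion holes
```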
A stereo system designed and used for the measurement of 3D coordinates within metric stereo image pairs is presented. First, the motivation for developing a system for evaluating stereo images is given: as the use and availability of digital metric images rapidly increases, corresponding equipment for the measuring process is needed. Systems developed up to now are either very specialized ones built on high-end graphics workstations, with pricing to match, or simple ones with restricted measuring functionality. A new conception is shown that avoids special high-end graphics hardware while providing the required measuring functionality. The presented stereo system is based on PC hardware equipped with a graphics board and uses an object-oriented programming technique. The specific needs of a measuring system are shown, together with the corresponding requirements the system has to meet. The key role of OpenGL is described: it supplies some elementary graphics functions that are directly supported by graphics boards and thus provides the performance needed. Further important aspects such as modularity and hardware independence, and their value for the solution, are discussed. Finally, some sample functions concerned with image display and handling are presented in more detail.
Over the past decade, the number of electronic display technologies available to consumers has risen dramatically, and the capabilities of existing technologies have been expanded. This proliferation of choices provides new opportunities for visual stereo presentation, but also new challenges. The methods of implementing stereo on an electronic display, optimized for the capabilities of the original displays, may no longer be the best choices. Features such as response time, frame rate, aspect ratio, sync timing, pixel registration, and temporal modulation of grayscale and color can strongly influence the process of selecting an optimum presentation format for a given display technology. Display performance issues such as brightness, contrast, flicker, image distortion, defective pixels, and mura are more critical in 3D imagery than in 2D. Susceptibility to burn-in limits the implementation choices for a display that is to be used for both 3D and 2D applications. Resolution and frame rate establish the overall capability for representing depth, and also establish the performance requirements for the engine providing the 3D material. This paper surveys the capabilities and characteristics of traditional displays such as the CRT and the LCD panel, and a broad assortment of newer display technologies, including color plasma, field emission, micromirror and other reflective systems, and the general classes of microdisplays. The relevance of display characteristics to various stereo presentation formats is discussed, with descriptions of laboratory experimentation to provide hard numbers. Recommendations are made regarding the stereo formats to be used with various display technologies, and the display technologies to be used with various stereo formats.
A diffuser with large horizontal and vertical diffusion angles is a key component of the screen of a projection display. An autostereoscopic projection display, by contrast, needs a diffuser with small horizontal and large vertical diffusion. A commercial LSD composed of cylindrical-based grains can be used to meet this requirement. Nevertheless, as the diffused intensity profile of an LSD is Gaussian-like, an LSD in many cases causes either ghost images or black stripes when applied to the screen of an autostereoscopic display. Lenticular plates can produce a near flat-top diffusing profile when the divergence angle is small; for example, a lenticular plate with a 5-degree divergence can be shown in theory to have only 2.1 percent non-uniformity. However, as the curvature of such a lenticular is very small, fabricating the plate becomes very difficult. This paper reports a novel approach for modifying a lenticular plate that originally had a large divergence angle into one with a desired smaller divergence angle, without destroying the surface quality and while maintaining good uniformity. A lenticular plate fabricated using this approach was used as the screen diffuser of a projection-based autostereoscopic display, and the luminance of the screen viewed from different angles was measured.
An analysis of the basic approaches to flat-panel autostereoscopic 3D display is presented, together with a discussion of the application of LCDs in this field. We show that the diffractive performance of the barriers is of particular importance in the design of parallax-barrier-type displays. A near-field diffraction model is used to analyze the detailed illumination structure of the output and can be used to assess viewing freedom and cross-talk considerations. A comparison between front and rear parallax-barrier displays is given and set against experimental results. Recent progress in the design of low-cost flat-panel 3D displays is described, including a novel viewer-position indicator and 2D/3D reconfigurable systems using novel patterned retarder elements, together with the performance and manufacturing considerations for these elements.
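A near-field model of this kind can be sketched with angular-spectrum propagation. All physical values below (slit width, pitch, wavelength, barrier-to-pixel gap, sampling) are assumed, chosen only to be plausible for a parallax-barrier LCD; the computed intensity shows how diffraction softens the barrier's shadow structure, which is what drives cross-talk and viewing-freedom estimates.

```python
import numpy as np

def propagate(u0, wavelength, dx, z):
    """Angular-spectrum propagation of a 1-D scalar field over distance z."""
    fx = np.fft.fftfreq(u0.size, d=dx)
    arg = 1.0 - (wavelength * fx) ** 2
    kz = (2 * np.pi / wavelength) * np.sqrt(np.clip(arg, 0.0, None))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0)   # drop evanescent waves
    return np.fft.ifft(np.fft.fft(u0) * H)

# Assumed barrier: 65 um slits on a 130 um pitch, 550 nm light, propagated
# across a 1.1 mm barrier-to-pixel gap; dx is the sampling interval.
dx, wl = 1e-6, 550e-9
x = (np.arange(8192) - 4096) * dx
barrier = ((x % 130e-6) < 65e-6).astype(float)
intensity = np.abs(propagate(barrier, wl, dx, 1.1e-3)) ** 2
```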
A miniature electrically addressed spatial light modulator (EASLM) using standard CMOS processing comprises a crystalline-silicon transistor array underlying a layer of ferroelectric liquid crystal and is intended for use in optical information processing as a high-frame-rate input device, but its resolution is insufficient for modern video display. We have investigated a display system that tiles the image from the EASLM onto a pixelated optically addressed spatial light modulator (OASLM) using a binary phase hologram. The system consists of a ferroelectric liquid crystal EASLM with 320 X 240 pixels, a high-frame-rate video signal controller, a 532 nm laser as the light source of the video projector, a binary phase hologram for 4 X 4 image multiplication, and a 4 X 4 pixelated OASLM, together with the optics for projecting video images. The threshold sensitivity of the OASLM is about 10 µW/cm2 and its spatial resolution is about 50 lp/mm. The binary phase hologram is designed to fan the projected image out into an asymmetric 3 X 4 array, matching the ratio of horizontal to vertical size, so that it is stored on one part of the pixelated OASLM. The measured diffraction efficiency of the hologram is quite close to the theoretical value, but the zeroth-order diffracted beam is not removed completely. The displayed video image has a very high resolution of 1280 X 960 pixels, or gives a 3D display of 4 X 4 multiviews, depending on the images from the video projector.
We describe the development and construction of a large-screen version of the Cambridge time-multiplexed autostereo display. The new device uses a 50-inch-diagonal spherical mirror in place of the 10-inch Fresnel lens of the original Cambridge color display. A fivefold increase in image luminance has been achieved by replacing sequential color on a single CRT with separate red, green, and blue CRTs. Fifteen views are displayed at 640 X 480 (VGA) resolution with about 250 cd/m2 luminance and a 30 Hz interlaced refresh rate. A 22 mm inter-view separation provides three views between a typical viewer's eyes, giving a smooth stereoscopic effect over a 330 mm wide eye box. Two identical optical systems have been built, allowing simultaneous use of the device by two viewers. The two systems are off-axis with respect to the main mirror, requiring geometric compensation on the CRTs in addition to the normal color convergence. The prototype produces two independent, full-color, large 3D images that can be viewed under normal lighting conditions.
Based on spatially joining the viewing zones of two 8-view TV projection optics without overlap, a 16-view 3D imaging system is designed and its performance is demonstrated. Each 8-view projection optics projects different view images time-sequentially, with its operation synchronized to the other. The system performs well.
Tracking-based autostereoscopic displays need an eye-position detection system. We built a real-time image processing unit that not only achieves a high level of detection reliability but is also cost-effective and can be integrated into a display housing. The device autonomously finds the eyes of the user and tracks them without the need for markers or special lighting. The system consists of a stereo camera and proprietary DSP-based hardware and software. We describe the determination of the parameters of the optical system and the correction of image distortion, as well as algorithmic details. Performance results and future developments are discussed.
An autostereoscopic display for telepresence and tele-operation applications has been developed at the University of Strathclyde in Glasgow, Scotland. The research is a collaborative effort between the Imaging Group and the Transparent Telepresence Research Group, both based at Strathclyde. A key component of the display is the directional screen; a 1.2 m diameter Stretchable Membrane Mirror is currently used. This patented technology enables large-diameter, small f-number mirrors to be produced at a fraction of the cost of conventional optics. Another key element of the present system is an anthropomorphic and anthropometric stereo camera sensor platform. Thus, in addition to mirror development, research areas include sensor-platform design focused on sight, hearing, and smell; telecommunications; display systems for visual, aural, and other senses; tele-operation; and augmented reality. The sensor platform is located at the remote site and transmits live video to the home location. Applications for this technology are as diverse as they are numerous, ranging from bomb disposal and other hazardous-environment applications to tele-conferencing, sales, education, and entertainment.
This paper proposes a volume-hologram-based autostereoscopic 3D display system. To synthesize a multiplexed striped image (MSI), the system uses the grating pattern of a volume hologram; thus, unlike in digitally based systems, the MSI can be synthesized in real time. We analytically describe this procedure, which consists of two steps, recording of the grating pattern and illumination by the object wave, and present some experimental results for a two-view display system.
Conventional stereoscopic displays create conflicts between the convergence and accommodation of the human eyes; if the conflicts are too severe, they may cause visual stress. We propose a new 3D display with great potential to reduce such conflicts and the resulting stress. It presents 3D images by reconstructing points in 3D space as the intersection points of multiple thin light beams; the intersection points are perceived by observers as light sources in 3D space. Multiple beams striking the observer's eye generate 3D images that are more natural for the observer than conventional stereoscopic images. A prototype device was manufactured comprising a 2D LED matrix, a collimator lens, and a spatial light modulator that generates small apertures to convert thick light beams into thin ones. By controlling the distribution and intensity of the light sources in synchronization with the position of the aperture, a number of intersection points of light beams can be formed in space. With this prototype, the accommodation of observers' eyes was measured while they viewed reconstructed 3D images. We found that when observers viewed a reconstructed 3D image with both eyes, their accommodation and convergence corresponded to the depth information, an effect that conventional stereoscopic displays cannot realize.
We propose a 3D video display technique with which multiple viewers can observe 3D images from 360 degrees of arc horizontally without 3D glasses. The technique uses a cylindrical parallax barrier and a 1D light-source array. We developed an experimental display using this technique and demonstrated 3D images observable from 360 degrees of arc horizontally without 3D glasses. Since the technique is based on the parallax panoramagram, the number of parallax views and the resolution are limited by diffraction at the parallax barrier. To avoid these limits, we improved the technique by revolving the parallax barrier, and we have been developing a new experimental display based on this improvement. The display can show cylindrical 3D video images within a diameter of 100 mm and a height of 128 mm. Images are rendered with a resolution of 1254 pixels circumferentially and 128 pixels vertically, refreshed at 30 Hz. Each pixel has a viewing angle of 60 degrees divided into 70 views, so the angular parallax interval of each pixel is less than 1 degree; at such an interval, observers barely perceive the parallax as discrete. The pixels are arranged on a cylindrical surface, so the produced 3D images can be observed from all directions.
It is reported that the efficiency of a teleoperation viewed through stereoscopic images of the working site is lower than that achieved with direct viewing of the site. One presumed cause is the difficulty of image fusion arising from imperfect overlap of the images presented to the two eyes. Through most of a teleoperation, the convergence of the stereoscopic cameras is fixed at a certain point, usually in the middle of the working area. When the plane of the operator's eye-fixation point is far from the plane of the cameras' convergence point, the two images do not overlap perfectly, and fusing them requires considerable effort once the depth difference between the two planes exceeds a certain value. We hypothesized that imperfect overlap of the images on the left and right eyes would decrease teleoperation efficiency, and we examined efficiency under two camera-convergence conditions in a virtual reality (VR) environment. (Condition 1): The convergence point of the cameras follows the point on the target object on which the subject fixates with both eyes, so the overlapped proportion of the two images is always near its maximum. (Condition 2): The convergence point of the cameras is fixed at the center of the hole-base rather than on the target object; the larger the difference between the plane of the cameras' convergence point and the plane of the subject's fixation point, the smaller the overlapped proportion of the two images. We prepared four cylinders and a hole-base with four holes into which the cylinders were inserted. The subject was asked to insert a cylinder into a hole using a 3D mouse allowing free movement in the VR space. We measured completion times and the number of errors in each condition to evaluate operation efficiency. Completion times and error counts under the nearly perfectly overlapped condition were significantly smaller than those under the condition with less-than-maximum overlap. The experiment thus revealed that work performance decreases when the overlap of the two images projected on the two eyes is below its maximum. These results led to the conclusion that, to achieve good teleoperation performance, the convergence point of the cameras should follow the target object on which the subject fixates with both eyes.
A visual telepresence system has been developed at the University of Reading that utilizes eye tracking to adjust the horizontal orientation of the cameras and display system according to the convergence state of the operator's eyes. Slaving the cameras to the operator's direction of gaze enables the object of interest to be centered on the displays. The advantage is that the camera field of view may be decreased to maximize the achievable depth resolution. An active camera system requires an active display system if appropriate binocular cues are to be preserved. For applications that depend critically on veridical perception of an object's location and dimensions, it is imperative to ascertain the contribution of binocular cues to these judgements, because they are directly influenced by camera and display geometry. Using the active telepresence system, we investigated the contribution of ocular convergence information to judgements of size, distance, and shape. Participants performed an open-loop reach and grasp of a virtual object under reduced-cue conditions in which the orientations of the cameras and the displays were either matched or unmatched. Inappropriate convergence information produced weak perceptual distortions and caused problems in fusing the images.
Control of the inter-camera distance (ICD) can be used to change the range of binocular disparities available from a remotely viewed visual scene. Binocular disparity is considered pre-eminent in the control of reaching behavior; one reason is that, once suitably scaled, it can specify metrical depth relationships within a scene, information necessary for planning the transport and grasp phases of a reaching movement. However, whether an observer can take advantage of enhanced disparities to control reaching is unknown. Here we examine the effects of manipulating ICD on reaching movements, with ICDs ranging from 6.5 cm to 26 cm. Typically sized, real-world objects were placed in a scene and reaching performance was assessed. An experimental sequence consisted of three blocks: the first and last used a normal ICD/IOD of 6.5 cm, whereas the middle block used an increased ICD. Larger-than-normal ICDs were found to disrupt reaching performance, with slower peak velocities and smaller grip apertures observed; this was more pronounced for unfamiliar objects. Little evidence of learning was found.
StereoGraphics Corporation introduced the push-pull ZScreen, using π-cell technology, for direct viewing of stereoscopic images on monitors in 1987. A version of the push-pull product continues to be manufactured for use with high-end CRT-based projectors. In 1998, we reintroduced a π-cell modulator of a different design, intended for use with a CRT-based monitor image.
In projection-based virtual reality systems such as the CAVE, users can observe immersive stereoscopic images. To date, most of the images projected onto the screens have been synthesized from polygonal models representing the virtual world, because the resolution and viewing angle of real-time video cameras are not sufficient for such large-screen systems. In this paper, the authors propose a novel approach that avoids this problem by exploiting the human visual system: the resolution at the center of view is very high, while that of the periphery is lower. The authors constructed a four-camera system in which a pair of NTSC cameras is provided for each of the left and right eyes. The four video streams are combined into one stream and captured by a graphics computer, from which wide-angle multi-resolution images are synthesized in real time. We can thus observe wide-angle stereoscopic video in which the resolution at the center of view is high enough. Moreover, this paper proposes another configuration of the four-camera system. Experimental results show that three levels of viewing angle and resolution can be observed through the stereoscopic effect, even though the images for each eye have just two levels. The discontinuities in the multi-resolution images are effectively suppressed by this new lens configuration.
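The compositing stage might be sketched as pasting the high-resolution central stream into the wide-angle stream once both are scaled to a common pixel grid; the two-level structure and all names below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def composite(wide, fovea, center):
    """Overlay the high-resolution central view onto the wide-angle view.
    Both images must already share one pixel scale; `center` is the
    (row, col) where the central view is aimed within the wide image."""
    out = wide.copy()
    h, w = fovea.shape[:2]
    y0 = max(0, center[0] - h // 2)
    x0 = max(0, center[1] - w // 2)
    out[y0:y0 + h, x0:x0 + w] = fovea[:out.shape[0] - y0, :out.shape[1] - x0]
    return out
```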
This paper describes the production of stereoscopic 3D movies of a Spanish monastery for a digital archive. The authors previously produced and presented an experimental virtual museum of Japanese Buddhist art in 1995. The purpose of this study was to produce a counterpart to that work and to examine a simple production method.
We describe advances in the development of the StereoJet process, which provides stereoscopic hardcopy comprising paired back-to-back digital images produced by inkjet printing. The polarizing images are viewed with conventional 3D glasses. Image quality has benefitted greatly from advances in inkjet printing technology. Recent innovations include simplified antighosting procedures, precision pin registration, and the production of large-format display images. Applications include stills from stereoscopic motion pictures, molecular modeling, stereo microscopy, medical imaging, CAD imaging, computer-generated art, and pictorial stereo photography. Accelerated aging tests indicate a longevity of StereoJet images in the range of 35-100 years. The commercial introduction of custom StereoJet through licensed service bureaus began in 1999.
The most popular, and cheapest, teleconferencing systems are those based on a PC that use the Internet as the data communications medium. Advanced systems may use specially constructed and designed rooms, large-screen projection systems, and dedicated communications networks. While such systems are undoubtedly useful, they are not realistic enough for participants to believe that the person they are talking to might actually be in their presence. A number of factors lacking in such teleconferencing systems detract from the realism of the experience: (1) low-resolution images; (2) images not to scale; (3) transmission delays discernible in the images being viewed; (4) images that are 2D and therefore not perceived as lifelike; (5) images seen from a single perspective that do not alter as the viewer's head moves; (6) the impossibility of participants making eye contact. While factors 1, 2, and 3 can be addressed with careful construction and system design, factors 4, 5, and 6 are more difficult to overcome. An autostereoscopic teleconferencing system is described that overcomes all the factors above and provides a highly realistic viewing experience for the teleconferencing participants.
The Multimedia Ambiance Project of TAO has been researching and developing an image space that can be shared by people in different locations and lends a real sense of presence. The image space is based mainly on photo-realistic texture, with deformations applied that depend on human vision characteristics or on pictorial expression. We aim to accomplish shared-space communication through an immersive environment in which the image space is stereoscopically projected onto an arched screen; we refer to this scheme as 'ambiance communication.' The first half of this paper gives a global description of the basic concepts of the project, the display system, and the 3-camera image-capturing system. The latter half focuses on two methods for creating a photo-realistic image space from captured images of a natural environment. One is the divided expression of the long-range view and the ground, which not only renders the ground more realistically but also yields a more natural view when synthesized with other objects, and opens possibilities for purposeful deformations. The other is high-quality panorama generation based on even-odd field integration and image enhancement by a two-dimensional quadratic Volterra filter.
Single-lens stereoscopy is a method in which a stereo pair is derived by sampling light from two sides of an aperture within a single optical path. I first review the formation of the two images and their characteristics; of special interest are the differences between single-lens and dual-lens stereoscopy. I then consider various practical aspects of using this method to make real-world products.
A micro-retardation array is a plate consisting of two or more optical retardation states micro-patterned within different regions of the plate. An LCD panel with a micro-retardation array can display stereoscopic images, viewed with or without special glasses, by encoding the right-eye and left-eye images in periodic horizontal stripes of different polarization states: for example, the odd rows of stripes are assigned zero retardation and the even rows a half-wavelength retardation, or vice versa. The width of each stripe is of the order of hundreds of microns. This paper describes a fabrication process for micro-retardation arrays that achieves a high contrast ratio and well-defined stripe boundaries using an environmentally friendly ('green') process. The process exploits the fact that the retardation of a polymer film is changed by heating: by accurately controlling the power and spot size of a CO2 laser, the retardation of a polymeric film such as PC or ARTON can be tailored within a localized area without altering the retardation of the untreated areas. In addition, the contrast ratios of the micro-retardation array are measured and analyzed, and the performance of an autostereoscopic display system using the array is described.
We report three experiments that explore the effect of enhanced binocular information on a range of perceptual judgements made under telepresence. Enhanced disparity is potentially useful as it would extend the range over which disparities are detectable, but it is not known whether, or for which tasks, the enhanced information can be used. Subjects positioned a 'mobile' within a scene viewed, via remote cameras, on a monitor. The tasks differed in the minimum geometry required to perform them, and we compared performance under monocular, normal binocular, and enhanced binocular conditions. Enhanced disparity improved performance on a 'nulling' task but had no effect on a distance-matching task or on a shape task. We conclude that enhanced disparity is potentially useful for limited specialist tasks but is unlikely to be useful in general. It remains possible that training could extend its usefulness.
In an intuitive monitoring system, it is important to support smooth switching between a broad observation mode and a detailed observation mode. We propose the 'Scope Cache', a device that eliminates the time lag between camera operation and the displayed images, presenting images without critical delay, much as human eye and head movements do. The method works by caching a wide-range image sent from the remote-controlled movable camera and obtaining a view by rapidly cutting out the required portion of the entire image. Experiments and examination of the system demonstrate the effectiveness of the proposed method. The method, which estimates the camera position by considering the object in the picture from the camera, can be applied to any fixed-position pan-tilt camera.
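A sketch of the cache cut-out follows. It assumes the cached frame spans a known horizontal field of view with a simple linear pixels-per-degree mapping (real optics would need a proper projection model); the point is that a pan/tilt request is answered from the cache immediately while the physical camera catches up.

```python
import numpy as np

def cached_view(cache, pan_deg, tilt_deg, fov_deg, cache_fov_deg):
    """Return the requested viewport from the cached wide-angle frame so
    the operator sees an immediate image while the pan-tilt camera is
    still slewing toward (pan_deg, tilt_deg). Assumes fov_deg is smaller
    than cache_fov_deg and a linear angle-to-pixel mapping."""
    H, W = cache.shape[:2]
    ppd = W / cache_fov_deg                   # pixels per degree
    w = int(fov_deg * ppd)
    h = int(w * H / W)                        # keep the cache's aspect ratio
    cx = W / 2 + pan_deg * ppd                # viewport center in the cache
    cy = H / 2 - tilt_deg * ppd
    x0 = int(np.clip(cx - w / 2, 0, W - w))
    y0 = int(np.clip(cy - h / 2, 0, H - h))
    return cache[y0:y0 + h, x0:x0 + w]
```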
Teleconferencing systems are becoming more popular because of advances in image processing and broadband networks. Nevertheless, communicating with someone at a remote location through a teleconferencing system still presents problems because of the difficulty of establishing and maintaining eye contact, which is essential to natural dialog. The purpose of our study is to make eye contact possible during dialog by using image processing alone, without special devices such as color markers, sensors worn by the users, or IR cameras. The proposed teleconferencing system is composed of a computer, a display attached to the computer, and four cameras. We define a virtual camera as a camera that exists virtually in 3D space; using the proposed method, we can acquire a front view of a person as if taken with this virtual camera. The virtual-camera image is generated by extracting the same feature points among the four face images; feature-point sets are automatically matched across the four images using Epipolar Plane Images (EPIs). Users can establish eye contact through the synthesized front face view and can also obtain various other views, because the 3D points of the object can be extracted from the EPIs. Through these facilities, the proposed system provides users with better communication than previous systems. In this paper, we describe the concept and implementation, and evaluate the system from various perspectives.
Depth and distance judgements were compared under monocular, motion parallax and binocular viewing conditions using a telepresence system. Participants viewed the virtual objects via a modified Wheatstone stereoscope. A camera pair relayed images of real objects to the LCD displays within the stereoscope. The entire viewing apparatus was mounted on a linear stage, thus allowing parallax movement to be driven by lateral head motion of the observer. In the monocular and motion parallax conditions, the same image was presented to both eyes and convergence was set to approximately mid-target distance. In the binocular condition, the cameras and displays were configured to preserve the appropriate convergence and disparity information. The participants' task was to reach and 'grasp' the object seen within the stereoscope. Reach distance and grasp aperture were recorded via a magnetic tracking device. Judgements were most accurate when stereo information was available. Surprisingly, motion parallax information did not seem to improve performance over that observed in the monocular condition.
In this paper we discuss issues involved in creating art and cultural heritage projects in Virtual Reality, with particular reference to one interactive narrative, 'The Thing Growing'. In the first section we briefly discuss the potential of VR as a medium for the production of art and the interpretation of culture. In the second section we describe 'The Thing Growing' project. In the third section we discuss building an interactive narrative in VR using XP, an authoring system we designed to simplify the process of producing projects in VR. In the fourth section we discuss some issues involved in presenting art and cultural heritage projects in VR.
The RAGE system is a collaborative virtual environment being developed at the Naval Research Laboratory specifically for planning, training and situation awareness. It consists of the CAVE-like device known as the GROTTO and a virtual workbench. The system offers several important capabilities for hostage rescue scenarios. First, it avoids the need to create physical mockups of rooms and buildings; instead, computer models are generated. Second, it is possible to explore alternative scenarios and experiment with different tactical operations. Third, the system can be linked with other software modules, such as simulators and analysis tools, and can dynamically change the environment as new intelligence information is received. The system has three potential applications. First, it can be used as a device for training Special Forces. Second, it can be used to study important military operations. Finally, it can be used as a command and control device operating in real time as a rescue operation is carried out. We show some of these capabilities by reproducing one famous special forces hostage rescue operation.
This paper presents three methods for classifying and qualifying virtual and immersive environments. The first is to plot modes of use against environment types. The second is to create a matrix analyzing display type against interaction methodology. The third is to analyze the system as if it created a volume of 3D pixels and determine if the quality of the created pixel volume is appropriate for the given application and use.
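To make the third method concrete, here is a back-of-envelope reading with assumed numbers (not the paper's): treat the display as quantising its working volume at the screen's pixel pitch, then ask whether the resulting voxel size suits the intended application.

```python
# Rough '3D pixel volume' estimate for a hypothetical workbench display.
screen_w_m, screen_h_m = 1.5, 1.2       # projection surface
res_x, res_y = 1280, 1024               # projector resolution
pitch_mm = 1000.0 * screen_w_m / res_x  # ~1.17 mm per pixel (both axes here)

depth_cells = int(1000.0 * screen_w_m / pitch_mm)  # volume one width deep
voxels = res_x * res_y * depth_cells
print(f"{pitch_mm:.2f} mm voxels, ~{voxels / 1e9:.1f}G voxels")
# -> 1.17 mm voxels, ~1.7G voxels: ample for walkthroughs, but marginal
#    for applications that need sub-millimetre detail.
```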
Virtual reality (VR) brings a computer-generated reality to the user, and a virtual environment (VE) is a simulated world that can take users to any point and viewing direction around an object. VR and VEs can be very useful if accurate and precise data are used, allowing users to work with realistic models. Photogrammetry is a technique that can collect and provide accurate and precise data for building a 3D model in a computer. Data can be collected from various sensors and cameras, and methods of data collection vary with the method of image acquisition. VR encompasses real-time graphics, 3D models and display, and it has applications in the entertainment industry, flight simulators and industrial design.
In this paper we describe a new adaptive streaming protocol for mobile augmented reality applications. Mobile augmented reality is typically a multi-user environment in which users interact with each other through shared computer-generated virtual objects. One application area is concurrent design and engineering, where the consistency of shared objects is maintained with a real-time streaming protocol. The protocol adapts the stream of update messages according to the wireless link quality. Adaptation here means adjusting the transmission speed to maximize reliability and to minimize packet delivery latencies and their variation, jitter. Conventional streaming protocols use an end-to-end feedback channel to transmit the network quality parameters used in adaptation. However, this causes unnecessary delays and decreases reliability, because the feedback messages must themselves be transmitted over the wireless medium. We present a new approach in which adaptation is based on link quality parameters stored in the base station. No higher-layer wireless feedback channel is required, which makes the adaptation fast. We justify the feasibility of our protocol by comparing its performance with the most common transport-layer streaming protocols, and show that faster adaptation reduces the number of lost packets and the jitter in the network.
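A hedged sketch of this adaptation rule, with invented names and thresholds: the sender polls the link quality that the base station already maintains and scales its update-message rate locally, avoiding the round trip that an end-to-end feedback channel would cost.

```python
# Rate adaptation driven by base-station link quality (no end-to-end
# wireless feedback); the mapping and constants are illustrative.
import time

MAX_RATE_HZ = 60.0   # best-case update rate for shared virtual objects
MIN_RATE_HZ = 5.0    # floor that keeps the shared scene consistent

def adapted_rate(link_quality: float) -> float:
    """Map a base-station link-quality estimate in [0, 1] to a rate."""
    q = min(max(link_quality, 0.0), 1.0)
    return MIN_RATE_HZ + q * (MAX_RATE_HZ - MIN_RATE_HZ)

def stream_updates(get_link_quality, next_update, send):
    """Send update messages, slowing down as the link degrades."""
    while True:
        rate = adapted_rate(get_link_quality())  # local read, no RTT cost
        send(next_update())
        time.sleep(1.0 / rate)  # fewer packets on poor links -> fewer
                                # losses and less jitter at the receiver
```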
The virtual clay modeling project described here explores the use of virtual environments (VEs) for the simulation of two-handed clay modeling and sculpting tasks. Traditional clay modeling concepts are implemented and enhanced with new digital design tools that leverage virtual reality (VR) and new input-device technology. In particular, the creation of an intuitive and natural work environment for comfortable and unconstrained modeling is emphasized. VR projection devices such as the Immersive WorkBench, shutter glasses, and pinch gloves equipped with six-degrees-of-freedom trackers are used to apply various virtual cutting tools to a volumetric data structure. The use of an octree as the underlying data structure for volume representation and manipulation in immersive environments allows real-time modeling of solids with a suite of geometrically or mathematically defined cutting and modeling tools.
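A toy illustration of octree-based carving in the spirit described above (the data structure and tool interface are our invention, not the authors' code): cells fully inside a cutting tool are emptied, boundary cells are subdivided down to a fixed depth, and a tool is simply a point-membership predicate.

```python
# Octree carving sketch; point sampling stands in for a proper
# cell/tool intersection test.
class OctreeCell:
    """Cubic cell: solid material, empty space, or eight children."""

    def __init__(self, center, half, depth=0, max_depth=5):
        self.center, self.half = center, half
        self.depth, self.max_depth = depth, max_depth
        self.solid = True        # leaves start as solid clay
        self.children = None

    def _samples(self):
        cx, cy, cz = self.center
        h = self.half
        corners = [(cx + sx * h, cy + sy * h, cz + sz * h)
                   for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
        return corners + [self.center]

    def carve(self, inside_tool):
        """Remove material wherever inside_tool(point) is True."""
        if self.children is None and not self.solid:
            return                                # already empty
        hits = [inside_tool(p) for p in self._samples()]
        if all(hits):                             # cell swallowed by tool
            self.solid, self.children = False, None
        elif any(hits):                           # boundary cell: refine
            if self.depth >= self.max_depth:
                return                            # leaf resolution reached
            if self.children is None:
                h = self.half / 2.0
                cx, cy, cz = self.center
                self.children = [
                    OctreeCell((cx + sx * h, cy + sy * h, cz + sz * h), h,
                               self.depth + 1, self.max_depth)
                    for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
            for child in self.children:
                child.carve(inside_tool)

# Example: carve a spherical tool of radius 0.3 centred at the origin.
root = OctreeCell((0.0, 0.0, 0.0), 1.0)
root.carve(lambda p: p[0]**2 + p[1]**2 + p[2]**2 <= 0.3**2)
```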
This paper discusses the software engineering of a class library for supporting haptic rendering of interaction constraints within a hand-immersive virtual environment. The design of interaction and navigation paradigms is a significant issue in the usability of virtual environments. The careful application of constraints in the interaction can help the user focus on their specific task. Interaction constraints can be usefully implemented using a haptic, or force-feedback, device. Haptic programming is difficult, so we are designing and implementing a class library to provide reusable components for programming haptic constraints. The library extends the Magma multi-sensory scenegraph API, providing a constrained proxy to serve as a new interaction point for the application, and an abstract constraint definition that can be realized by a variety of constraint types. The paper illustrates the constraint definition by describing a number of geometric constraints, and also describes techniques for combining and modifying constraints to create new ones. These techniques are used to construct constraints tailored to specific application requirements. The haptic constraints library is still a work in progress, and we have identified a number of areas where improvements can be made. One of the major challenges is to provide software components that can be reused to support a broad selection of different approaches to programming interaction in a haptically enabled virtual environment.
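As a sketch of what such a constraint abstraction can look like (the classes below are illustrative stand-ins, not the Magma API): each constraint projects the device position onto its permitted set, and the 1 kHz haptic loop renders a spring force pulling the device toward the constrained proxy.

```python
# Abstract constraint + two geometric realizations + proxy-based force.
from abc import ABC, abstractmethod

import numpy as np

class Constraint(ABC):
    @abstractmethod
    def project(self, p: np.ndarray) -> np.ndarray:
        """Closest point to p that satisfies the constraint."""

class PlaneConstraint(Constraint):
    def __init__(self, point, normal):
        self.point = np.asarray(point, float)
        self.normal = np.asarray(normal, float)
        self.normal /= np.linalg.norm(self.normal)

    def project(self, p):
        return p - np.dot(p - self.point, self.normal) * self.normal

class LineConstraint(Constraint):
    def __init__(self, point, direction):
        self.point = np.asarray(point, float)
        self.dir = np.asarray(direction, float)
        self.dir /= np.linalg.norm(self.dir)

    def project(self, p):
        return self.point + np.dot(p - self.point, self.dir) * self.dir

def constraint_force(device_pos, constraint, stiffness=800.0):
    """Spring force pulling the device toward the constrained proxy."""
    device_pos = np.asarray(device_pos, float)
    proxy = constraint.project(device_pos)
    return stiffness * (proxy - device_pos)

# Constraints can be combined in the spirit of the paper, e.g. by
# chaining projections: onto a plane, then onto a line in that plane.
```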
This paper describes the efforts being carried out at the NRL towards VR Scientific Visualization. We are exploring scientific visualization in an immersive virtual environment: the NRL's CAVE-like device known as the GROTTO. Our main effort has been towards the development of software that eases the transition between desktop visualization and VR visualization. It has been our intention to develop visualization tools that can be applied in a wide range of scientific areas without spending excessive time in software development. The advantages of such software are clear. The scientists do not have to be expert programmers, nor need they make a large investment of time to visualize scientific information in a Virtual Environment. As a result of this effort, we were able to port a considerable number of applications to the GROTTO in a short period of time. These projects cover a wide range of scientific areas and include chemistry, fluid dynamics, space physics and materials sciences. We describe the major technical hurdles we have addressed for interactive visualization of real data sets for real users. Finally we comment on the advantages that immersive systems like the GROTTO offer to the scientific community.
This paper looks at the process of establishing appropriate industrial scenarios for collocated hapto-visual virtual environment systems, and at the visualization and interaction systems and display metaphors needed to support those applications. Collocated hapto-visual virtual environment systems combine quality 3D graphical environments with the interactive realism of haptic devices, providing an excellent platform for implementing such scenarios. The 3D graphics give a mechanism for accurately presenting spatial and attribute information. The haptic device provides a realistic real-time interaction tool that enables the operator to do ongoing useful work. Case studies from the mining and petroleum industries are used as illustrations.
Collocated hapto-visual environments are becoming a commercial reality in desktop-scale visual environments, where the haptic and graphic work volumes approximate that of a conventional desktop monitor. Meanwhile, state-of-the-art graphical virtual environments focus on medium-scale workbenches and large-scale 'walls'. These environments pose challenges for the collocation of haptics and graphics because of their size and orientation.
This paper introduces the Designers Workbench, a semi-immersive virtual environment for two-handed modeling, sculpting and analysis tasks. The paper outlines the fundamental tools, design metaphors and hardware components required for an intuitive real-time modeling system. As companies focus on streamlining productivity to cope with global competition, the migration to computer-aided design (CAD), computer-aided manufacturing and computer-aided engineering systems has established a new backbone of modern industrial product development. Traditionally, however, a product design frequently originates from a clay model that, after digitization, forms the basis for the numerical description of CAD primitives. The Designers Workbench aims at closing this technology gap, or 'digital gap', experienced by design and CAD engineers, by transforming the classical design paradigm into its fully integrated digital and virtual analog, allowing collaborative development in a semi-immersive virtual environment. The project emphasizes two key components from the classical product design cycle: freeform modeling and analysis. In the freeform modeling stage, the emphasis is on content creation in the form of two-handed sculpting of arbitrary objects using polygonal, volumetric or mathematically defined primitives, whereas the analysis component provides the tools required for the pre- and post-processing steps of finite element analysis tasks applied to the created models.
The paper gives an overview of a wireless input system and the corresponding interaction techniques. We investigate the use of wireless input systems for immersive interaction in a large-scale virtual environment displayed on a 6.4 m × 2 m stereoscopic projection system. The system is used to present a complete car body at full scale, allowing users to walk up to 6 m in front of the virtual object. The working volume needed for immersive interaction in this scenario is much larger than that typically realized by an HMD or CAVE, making the use of cable-bound devices problematic. Interactions realized in this environment include: accurate head tracking, allowing high-quality undistorted stereoscopic rendering at natural scale (see the sketch below); head tracking for navigation, providing intuitive interaction by walking around; positioning the car on the ground; menu selection among different preselected models, assemblies or environments; and control of the virtual lighting situation. For head tracking, a cluster of commercially available optical trackers with a single passive reflective target is used. For manual interaction, wireless button devices with inertial sensors for acceleration and spin have been built. The application of these devices and the combination of input channels in different interactions is presented.
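Why accurate head tracking matters for undistorted stereo on a fixed wall comes down to the projection mathematics: the view frustum must be recomputed every frame as an off-axis projection through the physical screen rectangle. A minimal sketch follows (coordinate conventions and names are ours, not the authors'); the screen is a rectangle centred on the origin in the z = 0 plane, with the tracked eye at z > 0.

```python
# Asymmetric (off-axis) frustum bounds from a tracked eye position.
def off_axis_frustum(eye, screen_w, screen_h, near, far):
    """Return (l, r, b, t, n, f) for an eye at (x, y, z), z > 0."""
    ex, ey, ez = eye
    s = near / ez                    # scale screen edges to the near plane
    left = (-screen_w / 2.0 - ex) * s
    right = (screen_w / 2.0 - ex) * s
    bottom = (-screen_h / 2.0 - ey) * s
    top = (screen_h / 2.0 - ey) * s
    return left, right, bottom, top, near, far

# Per frame and per eye: offset the tracked head position by half the
# interocular distance, pass the bounds to glFrustum (or equivalent),
# and translate the view by -eye so the frustum passes through the
# physical screen rectangle.
print(off_axis_frustum((0.3, -0.1, 2.0), 6.4, 2.0, 0.1, 100.0))
```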
Projection-based immersive displays are rapidly becoming the visualization system of choice for applications requiring the comprehension of complex datasets and the collaborative sharing of insights. The wide variety of display configurations can be grouped into five categories: benches, flat-screen walls, curved-screen theaters, concave-screen domes and spatially-immersive rooms. Each has its strengths and weaknesses, and the appropriateness of each depends on one's application and budget. The paper outlines the components common to all projection-based displays and describes the characteristics of each category. Key image metrics, implementation considerations and immersive display trends are also considered.