The Neptec Design Group has developed a Laser Camera System (LCS) that can operate as a 3D imaging scanner. The LCS uses an auto-synchronized triangulation scheme to measure range information while two orthogonal scanning mirrors sweep through the field of view. The LCS simultaneously records the intensity of the reflected laser beam and range information. The intensity data can be used to produce 2D grayscale images as well as to map the intensities onto 3D surface models. The nature of triangulation geometry dictates that such measurements are best for close objects, with range error increasing with the square of object range. The LCS was flown in the payload bay of the shuttle Discovery during mission STS-105. Four scans were taken of the same scene while the shuttle was docked to the International Space Station (ISS). Partially visible ISS elements included the SSRMS (Canadarm2), Multi-Purpose Logistics Module (MPLM), Destiny Lab Module, Node 1 (Unity), Joint Airlock Module (Quest), and several solar arrays.
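The quadratic growth of triangulation range error mentioned above can be sketched with the standard approximation σ_z ≈ z²·σ_p/(f·b), where f is the effective focal length, b the baseline, and σ_p the spot-position measurement error. The parameter values below are purely illustrative, not LCS specifications:

```python
def range_error(z, focal=0.05, baseline=0.2, sigma_p=1e-6):
    """Approximate triangulation range error (m) at range z (m).
    Error grows with the square of range; focal, baseline, and
    sigma_p are illustrative values, not LCS specifications."""
    return z**2 * sigma_p / (focal * baseline)
```

Under this model, doubling the range quadruples the range error, which is why triangulation favors close objects.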
Existing phase-shifting measurement methods involve processing of three acquired images or computation of functions that require more complex processing than linear functions. This paper presents a novel two-step triangular-pattern phase-shifting method of 3-D object-shape measurement that combines advantages of earlier techniques. The method requires only two image-acquisition steps, and involves projecting linear grayscale-intensity triangular patterns that require simpler computation of the intensity ratio than methods that use sinusoidal patterns. A triangular intensity-ratio distribution is computed from two captured phase-shifted triangular-pattern images. An intensity ratio-to-height conversion algorithm, based on traditional phase-to-height conversion in the sinusoidal-pattern phase-shifting method, is used to reconstruct the 3-D surface geometry of the object. A smaller pitch of the triangular pattern resulted in higher measurement accuracy; however, an optimal pitch was found, below which intensity-ratio unwrapping failure may occur. Measurement error varied cyclically with depth and may be partly due to projector gamma nonlinearity and image defocus. The use of only two linear triangular patterns gives the proposed method the advantage of less processing than current methods that process three images, or methods that process functions more complex than the intensity ratio. This would be useful for high-speed or real-time 3-D object-shape measurement.
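The linearity advantage of the triangular patterns can be illustrated with a small sketch: two ideal triangular fringes shifted by half the pitch, combined per pixel by simple subtraction rather than an arctangent. The pattern function and pitch value are illustrative, not the paper's exact formulation:

```python
import numpy as np

def tri(x, pitch):
    """Ideal triangular fringe, intensity in [0, 1], period = pitch."""
    u = (x / pitch) % 1.0
    return np.where(u < 0.5, 2 * u, 2 - 2 * u)

pitch = 64.0
x = np.arange(256)
I1 = tri(x, pitch)              # first projected pattern
I2 = tri(x + pitch / 2, pitch)  # second pattern, shifted by half the pitch

# The intensity ratio is a linear combination of the two images --
# no arctangent is needed, unlike sinusoidal phase shifting.
r = I1 - I2                     # triangular distribution in [-1, 1]
```

Per pixel, the ratio costs one subtraction, which is the source of the processing advantage over three-image or arctangent-based methods.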
KEYWORDS: Target recognition, Automatic target recognition, 3D acquisition, Sensors, Databases, Detection and tracking algorithms, 3D modeling, Atomic force microscopy, 3D image processing, LIDAR
Automatic Target Recognition (ATR) using three-dimensional (3D) sensor data has proven very successful on experimental platforms. One factor limiting the implementation of these approaches is the lag in operational hardware capable of providing the type of data required. Neptec has addressed this sensor concern in its 3D ATR software: the need for specific operational 3D sensing hardware is avoided by using a generic range-image format, and a shape-from-motion (SfM) method enables the generation of 3D data using widely available 2D sensors.
The previously reported ATR software has been expanded from a ground-to-ground proof of concept to include air-to-ground capabilities. The system uses a generic 3D model of the target, obtained from CAD or by scanning a scale or full-sized model; the model does not need to be perfect. The rapid recognition approach simultaneously provides target pose estimation. This capability has been demonstrated using ground-based imaging LiDAR, airborne LiDAR, scannerless AMCW LiDAR, and shape-from-motion using a 2D camera. Multiple data sets can be fused to increase confidence in the recognition and to provide measures of similarity between the data set and different targets.
This paper presents an overview of the 3D ATR approach and updates performance characteristics from a variety of
tests that include synthetic data, lab tests, and field tests. It is shown that the approach is fast, highly robust, and flexible, and is primarily limited by the quality of sensor data. Particular emphasis is placed on the shape-from-motion application
since this capability can make use of widely used operational 2D imaging sensor packages.
In phase-shifting-based fringe-projection surface-geometry measurement, phase unwrapping techniques produce a continuous phase distribution that contains the height information of the 3-D object surface. Mapping of the phase distribution to the height of the object has often involved complex derivations of the nonlinear relationship. In this paper, the phase-to-height mapping is formulated using both linear and nonlinear equations, the latter through a simple geometrical derivation. Furthermore, the measurement accuracies of the linear and nonlinear calibrations are compared using measurement simulations where noise is included at the calibration stage only, and where noise is introduced at both the calibration and measurement stages. Measurement accuracies for the linear and nonlinear calibration methods are also compared based on real-system measurements. From the real-system measurements, the accuracy of the linear calibration was similar to that of the nonlinear calibration method at the lower range of depth. At the higher range of depth, however, the nonlinear calibration method had considerably higher accuracy. As the object approaches the projector and camera at the higher range of depth, the assumption of linearity, based on small divergence of light from the projector, appears to become less valid.
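The contrast between linear and nonlinear calibration can be sketched with synthetic data. A commonly used nonlinear phase-to-height form is h = p1·φ/(1 + p2·φ), which becomes linear in its coefficients after rearranging to φ/h = 1/p1 + (p2/p1)·φ, so both calibrations reduce to least squares. All parameter values here are illustrative, not taken from the paper:

```python
import numpy as np

# Hypothetical ground-truth mapping (illustrative coefficients).
p1, p2 = 0.8, 0.05
phi = np.linspace(0.1, 10, 50)
h_true = p1 * phi / (1 + p2 * phi)

# Linear calibration: h ~ a*phi + b, by least squares.
A = np.vstack([phi, np.ones_like(phi)]).T
a, b = np.linalg.lstsq(A, h_true, rcond=None)[0]

# Nonlinear calibration: phi/h = 1/p1 + (p2/p1)*phi is linear in the
# unknowns, so it can also be solved by least squares.
B = np.vstack([np.ones_like(phi), phi]).T
c0, c1 = np.linalg.lstsq(B, phi / h_true, rcond=None)[0]
p1_est, p2_est = 1 / c0, c1 / c0
```

On noiseless data the nonlinear calibration recovers the mapping exactly, while the linear fit leaves a residual that grows where the mapping curves, consistent with the depth-dependent accuracy differences reported above.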
Two-step triangular phase-shifting has recently been developed for 3-D surface-shape measurement. Compared with
previous phase-shifting methods, the method involves less processing and fewer images to reconstruct the 3-D object.
This paper presents novel extensions of the two-step triangular phase-shifting method to multiple-step algorithms to
increase measurement accuracy. The phase-shifting algorithms used to generate the intensity ratio, which is essential for
determination of the 3-D coordinates of the measured object, are developed for different multiple-step approaches. The
measurement accuracy is determined for different numbers of additional steps and values of pitch. Compared with the traditional sinusoidal phase-shifting-based method with the same number of phase-shifting steps, the processing is expected to be reduced, with similar resolution. More phase steps generate higher accuracy in the 3-D shape reconstruction; however, digital fringe projection generates phase-shifting error if the pitch of the pattern cannot be evenly divided by the number of phase steps. The pitch of the projected pattern must therefore be selected appropriately according to the number of phase-shifting steps used.
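The divisibility constraint above follows from the projector being digital: each phase shift must land on a whole number of projector pixels. A minimal sketch of the check (function and parameter names are illustrative):

```python
def shift_per_step(pitch_px, n_steps):
    """Per-step phase shift in projector pixels for an n_steps-step
    phase-shifting pattern. With a digital projector the shift must
    be an integer number of pixels, so the pitch must be evenly
    divisible by the number of steps."""
    if pitch_px % n_steps != 0:
        raise ValueError(
            f"pitch {pitch_px} px not divisible by {n_steps} steps")
    return pitch_px // n_steps
```

For example, a 60-pixel pitch supports a 4-step algorithm (15-pixel shifts), but a 64-pixel pitch cannot be shifted evenly in a 3-step algorithm.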
Two-step triangular phase-shifting is a recently developed method for 3-D shape measurement. In this method, two
triangular gray-level-coded patterns, which are phase-shifted by half of the pitch, are needed to reconstruct the 3-D
object. The measurement accuracy is limited by gamma non-linearity and defocus of the projector and camera. This
paper presents a repeated phase-offset two-step triangular-pattern phase-shifting method used to decrease the
measurement error caused by the gamma non-linearity and defocus in the previously developed two-step triangular-pattern phase-shifting 3-D object measurement method. Experimental analysis indicated that a sensitivity threshold
based on the gamma non-linearity curve should be used as the minimum intensity of the computer-generated pattern
input to the projector to reduce measurement error. In the repeated phase-offset method, two-step triangular phase-shifting is repeated with an initial phase offset of one-eighth of the pitch, and the two obtained 3-D object-height
distributions are averaged to generate the final 3-D object-height distribution. Experimental results demonstrated that the
repeated phase-offset measurement method substantially decreased measurement error compared to the two-step
triangular phase-shifting method.
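The error cancellation behind the repeated phase-offset method can be sketched as follows. Assuming the dominant systematic error is periodic in the fringe coordinate with period pitch/4 (an illustrative assumption; the abstract attributes the cyclic error to gamma nonlinearity and defocus), repeating the measurement with a pitch/8 offset shifts that error by half its period, so averaging cancels it:

```python
import numpy as np

pitch = 64.0
x = np.linspace(0, 256, 1024)
h_true = 0.002 * x   # arbitrary smooth surface (illustrative)

def cyclic_error(offset):
    # Assumed systematic error, periodic with period pitch/4.
    return 0.05 * np.sin(2 * np.pi * (x - offset) / (pitch / 4))

h1 = h_true + cyclic_error(0.0)          # first two-step measurement
h2 = h_true + cyclic_error(pitch / 8)    # repeat with pitch/8 offset
h_avg = 0.5 * (h1 + h2)                  # averaged height distribution
```

A pitch/8 offset is half the assumed error period, so the two error terms are in antiphase and the averaged height map is error-free in this idealized model; in practice the cancellation is partial, matching the reported reduction rather than elimination of error.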
KEYWORDS: Sensors, 3D acquisition, Detection and tracking algorithms, 3D modeling, LIDAR, Computer simulations, Imaging systems, Data modeling, Space operations, Liquid crystals
Neptec has developed a vision system for the capture of non-cooperative objects on orbit. This system uses an active TriDAR sensor and a model-based tracking algorithm to provide 6-degree-of-freedom pose information in real time from mid-range to docking. This system was selected for the Hubble Robotic Vehicle De-orbit Module (HRVDM) mission and for a Detailed Test Objective (DTO) mission to fly on the Space Shuttle.
TriDAR (triangulation + LIDAR) technology makes use of a novel approach to 3D sensing by combining triangulation and Time-of-Flight (ToF) active ranging techniques in the same optical path. This approach exploits the complementary nature of these sensing technologies. Real-time tracking of target objects is accomplished using 3D model based tracking algorithms developed at Neptec in partnership with the Canadian Space Agency (CSA). The system provides 6 degrees of freedom pose estimation and incorporates search capabilities to initiate and recover tracking. Pose estimation is performed using an innovative approach that is faster than traditional techniques. This performance allows the algorithms to operate in real-time on the TriDAR's flight certified embedded processor.
This paper presents results from simulation and lab testing demonstrating that the system's performance meets the requirements of a complete tracking system for on-orbit autonomous rendezvous and docking.
NASA contracted Neptec to provide the Laser Camera System (LCS), a 3D scanning laser sensor, for the on-orbit inspection of the Space Shuttle's Thermal Protection System (TPS) on the return-to-flight mission STS-114. The scanner was mounted on the boom extension to the Shuttle Remote Manipulator System (SRMS). Neptec's LCS was selected due to its close-range accuracy, large scanning volume and immunity to the harsh ambient lighting of space.
The crew of STS-114 successfully used the LCS to inspect and measure damage to the shuttle Discovery's TPS in July 2005. The crew also inspected the external-tank (ET) doors to ensure that they were fully closed. Neptec staff performed operational support and real-time detailed analysis of the scanned features using analysis workstations at Mission Control Center (MCC) in Houston. This paper provides a summary of the on-orbit scanning activities and a description of the detailed analysis results.
In fringe-projection surface-geometry measurement, phase unwrapping techniques produce a continuous phase distribution that contains the height information of the 3-D object surface. To convert the phase distribution to the height of the 3-D object surface, a phase-height conversion algorithm is needed, essentially determined in the system calibration which depends on the system geometry. Both linear and non-linear approaches have been used to determine the mapping relationship between the phase distribution and the height of the object; however, often the latter has involved complex derivations. In this paper, the mapping relationship between the phase and the height of the object surface is formulated using linear mapping, and using non-linear equations developed through simplified geometrical derivation. A comparison is made between the two approaches. For both methods the system calibration is carried out using a least-squares approach and the accuracy of the calibration is determined both by simulation and experiment. The accuracy of measurement using linear calibration data was generally higher than using non-linear calibration data in most of the range of measurement depth.
Traditional sinusoidal phase-shifting algorithms involve the calculation of an arctangent function to obtain the phase, which results in slow measurement speed. This paper presents a novel high-speed two-step triangular phase-shifting approach for 3-D object measurement. In the proposed method, a triangular gray-level-coded pattern is used for the projection. Only two triangular patterns, which are phase-shifted by 180 degrees, or half of the pitch, are needed to reconstruct the 3-D object. A triangular-shape intensity-ratio distribution is computed from the two captured triangular-pattern images. Removing the triangular shape of the intensity ratio over each pattern pitch generates a wrapped intensity-ratio distribution. The unwrapped intensity-ratio distribution is obtained by removing the discontinuity of the wrapped image with a modified version of the unwrapping method commonly used in sinusoidal phase shifting. An intensity ratio-to-height conversion algorithm, based on the traditional phase-to-height conversion algorithm in the sinusoidal phase-shifting method, is used to reconstruct the 3-D surface coordinates of the object. Compared with the sinusoidal and trapezoidal phase-shifting methods, the processing speed is faster, with similar resolution. The method therefore has potential for real-time 3-D object measurement, with applications in inspection tasks, mobile-robot navigation, and 3-D surface modeling.
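The wrap-and-unwrap steps described above can be sketched end to end on a 1-D synthetic example. The pattern function, the use of the local gradient sign to remove the triangular shape, and the pitch value are illustrative choices, not the paper's exact implementation:

```python
import numpy as np

def tri(x, pitch):
    """Ideal triangular fringe, intensity in [0, 1], period = pitch."""
    u = (x / pitch) % 1.0
    return np.where(u < 0.5, 2 * u, 2 - 2 * u)

pitch = 64.0
x = np.arange(256)
I1 = tri(x, pitch)                 # first captured pattern
I2 = tri(x + pitch / 2, pitch)     # second pattern, half-pitch shift

# Triangular intensity-ratio distribution in [-1, 1] (simple subtraction,
# no arctangent).
d = I1 - I2

# Remove the triangular shape: on rising segments map d -> (d+1)/2, on
# falling segments map d -> 2-(d+1)/2, giving a wrapped sawtooth in [0, 2].
rising = np.gradient(d) > 0
r = np.where(rising, (d + 1) / 2, 2 - (d + 1) / 2)

# Unwrap: add 2 at each sawtooth discontinuity, as in sinusoidal
# phase unwrapping.
wraps = np.cumsum(np.diff(r, prepend=r[0]) < -1)
r_unwrapped = r + 2 * wraps
```

On this noiseless example the unwrapped intensity ratio is a linear ramp in x, which the ratio-to-height conversion would then map to surface height.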
3D ranging and imaging technology is generally divided into time-based (ladar) and position-based (triangulation) approaches. Traditionally ladar has been applied to long range, low precision applications and triangulation has been used for short range, high precision applications. Measurement speed and precision of both technologies have improved such that ladars are viable at shorter ranges and triangulation is viable at longer ranges. These improvements have produced an overlap of technologies for short to mid-range applications. This paper investigates the two sets of technologies to demonstrate their complementary nature particularly with respect to space and terrestrial applications such as vehicle inspection, navigation, collision avoidance, and rendezvous & docking.
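The short-to-mid-range overlap can be illustrated by comparing the two error models: triangulation error grows with the square of range, while time-of-flight precision is roughly range-independent, so there is a crossover range beyond which ToF wins. All parameter values below are assumptions for illustration, not specifications of any sensor:

```python
# Illustrative parameters (assumed, not measured): focal length (m),
# baseline (m), spot-position error (m), and ToF range precision (m).
FOCAL, BASELINE, SIGMA_P = 0.05, 0.2, 1e-6
SIGMA_TOF = 0.005

def sigma_tri(z):
    """Triangulation range error: grows with the square of range z."""
    return z**2 * SIGMA_P / (FOCAL * BASELINE)

# Range at which the two error models are equal; triangulation is more
# precise below this range, ToF above it.
crossover = (SIGMA_TOF * FOCAL * BASELINE / SIGMA_P) ** 0.5
```

With these assumed values the crossover falls at roughly 7 m, which is the kind of short-to-mid-range regime where the paper argues the two technologies are complementary.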
Neptec Design Group has developed a 3D automatic target recognition and pose estimation algorithm technology demonstrator in partnership with Canadian DND. This paper discusses the development of the algorithm to work with real sensor data. The recognition approach uses a combination of two algorithms in a multi-step process. The two algorithms provide uncorrelated metrics and are therefore using different characteristics of the target. This allows the potential target dataset to be reduced before the final selection is made. In a pre-processing phase, the object data is segmented from the surroundings and is re-projected onto an orthogonal grid to make the object shape independent of range. In the second step, a fast recognition algorithm is used to reduce the list of potential targets by removing unlikely cases. Then a more accurate, but slower and more sensitive, algorithm is applied to the remaining cases to provide another recognition metric while simultaneously computing a pose estimation. After passing some self-consistency checks, the metrics from both algorithms are then combined to provide relative probabilities for each database object and a pose estimate. Development of the recognition and pose algorithm relied on processing of real 3D data from civilian and military vehicles. The algorithm evolved to be robust to occlusions and characteristics of real 3D data, including the use of different 3D sensors for generating database and test objects. Robustness also comes from the self-validating abilities and simultaneous pose estimation and recognition, along with the potential for computing error bounds on pose. Performance results are shown for pseudo-synthetic data and preliminary tests with a commercial imaging LIDAR.
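The multi-step cascade described above (fast pruning metric, then a slower and more accurate metric, then combination into relative probabilities) can be sketched as a hypothetical skeleton. All names and the normalization scheme are illustrative; the actual algorithms and metric combination are Neptec's and are not specified in the abstract:

```python
def recognize(scan, database, fast_metric, slow_metric, keep=5):
    """Hypothetical skeleton of the two-stage recognition cascade:
    a fast metric prunes the candidate list, a slower metric scores
    the survivors, and the two scores are combined into relative
    probabilities over the remaining database objects."""
    # Stage 1: score every database object with the fast metric.
    fast = {name: fast_metric(scan, model) for name, model in database.items()}
    # Prune unlikely candidates, keeping only the best `keep` objects.
    shortlist = sorted(fast, key=fast.get, reverse=True)[:keep]
    # Stage 2: apply the slower metric and combine the two scores.
    combined = {n: fast[n] * slow_metric(scan, database[n]) for n in shortlist}
    # Normalize to relative probabilities over the shortlist.
    total = sum(combined.values())
    return {n: s / total for n, s in combined.items()}
```

In the real system the slow stage also computes a pose estimate and the metrics are checked for self-consistency before combination; this sketch shows only the pruning-and-combination control flow.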
Neptec Design Group Ltd. has developed a 3D Automatic Target Recognition (ATR) and pose estimation technology demonstrator in partnership with the Canadian DND. The system prototype was deployed for field testing at Defence Research and Development Canada (DRDC)-Valcartier. This paper discusses the performance of the developed algorithm using 3D scans acquired with an imaging LIDAR. 3D models of civilian and military vehicles were built using scans acquired with a triangulation laser scanner. The models were then used to generate a knowledge base for the recognition algorithm. A commercial imaging LIDAR was used to acquire test scans of the target vehicles with varying range, pose and degree of occlusion. Recognition and pose estimation results are presented for at least 4 different poses of each vehicle at each test range. Results obtained with targets partially occluded by an artificial plane, vegetation and military camouflage netting are also presented. Finally, future operational considerations are discussed.
KEYWORDS: Liquid crystals, 3D image processing, 3D modeling, 3D scanning, Light sources and illumination, Laser scanners, 3D acquisition, Data modeling, Sensors, Target acquisition
The Neptec Design Group has developed a new 3D auto-synchronized laser scanner for space applications, based on a principle from the National Research Council of Canada. In imaging mode, the Laser Camera System (LCS) raster scans objects and computes high-resolution 3D maps of their surface features. In centroid acquisition mode, the LCS determines the position of discrete target points on an object. The LCS was flight-tested on board the space shuttle Discovery during mission STS-105 in August 2001. While the shuttle was docked to the International Space Station (ISS), the LCS was used to obtain four high-resolution 3D images of several station elements at ranges from 5 m to 40 m. A comparison of images taken during orbital day and night shows that the LCS is immune to the dynamic lighting conditions encountered on orbit. During the mission, the LCS also tracked a series of retro-reflective and Inconel targets affixed to the Multi-Purpose Lab Module (MPLM), both while the module was stationary and while it was moving. Analysis shows that the accuracy of the photosolutions derived from LCS centroid data is comparable to that of the Space Vision System (SVS), Neptec's product presently used by NASA for ISS assembly tasks.