Flip chip interconnection technology offers a number of potential advantages for semiconductor packages: reduced signal inductance, reduced power/ground inductance, higher signal density, die shrink, and a reduced package footprint. However, manufacturing processes for flip-chip-integrated packages require high-precision alignment between the flip chip and its matched substrate. In contrast to conventional visual alignment based on 2D image information, an advanced die placement inspection system for reliable flip chip interconnections was first proposed by the authors [2]. In this paper, the proposed system is reviewed briefly, and the system calibration and information processing algorithms are described in detail. To verify the system performance, a series of experiments was performed on flip chip packages for high-performance computing, and the results are discussed in detail.
A number of 3D measurement methods have been developed, such as stereo vision, laser structured light, and PMP (Phase Measuring Profilometry). However, each has its own limitations: 2π ambiguity, the correspondence problem, or long computation time. To address these problems, our previous research [9,13] introduced a novel sensing method combining stereo vision with the PMP technique (the stereo PMP algorithm). Another difficult problem the stereo PMP algorithm must tackle is occlusion, since it relies on the principle of stereo vision with two cameras. The occlusion problem cannot be solved by typical stereo vision alone, because no correspondence point exists in an occluded area. In the stereo PMP algorithm, however, phase information related to the projector's position provides additional information, which allows the occlusion problem to be handled effectively. To detect occluded areas, we adopt the principle of dynamic programming; to measure depth in those areas, we use the principle of the conventional PMP algorithm together with the geometric relationship of the detected region. To verify the efficiency of the proposed method, a series of experimental tests was performed.
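The abstract does not spell out the phase computation. As a minimal sketch of the ingredient PMP relies on, the standard four-step phase-shifting formula φ = atan2(I4 − I2, I1 − I3) recovers a wrapped phase that exhibits exactly the 2π ambiguity mentioned above; the synthetic data and function name below are our own illustration, not the paper's code:

```python
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    """Standard 4-step phase-shifting PMP: fringe images captured with
    phase shifts of 0, pi/2, pi, 3*pi/2 give the wrapped phase via
    phi = atan2(I4 - I2, I1 - I3), restricted to (-pi, pi]."""
    return np.arctan2(I4 - I2, I1 - I3)

# Synthetic example: a smooth "true" phase that exceeds 2*pi, and the
# four phase-shifted fringe intensity profiles it would produce.
x = np.linspace(0, 4 * np.pi, 512)
shifts = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
I1, I2, I3, I4 = (0.5 + 0.5 * np.cos(x + s) for s in shifts)

phi = wrapped_phase(I1, I2, I3, I4)   # wrapped: shows 2*pi jumps
phi_unwrapped = np.unwrap(phi)        # 1-D unwrapping removes the ambiguity
```

In the stereo PMP setting, the wrapped phase from each camera serves as the extra per-pixel signature that makes correspondence (and occlusion reasoning) tractable.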
Many parts of the shipbuilding process are now automated, but the painting process is not, owing to the difficulty of automated on-line paint quality measurement, the harsh painting environment, and the difficulty of robot navigation. Painting automation is nevertheless necessary, because it can provide consistent paint film thickness, and autonomous mobile robots are strongly desired for flexible painting work. The main problem in autonomous mobile robot navigation, however, is that many obstacles are not represented in the CAD data. To overcome this, obstacle detection and recognition are necessary so that the robot can avoid obstacles and carry out the painting work effectively. Many object recognition algorithms have been studied to date, particularly 2D recognition methods based on intensity images. In our case, however, there is no ambient illumination, so these methods cannot be used. 3D range data must be used instead, but its drawbacks are high computational cost and long recognition time due to the large database. In this paper, we propose a 3D object recognition algorithm based on PCA (Principal Component Analysis) and an NN (Neural Network). The novelty of the algorithm is that the measured 3D range data are transformed into intensity information, to which PCA and the NN are then applied; this reduces the processing time and makes the data easier to handle, addressing the disadvantages of previous 3D object recognition research. A set of experimental results is presented to verify the effectiveness of the proposed algorithm.
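The abstract describes the pipeline (range data → intensity information → PCA → NN) without implementation details. Below is a minimal, hypothetical sketch of such a pipeline, using an SVD-based PCA and scikit-learn's MLPClassifier as the neural network; the normalization scheme, network size, and all data are our assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def range_to_intensity(range_img):
    """Hypothetical depth-to-intensity transform: linearly rescale range
    values into [0, 1] so the scan can be treated like a gray image."""
    r = range_img.astype(float)
    return (r - r.min()) / (r.max() - r.min() + 1e-12)

def fit_pca(X, n_components):
    """PCA of flattened images via SVD of the centered data matrix.
    X: (n_samples, n_pixels). Returns the mean and a projection basis."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

# Stand-in training data: random "range scans" and three object classes.
rng = np.random.default_rng(0)
scans = rng.random((60, 32 * 32)) * 2.0
y_train = rng.integers(0, 3, size=60)

# Transform to intensity, project onto principal components, train the NN.
X_train = np.array([range_to_intensity(s) for s in scans])
mean, basis = fit_pca(X_train, n_components=10)
feats = (X_train - mean) @ basis.T
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000).fit(feats, y_train)

# Recognition of a new scan follows the same transform-project-classify path.
x_new = range_to_intensity(rng.random(32 * 32) * 2.0)
label = clf.predict(((x_new - mean) @ basis.T).reshape(1, -1))
```

The point of the projection step is the one the abstract makes: classification happens in a 10-dimensional feature space rather than on the raw 1024-pixel scan, which is what cuts the processing time.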
Many kinds of fringe projection methods for 3D depth measurement have been studied, such as the moire method and the optical triangulation method. Generally, these methods use a regular vertical fringe pattern. With a regular vertical pattern, however, shape measurement results, especially for symmetric objects, are not accurate, because such a pattern cannot represent the shapes of diverse objects well. To solve this problem, this paper introduces a new sensing methodology based on an object-adapted fringe projection method. To create a flexible, object-adapted fringe pattern, we use a projector with a spatial light modulator (SLM). Our algorithm consists of three main parts. The first generates an object-adapted fringe pattern by applying the moire technique. The second projects the moire image onto the projector plane. The final part recovers the object's absolute depth using the optical triangulation method. To verify the performance of the proposed sensing system, we conducted a series of experiments on various simple objects; the results show the feasibility of successful perception for the objects treated herein.
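The abstract does not specify how the object-adapted pattern is computed. The sketch below only illustrates the kind of SLM fringe image involved, contrasting a regular vertical pattern with a phase-modulated variant; the radial modulation is our own example of adapting fringes to a symmetric object, not the authors' moire-based procedure:

```python
import numpy as np

def fringe_pattern(height, width, period_px, phase_map=None):
    """Sinusoidal fringe image for an SLM projector.  With phase_map=None
    this is the regular vertical pattern; a nonzero phase_map warps the
    fringes, which is the flavor of pattern an object-adapted method would
    derive from a coarse estimate of the object's shape."""
    x = np.arange(width)[None, :] * np.ones((height, 1))
    phase = 2 * np.pi * x / period_px
    if phase_map is not None:
        phase = phase + phase_map          # object-dependent modulation
    return 0.5 + 0.5 * np.cos(phase)       # intensities in [0, 1]

H, W = 480, 640
regular = fringe_pattern(H, W, period_px=16)

# Hypothetical adaptation: bend the fringes around a radially symmetric
# object, where a plain vertical pattern carries little shape information.
yy, xx = np.mgrid[0:H, 0:W]
radial = np.hypot(yy - H / 2, xx - W / 2) / 50.0
adapted = fringe_pattern(H, W, period_px=16, phase_map=radial)
```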
A major research issue for mobile robots is robust 3D environment sensing for navigation and task execution. To this end, a variety of techniques have been developed for determining 3D scene geometry, such as stereo vision, laser structured light, and laser range finders, but each has its limitations. To overcome them, we introduce a new sensing algorithm based on the moire technique and stereo vision. To verify the performance of this sensor system, we conducted a series of simulations for various simple environments; the results show the feasibility of successful perception in these environments.
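The abstract does not detail how the moire and stereo information are combined. As background on the stereo-vision half only, the standard rectified-stereo triangulation relation converts disparity to depth; the numbers below are hypothetical:

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Classic rectified-stereo relation: depth Z = f * B / d, where f is
    the focal length in pixels, B the baseline, and d the disparity."""
    return focal_px * baseline_m / disparity_px

# A point seen with 8 px disparity by cameras with a 700 px focal length
# and a 10 cm baseline lies 8.75 m away.
z = stereo_depth(8.0, 700.0, 0.10)   # -> 8.75
```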
KEYWORDS: Sensors, Stereo vision systems, 3D vision, Cameras, Mobile robots, Visualization, Environmental sensing, Active sensors, 3D acquisition, Infrared sensors
One of the major research issues associated with 3D range acquisition is the creation of sensor systems that combine various functionalities with small size. A variety of machine vision techniques have been developed for determining 3D scene geometry from 2D images. Among active sensors, the structured lighting method has been widely used because of its robustness to illumination noise and its ability to extract the feature information of interest; among passive sensors, stereo vision is popular for its simple configuration and easy construction. In this work, we propose a novel visual sensor system for 3D range acquisition that uses active and passive techniques simultaneously. The proposed system inherently includes two types of sensors: an active trinocular vision system and a passive stereo vision system. The active vision part employs the structured lighting method with multiple lasers; the stereo vision part is a conventional passive stereo setup. Since each has its own advantages and disadvantages when measuring various objects, we propose sensor fusion algorithms to acquire more reliable range information from the pair. To show how the proposed sensing system can be applied in practice, we mount it on a mobile robot and perform a series of experimental tests for a variety of robot and environment configurations; the sensing results are discussed in detail.
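The fusion algorithms themselves are not given in the abstract. The following is a deliberately simple, hypothetical fusion rule for two range maps on a common pixel grid, meant only to illustrate how an active and a passive measurement might be reconciled pixel by pixel; it is not the paper's algorithm:

```python
import numpy as np

def fuse_ranges(active, passive, max_disagree=0.05):
    """Hypothetical per-pixel fusion of two range maps (NaN marks pixels
    where a sensor returned nothing).  Where both sensors agree within
    max_disagree metres their readings are averaged; where only one
    responds, its value is used; conflicting pixels stay NaN."""
    fused = np.full(active.shape, np.nan)
    a_ok, p_ok = ~np.isnan(active), ~np.isnan(passive)

    both = a_ok & p_ok & (np.abs(active - passive) <= max_disagree)
    fused[both] = 0.5 * (active[both] + passive[both])

    fused[a_ok & ~p_ok] = active[a_ok & ~p_ok]
    fused[p_ok & ~a_ok] = passive[p_ok & ~a_ok]
    return fused

# Example with 2x2 maps: active misses one pixel, passive another.
a = np.array([[1.00, np.nan], [0.50, 2.00]])
p = np.array([[1.02, 0.80], [np.nan, 3.00]])
print(fuse_ranges(a, p))   # averages (0,0), fills gaps, leaves (1,1) NaN
```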
A sensor fusion scheme for mobile robot environment recognition that incorporates range data and contour data is proposed. The ultrasonic sensor provides only a coarse spatial description but guarantees open space (no obstacle) within its sonic cone with relatively high belief. The laser structured light system provides a detailed contour description of the environment but is prone to light noise and easily affected by surface reflectivity. We present a sensor fusion scheme that compensates for the disadvantages of both sensors. Line models from the laser structured light system play a key role in the environment description. The overall fusion process is composed of two stages, noise elimination and belief update, with Dempster-Shafer evidential reasoning applied at each stage. Open-space estimation from sonar range measurements eliminates noisy lines from the laser sensor, and comparing actual sonar data with simulated sonar data enables the two disparate sensors to be fused in a unified feature space. Experiments were conducted to recognize a naturally cluttered indoor environment partially surrounded by window glass. The results demonstrate the effectiveness of the proposed method.
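Since Dempster-Shafer combination is named explicitly, a minimal sketch of Dempster's rule over a two-hypothesis frame (occupied/empty, with mass on the full frame representing ignorance) may help; the mass values and the dict representation below are our own illustration, not the paper's parameters:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination over the frame {O, E}, with the
    ignorance mass on 'OE' (the whole frame).  m1, m2: dicts with keys
    'O' (occupied), 'E' (empty), 'OE' (unknown), each summing to 1."""
    # Conflict mass K: evidence assigned to contradictory singletons.
    K = m1['O'] * m2['E'] + m1['E'] * m2['O']
    norm = 1.0 - K
    return {
        'O': (m1['O'] * m2['O'] + m1['O'] * m2['OE'] + m1['OE'] * m2['O']) / norm,
        'E': (m1['E'] * m2['E'] + m1['E'] * m2['OE'] + m1['OE'] * m2['E']) / norm,
        'OE': (m1['OE'] * m2['OE']) / norm,
    }

# Sonar says "probably empty", laser weakly says "occupied"; after
# combination the stronger sonar evidence dominates.  Numbers are
# hypothetical.
sonar = {'O': 0.1, 'E': 0.7, 'OE': 0.2}
laser = {'O': 0.3, 'E': 0.2, 'OE': 0.5}
print(dempster_combine(sonar, laser))   # E ends up near 0.69
```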