In this paper we present a new model-based feature matching method for an object recognition system. The matching takes place in a 2D image space by comparing a projected image of a 3D model with a sensor-extracted image of an actual target. The proposed method can be used with images generated by a wide variety of camera and radar sensors; here we focus on camera images, with some discussion of synthetic aperture radar images. The effectiveness of the method is demonstrated using point features only; an extension to include region features would require minor, not structural, revisions to the proposed method. The method completes the target recognition process in three stages. Its inputs are a model-projected image, a sensor-extracted image, an estimate of the current sensor pose with respect to a reference coordinate frame, and the Jacobian associated with that pose estimate, which relates 3D target features to 2D image features. The first stage uses geometric information from the target model to limit the number of candidate corresponding feature sets; the second stage generates a set of possible sensor pose changes by solving a set of optimization problems; and the final stage selects the `best' pose change among all the candidates. This change of sensor pose is added to the current sensor pose to form a new sensor location and orientation. The revised pose can then be used to reproject the model features and subsequently compute a compatibility measure between the model-projected and sensor-extracted images, which quantifies the reliability of the resulting target recognition. In this paper we describe each of the three stages of the method and provide experimental results to demonstrate its validity.
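The final stage and the subsequent pose update can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pose parameterisation `(tx, ty, tz, yaw)`, the pinhole `project` function, and the nearest-point `compatibility` measure are all simplifying assumptions introduced here; the actual method would use the full 6-DOF pose and the sensor-specific projection model.

```python
import numpy as np

def project(points_3d, pose, focal=1.0):
    """Pinhole projection of 3D model points under a hypothetical
    pose = (tx, ty, tz, yaw) parameterisation (illustrative only)."""
    tx, ty, tz, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    cam = points_3d @ R.T + np.array([tx, ty, tz])
    return focal * cam[:, :2] / cam[:, 2:3]  # perspective divide by depth

def compatibility(model_img, sensor_img):
    """Mean distance from each projected model point to its nearest
    sensor-extracted point; lower means better agreement (assumed metric)."""
    d = np.linalg.norm(model_img[:, None, :] - sensor_img[None, :, :], axis=2)
    return d.min(axis=1).mean()

def refine_pose(pose, candidate_deltas, points_3d, sensor_img):
    """Final stage: score every candidate pose change by reprojecting the
    model under the updated pose, and keep the best-scoring change."""
    scored = [(compatibility(project(points_3d, pose + d), sensor_img), d)
              for d in candidate_deltas]
    best_score, best_delta = min(scored, key=lambda t: t[0])
    return pose + best_delta, best_score
```

For example, if the sensor image was generated under a slightly different pose than the current estimate, and the set of candidate pose changes contains the true correction, `refine_pose` returns the corrected pose together with a compatibility score lower than that of the uncorrected estimate.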