For robust and safe cross-country driving, an autonomous ground vehicle must be able to handle conflicts, which may arise from limitations of perception performance, from the dynamics of the vehicle's active camera head, and from the feasibility of locomotion maneuvers. This paper describes the interaction and coordination of image processing, gaze control and behavior decision. The behavior decision module specifies perception tasks for the image processing experts according to the mission, the capabilities of the vehicle and the knowledge about the external world accumulated up to the present time. Depending on the perception task it receives, each image processing expert specifies combinations of so-called regions of attention (RoAs) for each object in 3D object coordinates. These RoAs cover the relevant object parts and must be visible with the resolution and in the manner required by the measurement techniques applied. The gaze control unit analyzes the RoA combinations of all image processing experts in order to plan, optimize and perform a sequence of smooth pursuits interrupted by saccades. This dynamic interaction has been demonstrated in several complex and scalable autonomous missions with the UBM test vehicle VAMORS. In the mission described in this paper, the vehicle encounters an unexpected ditch of unknown size and position, forcing it into reactive behavior with respect to locomotion, gaze control and image processing.
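To make the interface between behavior decision, the image processing experts and gaze control more concrete, the following minimal Python sketch models a region of attention in 3D object coordinates and an expert that answers a perception task with a combination of RoAs. All names, fields and numerical values are assumptions for illustration only; the paper does not specify these data structures.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class RegionOfAttention:
    # Region of attention (RoA) in 3D object coordinates (assumed fields; the
    # abstract only states that RoAs cover relevant object parts and carry
    # resolution requirements for the measurement technique applied).
    object_id: str                           # object the RoA belongs to
    center_xyz: Tuple[float, float, float]   # center of the covered part [m]
    extent_xyz: Tuple[float, float, float]   # size of the covered part [m]
    min_resolution: float                    # required image resolution [pixel/m]

@dataclass
class PerceptionTask:
    # Task handed from the behavior decision module to an image processing expert.
    expert: str          # hypothetical expert name, e.g. a ditch detector
    object_id: str
    relevance: float     # situation-dependent relevance assigned by behavior decision

def specify_roas(task: PerceptionTask) -> List[RegionOfAttention]:
    # Illustrative expert response: request two RoAs on the tasked object,
    # e.g. the near and far edge of a ditch, with different resolution needs.
    return [
        RegionOfAttention(task.object_id, (5.0, 0.0, 0.0), (4.0, 2.0, 0.5), 80.0),
        RegionOfAttention(task.object_id, (9.0, 0.0, 0.0), (4.0, 2.0, 0.5), 40.0),
    ]

if __name__ == "__main__":
    task = PerceptionTask(expert="ditch_detector", object_id="ditch_01", relevance=1.0)
    for roa in specify_roas(task):
        print(roa)
```

The gaze control unit would receive such RoA combinations from all active experts and plan its pursuit/saccade sequence so that each requested region is imaged at or above its required resolution.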
For robust and safe behavior in natural environments, an autonomous vehicle needs an elaborate vision sensor as its main source of information. The vision sensor must be adaptable to the external situation, the mission, the capabilities of the vehicle and the knowledge about the external world accumulated up to the present time. In the EMS-Vision system, this vision sensor consists of four cameras with different focal lengths mounted on a highly dynamic pan-tilt camera head. Image processing, gaze control and behavior decision interact with each other in a closed loop. The image processing experts specify so-called regions of attention (RoAs) for each object in 3D object coordinates. These RoAs must be visible with the resolution required by the measurement techniques applied. The behavior decision module specifies the relevance of objects such as road segments, crossings or landmarks in the situation context. The gaze control unit uses all of this information to plan, optimize and perform a sequence of smooth pursuits interrupted by saccades; the sequence with the best information gain is performed. The information gain depends on the relevance of objects or object parts, the duration of the smooth pursuit maneuvers, the quality of perception and the number of saccades. The functioning of the EMS-Vision system is demonstrated in a complex and scalable autonomous mission with the UBM test vehicle VAMORS.
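The selection criterion sketched above, performing the candidate gaze sequence with the best information gain, can be illustrated with a small scoring function. The concrete weighting below is an assumption; the abstract only states that the gain depends on object relevance, the duration of the smooth pursuit maneuvers, the quality of perception and the number of saccades.

```python
from typing import Dict, List, Sequence

def information_gain(relevances: Sequence[float],
                     pursuit_durations: Sequence[float],
                     perception_quality: Sequence[float],
                     num_saccades: int,
                     saccade_cost: float = 0.2) -> float:
    # Hypothetical gain: sum over the smooth-pursuit phases of
    # relevance * duration * quality, minus a fixed cost per saccade.
    gain = sum(r * d * q for r, d, q in
               zip(relevances, pursuit_durations, perception_quality))
    return gain - saccade_cost * num_saccades

def best_sequence(candidates: List[Dict]) -> Dict:
    # Gaze control would perform the candidate sequence with the highest gain.
    return max(candidates, key=lambda c: information_gain(**c))

if __name__ == "__main__":
    candidates = [
        # two pursuits on different objects, linked by one saccade
        {"relevances": [1.0, 0.6], "pursuit_durations": [2.0, 1.0],
         "perception_quality": [0.9, 0.7], "num_saccades": 1},
        # one long pursuit on the most relevant object, no saccade
        {"relevances": [1.0], "pursuit_durations": [3.0],
         "perception_quality": [0.9], "num_saccades": 0},
    ]
    print(best_sequence(candidates))
```

In this toy example the single long pursuit wins, because the saccade cost and the lower perception quality of the second pursuit outweigh the additional relevance it would cover.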