Method of gaze extraction in bionic vision
Yingni Duan, Guozhu Li, Yanzi Deng
Abstract

Aiming at the low efficiency of information processing in robot vision systems, our work studies how to extract fixation points during task execution so that salient, task-relevant target regions in the scene can be located. First, the architecture of a gaze extraction model for bionic vision is proposed, consisting of a spatial visual saliency model and a task-driven fixation point extraction model. The task-driven fixation extraction process is formulated as a closed-loop control problem. To couple exploration with the sampling process, a Q-learning algorithm with a stochastic policy is adopted, providing top-down, task-driven gaze extraction in the temporal dimension. It is fused with the spatially based visual saliency model to form a spatiotemporal hybrid gaze extraction model that determines the fixation point in the final image. Finally, our qualitative image visualization experiment indicates that the fixation points extracted by the model over two consecutive frames are close to the ground truth obtained with an eye tracker. Quantitative area under the curve (AUC) and average angular error (AAE) results confirm the effectiveness of the model in predicting and extracting fixation points.
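
A minimal Python sketch of the spatiotemporal fusion described above, assuming the image is discretized into a grid of candidate fixation cells. The grid size, fusion weight, epsilon-greedy parameters, and reward design below are illustrative assumptions, not the authors' implementation; only the fusion of a bottom-up saliency map with a task-driven Q-value map and the one-step Q-learning update follow the structure given in the abstract.

    import numpy as np

    GRID = 16            # assumed: image divided into GRID x GRID candidate fixation cells
    ALPHA = 0.5          # assumed fusion weight between saliency and task-value maps
    EPSILON = 0.1        # exploration rate of the stochastic (epsilon-greedy) policy
    LEARNING_RATE = 0.1  # assumed Q-learning step size
    DISCOUNT = 0.9       # assumed discount factor

    rng = np.random.default_rng(0)

    # Q[s, a]: task-driven value of fixating cell a, given the current fixation cell s.
    Q = np.zeros((GRID * GRID, GRID * GRID))

    def select_fixation(saliency_map, state):
        """Fuse the bottom-up saliency map with the Q values and pick a fixation cell."""
        s = saliency_map.flatten()
        s = (s - s.min()) / (np.ptp(s) + 1e-8)        # normalize saliency to [0, 1]
        q = Q[state].copy()
        q = (q - q.min()) / (np.ptp(q) + 1e-8)        # normalize task values to [0, 1]
        fused = ALPHA * s + (1.0 - ALPHA) * q         # spatiotemporal hybrid map
        if rng.random() < EPSILON:                    # stochastic policy: explore
            return int(rng.integers(GRID * GRID))
        return int(np.argmax(fused))                  # otherwise exploit the fused map

    def q_update(state, action, reward, next_state):
        """Standard one-step Q-learning update (reward design is task-specific)."""
        td_target = reward + DISCOUNT * Q[next_state].max()
        Q[state, action] += LEARNING_RATE * (td_target - Q[state, action])

    # Example: one simulated step on a random saliency map.
    saliency = rng.random((GRID, GRID))
    state = 0
    action = select_fixation(saliency, state)         # chosen fixation cell index
    q_update(state, action, reward=1.0, next_state=action)

In a full pipeline, each video frame would supply saliency_map, and the reward would encode task progress (e.g., whether the fixated cell contains the target); the abstract does not specify the authors' state encoding or reward function.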

© 2023 SPIE and IS&T
Yingni Duan, Guozhu Li, and Yanzi Deng "Method of gaze extraction in bionic vision," Journal of Electronic Imaging 32(6), 062503 (4 January 2023). https://doi.org/10.1117/1.JEI.32.6.062503
Received: 8 October 2022; Accepted: 13 December 2022; Published: 4 January 2023
KEYWORDS
Visual process modeling, Visualization, Feature extraction, Data modeling, Eye, Eye models, Eye tracking