This paper presents a new deep-learning method for video segmentation in quaternion space, partitioning frames into sets of objects, background, and static and dynamic textures. We introduce a novel quaternionic anisotropic gradient (QAG) that combines the color channels with orientations in the image plane; the QAG is computed using local polynomial estimates and the intersection of confidence intervals (ICI) rule. In segmentation tasks the image is usually converted to grayscale, which discards color, saturation, and other color-related information. To avoid this loss, we use the quaternion framework to represent a color image so that all three channels are considered simultaneously when segmenting the RGB image. First, we use the QAGs to extract local orientation information from the color images; second, we apply a neural network to this orientation information to improve the segmentation result. The proposed approach yields clearer and more detailed boundaries for objects of interest. Experimental comparisons with state-of-the-art video segmentation methods demonstrate its effectiveness.
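The quaternion embedding underlying the approach can be illustrated with a minimal sketch. This is not the paper's QAG (which uses local polynomial estimates and the ICI rule); it only shows the assumed core idea of encoding an RGB pixel as a pure quaternion q = R·i + G·j + B·k and differentiating all three channels jointly rather than after a grayscale conversion. The function names and the simple forward-difference gradient are illustrative assumptions.

```python
import numpy as np

def rgb_to_quaternion(img):
    """Embed an RGB image as pure quaternions: q = 0 + R*i + G*j + B*k.

    img: float array of shape (H, W, 3); returns shape (H, W, 4),
    with the real part zero, so all three color channels are carried
    in one hypercomplex value per pixel. (Illustrative assumption,
    not the paper's exact construction.)
    """
    h, w, _ = img.shape
    q = np.zeros((h, w, 4), dtype=np.float64)
    q[..., 1:] = img
    return q

def quaternion_gradient_magnitude(q):
    """Per-pixel magnitude of a simple forward-difference gradient,
    taken jointly over all four quaternion components. This treats
    the color channels simultaneously instead of discarding them
    via a grayscale conversion. (A stand-in for the QAG, which the
    paper computes with local polynomial estimates and the ICI rule.)
    """
    dx = np.diff(q, axis=1, prepend=q[:, :1])   # horizontal differences
    dy = np.diff(q, axis=0, prepend=q[:1, :])   # vertical differences
    return np.sqrt((dx ** 2 + dy ** 2).sum(axis=-1))

# A 2x2 image with a pure-color vertical edge: the joint gradient
# responds at the edge even though a naive grayscale conversion
# could weaken or lose such chromatic transitions.
img = np.zeros((2, 2, 3))
img[:, 1, 0] = 1.0            # right column is pure red
q = rgb_to_quaternion(img)
g = quaternion_gradient_magnitude(q)
print(g[0, 0], g[0, 1])       # zero in the flat region, positive at the edge
```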