Paper
Adaptive spatial and temporal aggregation for table tennis shot recognition
7 June 2023
Sravani Yenduri, Vishnu Chalavadi, and Krishna Mohan C.
Proceedings Volume 12701, Fifteenth International Conference on Machine Vision (ICMV 2022); 127010H (2023) https://doi.org/10.1117/12.2679426
Event: Fifteenth International Conference on Machine Vision (ICMV 2022), 2022, Rome, Italy
Abstract
Action recognition is one of the challenging video understanding tasks in computer vision. Although there has been extensive research on classifying coarse-grained actions, existing methods are still limited in differentiating actions with low inter-class and high intra-class variation. In particular, the sport of table tennis involves shots with high inter-class similarity, subtle variations, occlusion, and viewpoint variations. While a few datasets are available for event spotting and shot recognition, these benchmarks are mostly recorded in a constrained environment with a clear view of the shots executed by players. In this paper, we introduce the Table tennis shots 1.0 dataset, consisting of 9000 videos of 6 fine-grained actions collected in an unconstrained manner to analyze the performance of both players. To effectively recognise these different types of table tennis shots, we propose an adaptive spatial and temporal aggregation method that can handle the spatial and temporal interactions arising from the subtle variations among shots and low inter-class variation. Our method consists of three components: (i) a feature extraction module, (ii) a spatial aggregation network, and (iii) a temporal aggregation network. The feature extraction module is a 3D convolutional neural network (3D-CNN) that captures the spatial and temporal characteristics of table tennis shots. To efficiently capture the interactions among the elements of the extracted 3D-CNN feature maps, we employ a spatial aggregation network to obtain a compact spatial representation. We then replace the final global average pooling (GAP) layer with the temporal aggregation network to overcome the loss of motion information caused by averaging temporal features. This temporal aggregation network utilizes the attention mechanism of bidirectional encoder representations from Transformers (BERT) to effectively model the significant temporal interactions among the shots. We demonstrate that our proposed approach improves the performance of existing 3D-CNN methods by ~10% on the Table tennis shots 1.0 dataset. We also show the performance of our approach on other action recognition datasets, namely UCF-101 and HMDB-51.
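As a rough illustration of the three-stage pipeline described in the abstract, the following PyTorch sketch wires together a small 3D-CNN backbone, a spatial pooling step standing in for the spatial aggregation network, and a Transformer encoder over the time axis standing in for the BERT-based temporal aggregation that replaces global average pooling. The backbone, layer sizes, number of attention heads, and encoder depth are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the described pipeline (NOT the paper's implementation):
# (i) a 3D-CNN feature extractor, (ii) spatial aggregation that collapses each
# frame's spatial grid, (iii) temporal aggregation via self-attention over
# frames in place of global average pooling. All sizes are assumptions.
import torch
import torch.nn as nn

class ShotRecognizer(nn.Module):
    def __init__(self, num_classes=6, feat_dim=256):
        super().__init__()
        # (i) feature extraction: small 3D-CNN stand-in for the backbone
        self.backbone = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),
            nn.Conv3d(64, feat_dim, kernel_size=3, padding=1), nn.ReLU(),
        )
        # (ii) spatial aggregation: reduce each frame's H x W grid to one vector
        self.spatial_pool = nn.AdaptiveAvgPool3d((None, 1, 1))
        # (iii) temporal aggregation: self-attention over frames instead of GAP
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, clip):                      # clip: (B, 3, T, H, W)
        feat = self.backbone(clip)                # (B, C, T, H', W')
        feat = self.spatial_pool(feat)            # (B, C, T, 1, 1)
        feat = feat.flatten(2).transpose(1, 2)    # (B, T, C) frame tokens
        feat = self.temporal(feat)                # attention over the time axis
        return self.classifier(feat.mean(dim=1))  # (B, num_classes)

# Example usage on a random 16-frame clip:
# logits = ShotRecognizer()(torch.randn(2, 3, 16, 112, 112))
```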
© (2023) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Sravani Yenduri, Vishnu Chalavadi, and Krishna Mohan C. "Adaptive spatial and temporal aggregation for table tennis shot recognition", Proc. SPIE 12701, Fifteenth International Conference on Machine Vision (ICMV 2022), 127010H (7 June 2023); https://doi.org/10.1117/12.2679426
KEYWORDS
3D modeling, RGB color model, Video, Action recognition, Transformers, Motion models, Convolution