Presentation + Paper
Quality-aware CNN-based in-loop filter for video coding
1 August 2021
Abstract
The state-of-the-art video coding standard, Versatile Video Coding (VVC) or H.266, has demonstrated superior coding efficiency over its predecessor HEVC/H.265. In this paper, a novel in-loop filter based on a convolutional neural network (CNN) is presented to further improve coding efficiency over VVC. In this filter, a single NN model processes multiple video components simultaneously. In addition, with a quality map generated for each video component as a network input, the same single NN model is capable of processing videos of different qualities and resolutions while maintaining coding efficiency, which significantly reduces the overall network complexity. Simulation results show that the proposed approach provides average BD-rate savings of 6.27%, 18.78%, and 20.42% under the AI configuration, and average BD-rate savings of 5.18%, 21.95%, and 22.13% under the RA configuration, for the Y, Cb, and Cr components, respectively.
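To make the quality-map idea concrete, below is a minimal sketch of how a single CNN filter could take reconstructed Y/Cb/Cr planes together with per-component quality maps (e.g., normalized QP planes) as input and predict a residual correction. The layer counts, channel widths, QP normalization, and class and parameter names are illustrative assumptions and do not reproduce the network described in the paper.

```python
# Illustrative sketch only: a residual CNN conditioned on per-component
# quality maps. Architecture details are assumptions, not the paper's network.
import torch
import torch.nn as nn


class QualityAwareFilter(nn.Module):
    """Filters Y, Cb, and Cr jointly, conditioned on per-component quality maps."""

    def __init__(self, channels: int = 64, num_blocks: int = 8):
        super().__init__()
        # Input: 3 reconstructed components + 3 quality maps = 6 channels.
        self.head = nn.Conv2d(6, channels, kernel_size=3, padding=1)
        self.body = nn.Sequential(*[
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            )
            for _ in range(num_blocks)
        ])
        # Output a residual for each of the three components.
        self.tail = nn.Conv2d(channels, 3, kernel_size=3, padding=1)

    def forward(self, recon: torch.Tensor, quality_map: torch.Tensor) -> torch.Tensor:
        # recon:       (N, 3, H, W) reconstructed Y/Cb/Cr (chroma assumed upsampled to luma size)
        # quality_map: (N, 3, H, W) e.g. a normalized QP value broadcast over each component
        x = self.head(torch.cat([recon, quality_map], dim=1))
        for block in self.body:
            x = x + block(x)  # residual connections inside the body
        return recon + self.tail(x)  # residual correction of the reconstruction


if __name__ == "__main__":
    # One model handles different qualities: the QP enters as an extra input plane.
    model = QualityAwareFilter()
    recon = torch.rand(1, 3, 128, 128)
    qp = 32
    qmap = torch.full((1, 3, 128, 128), qp / 63.0)  # QP scaled to [0, 1] (assumed convention)
    filtered = model(recon, qmap)
    print(filtered.shape)  # torch.Size([1, 3, 128, 128])
```

Feeding the quality map as an input plane, rather than training one model per QP, is what lets a single set of weights cover multiple quality levels, which is the complexity reduction the abstract refers to.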
Conference Presentation
© (2021) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Wei Chen, Xiaoyu Xiu, Xianglin Wang, Yi-Wen Chen, Hong-Jheng Jhu, and Che-Wei Kuo "Quality-aware CNN-based in-loop filter for video coding", Proc. SPIE 11842, Applications of Digital Image Processing XLIV, 1184203 (1 August 2021); https://doi.org/10.1117/12.2593380
KEYWORDS
Video, Video coding, Video processing, Artificial intelligence, Convolution, Quantization, Convolutional neural networks