Deep Learning Based Spatial-Temporal In-Loop Filtering for Versatile Video Coding
Abstract
Existing deep learning-based Versatile Video Coding (VVC) in-loop filtering (ILF) enhancement works mainly focus on learning a one-to-one mapping between the reconstructed and the original video frame, ignoring information already available at the encoder and decoder. This work proposes a deep learning-based Spatial-Temporal In-Loop Filtering (STILF) method that exploits coding information to improve VVC in-loop filtering. Each CTU is filtered by one of three modes: the VVC default in-loop filtering, a self-enhancement convolutional neural network (CNN) guided by the CU partition map (SEC), or a reference-based enhancement CNN guided by optical flow (REO). The bits indicating the chosen ILF mode are encoded under the CABAC regular mode. Experimental results show BD-rate reductions of 3.78%, 6.34%, 6%, and 4.64% under the All Intra, Low Delay P, Low Delay B, and Random Access configurations, respectively.
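The per-CTU mode decision described above can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it only shows the general idea of an encoder picking, for each CTU, whichever of the three candidate reconstructions (default ILF, SEC, REO) minimizes distortion against the original block, with the winning index then signaled to the decoder. The candidate values and the MSE criterion here are hypothetical placeholders.

```python
def mse(a, b):
    # Mean squared error between two equally sized sample lists.
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def select_ilf_mode(original, candidates):
    # Return the index of the candidate CTU reconstruction closest
    # to the original samples; this index would be CABAC-coded.
    errors = [mse(original, c) for c in candidates]
    return errors.index(min(errors))

# Toy 1-D "CTU" samples (hypothetical values for illustration only).
orig        = [0.0, 1.0, 2.0, 3.0]
default_ilf = [0.3, 1.3, 2.3, 3.3]  # VVC default in-loop filtering output
sec         = [0.1, 1.1, 2.1, 3.1]  # self-enhancement CNN output (placeholder)
reo         = [0.2, 1.2, 2.2, 3.2]  # reference-based CNN output (placeholder)

mode = select_ilf_mode(orig, [default_ilf, sec, reo])  # -> 1 (SEC wins here)
```

In a real encoder the decision would typically be rate-distortion based (distortion plus the signaling cost of the mode bits) rather than pure MSE, but the selection structure is the same.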
Cite
Text
Pham et al. "Deep Learning Based Spatial-Temporal In-Loop Filtering for Versatile Video Coding." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021. doi:10.1109/CVPRW53098.2021.00206
Markdown
[Pham et al. "Deep Learning Based Spatial-Temporal In-Loop Filtering for Versatile Video Coding." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021.](https://mlanthology.org/cvprw/2021/pham2021cvprw-deep/) doi:10.1109/CVPRW53098.2021.00206
BibTeX
@inproceedings{pham2021cvprw-deep,
title = {{Deep Learning Based Spatial-Temporal In-Loop Filtering for Versatile Video Coding}},
author = {Pham, Chi Do-Kim and Fu, Chen and Zhou, Jinjia},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2021},
pages = {1861-1865},
doi = {10.1109/CVPRW53098.2021.00206},
url = {https://mlanthology.org/cvprw/2021/pham2021cvprw-deep/}
}