Learned Low Bitrate Video Compression with Space-Time Super-Resolution

Abstract

This paper presents a learned low bitrate video compression framework that consists of pre-processing, compression, and post-processing stages. In the pre-processing stage, the source videos are optionally reduced to low-resolution or low-frame-rate versions to better fit the limited bandwidth. In the compression stage, inter-frame prediction is performed using deformable convolution (DCN). The predicted frame is then used as a temporal condition for compressing the current frame. In the post-processing stage, the decoded videos are fed into a Space-Time Super-Resolution module, which restores them to the original spatial and temporal resolutions. Experimental results under the CLIC22 video test conditions demonstrate that the proposed method achieves better objective and subjective quality at low bitrates. Our team name is PKUSZ-LVC.
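For illustration, below is a minimal PyTorch sketch of the three-stage pipeline described in the abstract. All module and helper names (DeformablePredictor, compress_sequence, space_time_super_resolution) are hypothetical, the conditional codec and entropy coding are stubbed out, and only the control flow (spatial/temporal down-sampling, DCN-based prediction as a temporal condition, then space-time super-resolution) is mirrored; it is not the paper's actual network.

# Hypothetical sketch of the pre-processing / compression / post-processing
# pipeline. Names and hyper-parameters are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import DeformConv2d


class DeformablePredictor(nn.Module):
    """Predict the current frame from the previous decoded frame via DCN."""

    def __init__(self, channels: int = 3, feat: int = 32):
        super().__init__()
        # Offsets are estimated from the concatenated reference/current frames.
        self.offset_net = nn.Conv2d(2 * channels, 2 * 3 * 3, 3, padding=1)
        self.dcn = DeformConv2d(channels, feat, 3, padding=1)
        self.fuse = nn.Conv2d(feat, channels, 3, padding=1)

    def forward(self, ref: torch.Tensor, cur: torch.Tensor) -> torch.Tensor:
        offset = self.offset_net(torch.cat([ref, cur], dim=1))
        aligned = self.dcn(ref, offset)      # motion-compensated features
        return self.fuse(aligned)            # predicted (conditioning) frame


def compress_sequence(frames, predictor, scale=0.5, keep_every=2):
    """Pre-process (down-sample in space/time), then code each frame
    conditioned on the DCN prediction. Entropy coding is omitted."""
    low_res = [F.interpolate(f, scale_factor=scale, mode="bicubic",
                             align_corners=False) for f in frames[::keep_every]]
    decoded = [low_res[0]]                   # intra-coded first frame (stub)
    for cur in low_res[1:]:
        pred = predictor(decoded[-1], cur)   # temporal condition
        residual = cur - pred                # stand-in for conditional coding
        decoded.append(pred + residual)
    return decoded


def space_time_super_resolution(decoded, scale=2):
    """Post-processing stub: restore spatial size and frame rate."""
    upscaled = [F.interpolate(f, scale_factor=scale, mode="bicubic",
                              align_corners=False) for f in decoded]
    restored = []
    for a, b in zip(upscaled[:-1], upscaled[1:]):
        restored.append(a)
        restored.append(0.5 * (a + b))       # naive temporal interpolation
    restored.append(upscaled[-1])
    return restored


if __name__ == "__main__":
    frames = [torch.rand(1, 3, 128, 128) for _ in range(8)]
    predictor = DeformablePredictor()
    decoded = compress_sequence(frames, predictor)
    restored = space_time_super_resolution(decoded)
    print(len(restored), restored[0].shape)  # restored frames at 128x128

In the sketch, the residual is added back losslessly, so it only demonstrates where a learned conditional codec would sit; a real space-time super-resolution module would also replace the naive frame averaging used for temporal upsampling.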

Cite

Text

Yang et al. "Learned Low Bitrate Video Compression with Space-Time Super-Resolution." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022. doi:10.1109/CVPRW56347.2022.00192

Markdown

[Yang et al. "Learned Low Bitrate Video Compression with Space-Time Super-Resolution." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022.](https://mlanthology.org/cvprw/2022/yang2022cvprw-learned/) doi:10.1109/CVPRW56347.2022.00192

BibTeX

@inproceedings{yang2022cvprw-learned,
  title     = {{Learned Low Bitrate Video Compression with Space-Time Super-Resolution}},
  author    = {Yang, Jiayu and Yang, Chunhui and Xiong, Fei and Wang, Feng and Wang, Ronggang},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2022},
  pages     = {1785--1789},
  doi       = {10.1109/CVPRW56347.2022.00192},
  url       = {https://mlanthology.org/cvprw/2022/yang2022cvprw-learned/}
}