Deep Video Codec Control for Vision Models

Abstract

Standardized lossy video coding is at the core of almost all real-world video processing pipelines. Rate control enables standard codecs to adapt to varying network bandwidth or storage constraints. However, standard video codecs (e.g., H.264) and their rate control modules aim to minimize video distortion w.r.t. human quality assessment. We demonstrate empirically that standard-coded videos vastly deteriorate the performance of deep vision models. To overcome this deterioration, this paper presents the first end-to-end learnable deep video codec control that considers both bandwidth constraints and downstream deep vision performance, while adhering to existing standardization. We demonstrate that our approach better preserves downstream deep vision performance than traditional standard video coding.
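The following is a minimal, hypothetical sketch (not the authors' code) of how one could probe the abstract's empirical claim: re-encode a clip with standard H.264 rate control and compare a pretrained segmentation model's predictions on the original versus the coded frames. It assumes ffmpeg is on the PATH and uses torchvision; the file names and the 500 kbit/s target bitrate are placeholders.

# Hypothetical illustration, not the paper's method: quantify how H.264 coding
# at a fixed target bitrate changes the predictions of a pretrained vision model.
import subprocess

import torch
from torchvision.io import read_video
from torchvision.models.segmentation import deeplabv3_resnet50

SRC, CODED = "raw_clip.mp4", "coded_clip.mp4"  # placeholder file names

# Re-encode the clip with standard H.264 rate control at a low target bitrate.
subprocess.run(
    ["ffmpeg", "-y", "-i", SRC, "-c:v", "libx264", "-b:v", "500k", CODED],
    check=True,
)

model = deeplabv3_resnet50(weights="DEFAULT").eval()
mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)


def predict(path: str) -> torch.Tensor:
    """Run the segmentation model on every frame; return per-pixel class maps."""
    frames, _, _ = read_video(path, pts_unit="sec")    # (T, H, W, C), uint8
    x = frames.permute(0, 3, 1, 2).float() / 255.0     # (T, C, H, W)
    x = (x - mean) / std
    with torch.no_grad():
        out = model(x)["out"]                          # (T, num_classes, H, W)
    return out.argmax(dim=1)                           # (T, H, W)


raw_pred, coded_pred = predict(SRC), predict(CODED)
T = min(len(raw_pred), len(coded_pred))                # guard against frame-count drift

# Fraction of pixels whose predicted class survives standard coding: a simple
# proxy for the downstream-performance drop discussed in the abstract.
agreement = (raw_pred[:T] == coded_pred[:T]).float().mean().item()
print(f"prediction agreement after H.264 coding: {agreement:.3f}")

Lowering the target bitrate in the ffmpeg call should reduce the agreement score, which is the kind of vision-performance degradation the proposed codec control is designed to counteract.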

Cite

Text

Reich et al. "Deep Video Codec Control for Vision Models." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024. doi:10.1109/CVPRW63382.2024.00582

Markdown

[Reich et al. "Deep Video Codec Control for Vision Models." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024.](https://mlanthology.org/cvprw/2024/reich2024cvprw-deep/) doi:10.1109/CVPRW63382.2024.00582

BibTeX

@inproceedings{reich2024cvprw-deep,
  title     = {{Deep Video Codec Control for Vision Models}},
  author    = {Reich, Christoph and Debnath, Biplob and Patel, Deep and Prangemeier, Tim and Cremers, Daniel and Chakradhar, Srimat},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2024},
  pages     = {5732--5741},
  doi       = {10.1109/CVPRW63382.2024.00582},
  url       = {https://mlanthology.org/cvprw/2024/reich2024cvprw-deep/}
}