Visual Recognition in Very Low-Quality Settings: Delving into the Power of Pre-Training
Abstract
Visual recognition from very low-quality images is an extremely challenging task with great practical value. While deep networks have been applied extensively to low-quality image restoration and to high-quality image recognition, little work has addressed the important problem of recognition from very low-quality images. This paper presents a degradation-robust pre-training approach for improving deep learning models in this direction. Extensive experiments on different datasets validate the effectiveness of the proposed method.
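The abstract does not spell out the pre-training procedure, but the core idea of degradation-robust pre-training can be illustrated as follows: synthesize very low-quality versions of clean training images and pre-train the recognition network on them before fine-tuning. The sketch below is a minimal illustration of that general idea, not the authors' released code; the `degrade` helper, the 4x downsampling, and the Gaussian noise model are assumptions chosen for demonstration.

```python
# Minimal sketch (assumed, not the paper's exact pipeline) of degradation-robust
# pre-training: degrade clean images on the fly, then train the recognizer on them.
import torch
import torch.nn.functional as F

def degrade(images: torch.Tensor, scale: int = 4, noise_std: float = 0.1) -> torch.Tensor:
    """Simulate very low-quality inputs: downsample, upsample back, add noise.

    The specific degradations here (bilinear 4x downsampling + Gaussian noise)
    are illustrative assumptions, not the paper's stated degradation model.
    """
    b, c, h, w = images.shape
    low = F.interpolate(images, scale_factor=1.0 / scale,
                        mode="bilinear", align_corners=False)
    back = F.interpolate(low, size=(h, w),
                         mode="bilinear", align_corners=False)
    noisy = back + noise_std * torch.randn_like(back)
    return noisy.clamp(0.0, 1.0)

def pretrain_step(model, optimizer, clean_batch, labels):
    """One pre-training step on synthetically degraded images."""
    degraded = degrade(clean_batch)        # degradation applied on the fly
    logits = model(degraded)
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After such pre-training, the network would typically be fine-tuned on the target recognition task; applying the degradation on the fly (rather than precomputing a fixed degraded dataset) exposes the model to fresh noise realizations each epoch.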
Cite
Text
Cheng et al. "Visual Recognition in Very Low-Quality Settings: Delving into the Power of Pre-Training." AAAI Conference on Artificial Intelligence, 2018. doi:10.1609/AAAI.V32I1.12131
Markdown
[Cheng et al. "Visual Recognition in Very Low-Quality Settings: Delving into the Power of Pre-Training." AAAI Conference on Artificial Intelligence, 2018.](https://mlanthology.org/aaai/2018/cheng2018aaai-visual/) doi:10.1609/AAAI.V32I1.12131
BibTeX
@inproceedings{cheng2018aaai-visual,
title = {{Visual Recognition in Very Low-Quality Settings: Delving into the Power of Pre-Training}},
author = {Cheng, Bowen and Liu, Ding and Wang, Zhangyang and Zhang, Haichao and Huang, Thomas S.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2018},
pages = {8065--8066},
doi = {10.1609/AAAI.V32I1.12131},
url = {https://mlanthology.org/aaai/2018/cheng2018aaai-visual/}
}