Self-Feature Distillation with Uncertainty Modeling for Degraded Image Recognition

Abstract

Despite their remarkable performance on high-quality (HQ) data, the accuracy of deep image recognition models degrades rapidly in the presence of low-quality (LQ) images. Both feature de-drifting and quality-agnostic models have been developed to make the features extracted from degraded images closer to those of HQ images. These methods usually adopt the L2-norm as a constraint, which treats every pixel in the feature map equally and may therefore reconstruct difficult regions relatively poorly. To address this issue, in this paper we propose a novel self-feature distillation method with uncertainty modeling to better produce HQ-like features from low-quality observations. Specifically, within a standard recognition model, we use the HQ features to distill the corresponding degraded ones, and we model uncertainty according to the diversity of degradation sources so as to adaptively increase the weights of feature regions that are difficult to recover in the distillation loss. Experiments demonstrate that our method extracts HQ-like features even when the inputs are degraded images, making the model more robust than competing approaches.
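The abstract only sketches the idea of up-weighting hard-to-recover feature regions in the distillation loss; the exact weighting scheme is not given there. As a rough illustration of the principle, here is a minimal NumPy sketch of a distillation loss in which an uncertainty map boosts the penalty on uncertain regions. The function name, the `1 + u / mean(u)` weighting, and the uncertainty input are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def weighted_distill_loss(f_hq, f_lq, uncertainty, eps=1e-8):
    """Illustrative uncertainty-weighted feature distillation loss.

    f_hq, f_lq  : feature maps from the HQ and degraded inputs (same shape).
    uncertainty : non-negative map estimating how hard each region is to
                  recover (assumed given here; the paper learns it from the
                  diversity of degradation sources).

    Regions with above-average uncertainty receive a weight > 2, so the
    squared-error penalty there is amplified relative to a plain L2 loss.
    """
    residual = (f_hq - f_lq) ** 2
    # Normalize so the weight grows with relative (not absolute) uncertainty.
    w = 1.0 + uncertainty / (uncertainty.mean() + eps)
    return float(np.mean(w * residual))
```

With a uniform uncertainty map this reduces to a rescaled L2 loss; concentrating the uncertainty on high-error regions increases their contribution, which is the behavior the abstract describes.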

Cite

Text

Yang et al. "Self-Feature Distillation with Uncertainty Modeling for Degraded Image Recognition." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-20053-3_32

Markdown

[Yang et al. "Self-Feature Distillation with Uncertainty Modeling for Degraded Image Recognition." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/yang2022eccv-selffeature/) doi:10.1007/978-3-031-20053-3_32

BibTeX

@inproceedings{yang2022eccv-selffeature,
  title     = {{Self-Feature Distillation with Uncertainty Modeling for Degraded Image Recognition}},
  author    = {Yang, Zhou and Dong, Weisheng and Li, Xin and Wu, Jinjian and Li, Leida and Shi, Guangming},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2022},
  doi       = {10.1007/978-3-031-20053-3_32},
  url       = {https://mlanthology.org/eccv/2022/yang2022eccv-selffeature/}
}