Worst Case Matters for Few-Shot Recognition
Abstract
Few-shot recognition learns a recognition model with very few (e.g., 1 or 5) images per category, and current few-shot learning methods focus on improving the average accuracy over many episodes. We argue that in real-world applications we may often run only one episode instead of many, and hence maximizing the worst-case accuracy is more important than maximizing the average accuracy. We empirically show that a high average accuracy does not necessarily mean a high worst-case accuracy. Since the worst-case accuracy cannot be optimized directly, we propose to reduce the standard deviation and increase the average accuracy simultaneously. In turn, we devise two strategies from the bias-variance tradeoff perspective to implicitly reach this goal: a simple yet effective stability regularization (SR) loss together with model ensemble to reduce variance during fine-tuning, and an adaptability calibration mechanism to reduce the bias. Extensive experiments on benchmark datasets demonstrate the effectiveness of the proposed strategies, which outperform current state-of-the-art methods by a significant margin in terms of not only average, but also worst-case accuracy.
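The abstract's central point — that two methods with the same average accuracy can differ sharply in worst-case accuracy, and that lowering the standard deviation across episodes improves the worst case — can be illustrated with a minimal sketch. The per-episode accuracy values below are hypothetical, chosen only to show the effect:

```python
import statistics

def episode_stats(accs):
    """Summarize per-episode accuracies: the average that most few-shot
    work reports, the standard deviation across episodes, and the
    worst-case (minimum) accuracy that the paper argues matters most."""
    return {
        "mean": statistics.mean(accs),
        "std": statistics.stdev(accs),
        "worst": min(accs),
    }

# Two hypothetical methods with identical average accuracy (0.80):
# the one with the smaller spread has the better worst case.
high_spread = [0.70, 0.80, 0.90]
low_spread = [0.78, 0.80, 0.82]

sa = episode_stats(high_spread)
sb = episode_stats(low_spread)
print(sa)  # same mean, larger std, worst case 0.70
print(sb)  # same mean, smaller std, worst case 0.78
```

This is why the paper targets the standard deviation as a proxy: the true worst case over future episodes is not observable at training time, but a higher mean combined with a lower spread pushes the minimum up.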
Cite
Text
Fu et al. "Worst Case Matters for Few-Shot Recognition." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-20044-1_6
Markdown
[Fu et al. "Worst Case Matters for Few-Shot Recognition." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/fu2022eccv-worst/) doi:10.1007/978-3-031-20044-1_6
BibTeX
@inproceedings{fu2022eccv-worst,
title = {{Worst Case Matters for Few-Shot Recognition}},
author = {Fu, Minghao and Cao, Yun-Hao and Wu, Jianxin},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2022},
doi = {10.1007/978-3-031-20044-1_6},
url = {https://mlanthology.org/eccv/2022/fu2022eccv-worst/}
}