Evaluating the Evaluators: Are Validation Methods for Few-Shot Learning Fit for Purpose?

Abstract

Numerous benchmarks for Few-Shot Learning have been proposed in the last decade. However, all of these benchmarks focus on performance averaged over many tasks, and the question of how to reliably evaluate and tune models trained for individual few-shot tasks has not been addressed. This paper presents the first investigation into task-level validation---a fundamental step when deploying a model. We measure the accuracy of performance estimators in the few-shot setting, consider strategies for model selection, and examine the reasons for the failure of evaluators usually thought of as being robust. We conclude that cross-validation with a low number of folds is the best choice for directly estimating the performance of a model, whereas using bootstrapping or cross-validation with a large number of folds is better for model selection purposes. Overall, we find that with current methods, benchmarks, and validation strategies, one cannot get a reliable picture of how effectively methods perform on individual tasks. However, we find that existing methods already provide enough information to enable selection of few-shot learners on a task-level basis.
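
To make the two validation strategies contrasted in the abstract concrete, the sketch below compares a low-fold cross-validation estimate with a bootstrap (out-of-bag) estimate of task-level accuracy on a small support set. This is only an illustrative sketch, not the authors' protocol: the toy 5-way 5-shot data, the logistic-regression classifier, and the fold and resample counts are all assumptions made for the example.

```python
# Hypothetical sketch: low-fold cross-validation vs. bootstrap estimates of
# task-level accuracy on a tiny few-shot support set. The data, classifier,
# and fold/resample counts are illustrative placeholders, not the paper's setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.utils import resample

rng = np.random.RandomState(0)

# Toy 5-way 5-shot support set: 25 examples with 16-dimensional features.
X = rng.randn(25, 16)
y = np.repeat(np.arange(5), 5)

clf = LogisticRegression(max_iter=1000)

# Low-fold cross-validation (here 2 folds): per the abstract, the better
# choice for directly estimating the performance of a single few-shot model.
cv = StratifiedKFold(n_splits=2, shuffle=True, random_state=0)
cv_acc = cross_val_score(clf, X, y, cv=cv).mean()

# Bootstrap estimate: fit on a resample of the support set and score on the
# held-out (out-of-bag) examples; the abstract suggests this kind of estimator
# is more useful for ranking candidate models than for absolute accuracy.
boot_accs = []
for b in range(100):
    idx = resample(np.arange(len(y)), random_state=b)
    oob = np.setdiff1d(np.arange(len(y)), idx)
    if len(oob) == 0 or len(np.unique(y[idx])) < 5:
        continue  # skip resamples that drop a class or leave no held-out data
    model = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    boot_accs.append(model.score(X[oob], y[oob]))

print(f"2-fold CV accuracy estimate:   {cv_acc:.3f}")
print(f"Bootstrap (OOB) accuracy est.: {np.mean(boot_accs):.3f}")
```

In practice, either estimate could be computed for each candidate learner on the same support set; following the abstract's finding, the cross-validation number would be read as an absolute performance estimate, while the bootstrap numbers would only be compared against each other to pick a model.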

Cite

Text

Shimabucoro et al. "Evaluating the Evaluators: Are Validation Methods for Few-Shot Learning Fit for Purpose?" Transactions on Machine Learning Research, 2024.

Markdown

[Shimabucoro et al. "Evaluating the Evaluators: Are Validation Methods for Few-Shot Learning Fit for Purpose?" Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/shimabucoro2024tmlr-evaluating/)

BibTeX

@article{shimabucoro2024tmlr-evaluating,
  title     = {{Evaluating the Evaluators: Are Validation Methods for Few-Shot Learning Fit for Purpose?}},
  author    = {Shimabucoro, Luísa and Chavhan, Ruchika and Hospedales, Timothy and Gouk, Henry},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/shimabucoro2024tmlr-evaluating/}
}