Infinite Mixture Prototypes for Few-Shot Learning
Abstract
We propose infinite mixture prototypes to adaptively represent both simple and complex data distributions for few-shot learning. Infinite mixture prototypes combine deep representation learning with Bayesian nonparametrics, representing each class by a set of clusters, unlike existing prototypical methods that represent each class by a single cluster. By inferring the number of clusters, infinite mixture prototypes interpolate between nearest neighbor and prototypical representations in a learned feature space, which improves accuracy and robustness in the few-shot regime. We show the importance of adaptive capacity for capturing complex data distributions such as super-classes (like alphabets in character recognition), with 10-25% absolute accuracy improvements over prototypical networks, while still maintaining or improving accuracy on standard few-shot learning benchmarks. By clustering labeled and unlabeled data with the same rule, infinite mixture prototypes achieve state-of-the-art semi-supervised accuracy, and can perform purely unsupervised clustering, unlike existing fully- and semi-supervised prototypical methods.
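As a rough illustration of the clustering rule the abstract describes, below is a minimal NumPy sketch, not the paper's implementation. It assumes support and query embeddings have already been produced by a trained network, and it uses a hypothetical DP-means-style distance threshold `lam` to decide when a class spawns a new cluster; the names `dp_means`, `imp_classify`, and `lam` are illustrative, not taken from the paper.

```python
import numpy as np

def dp_means(points, lam, n_iter=5):
    """Illustrative DP-means-style clustering: spawn a new cluster whenever
    a point is farther (squared distance) than lam from every existing mean."""
    means = [points[0].copy()]
    for _ in range(n_iter):
        assign = np.empty(len(points), dtype=int)
        for i, x in enumerate(points):
            d2 = np.array([np.sum((x - m) ** 2) for m in means])
            if d2.min() > lam:
                means.append(x.copy())          # open a new cluster at x
                assign[i] = len(means) - 1
            else:
                assign[i] = int(d2.argmin())    # join the nearest cluster
        # recompute means, dropping clusters that ended up empty
        means = [points[assign == k].mean(axis=0)
                 for k in range(len(means)) if np.any(assign == k)]
    return np.stack(means)

def imp_classify(support_x, support_y, query_x, lam):
    """Cluster each class's support embeddings separately, then label a
    query by the class whose nearest cluster mean is closest to it."""
    class_means = {c: dp_means(support_x[support_y == c], lam)
                   for c in np.unique(support_y)}
    preds = []
    for q in query_x:
        scores = {c: np.min(np.sum((m - q) ** 2, axis=1))
                  for c, m in class_means.items()}
        preds.append(min(scores, key=scores.get))
    return np.array(preds)

# Tiny usage example on random 2-D "embeddings" (two classes, five shots each).
rng = np.random.default_rng(0)
support_x = np.concatenate([rng.normal(0, 1, (5, 2)), rng.normal(4, 1, (5, 2))])
support_y = np.array([0] * 5 + [1] * 5)
print(imp_classify(support_x, support_y, rng.normal(2, 1, (3, 2)), lam=4.0))
```

The threshold makes the interpolation in the abstract concrete: a very large `lam` collapses each class to a single cluster, recovering a prototypical-network-style classifier, while `lam` near zero gives every support point its own cluster, reducing the rule to nearest neighbors.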
Cite

Text:
Allen et al. "Infinite Mixture Prototypes for Few-Shot Learning." International Conference on Machine Learning, 2019.

Markdown:
[Allen et al. "Infinite Mixture Prototypes for Few-Shot Learning." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/allen2019icml-infinite/)

BibTeX:
@inproceedings{allen2019icml-infinite,
title = {{Infinite Mixture Prototypes for Few-Shot Learning}},
author = {Allen, Kelsey and Shelhamer, Evan and Shin, Hanul and Tenenbaum, Joshua},
booktitle = {International Conference on Machine Learning},
year = {2019},
pages = {232--241},
volume = {97},
url = {https://mlanthology.org/icml/2019/allen2019icml-infinite/}
}