Few-Shot Learning with Weak Supervision

Abstract

Few-shot meta-learning methods aim to learn the common structure shared across a set of tasks in order to facilitate learning new tasks from small amounts of data. However, given only a few training examples, many tasks are ambiguous. Such ambiguity can be mitigated with side information in the form of weak labels, which are often readily available. In this paper, we propose a Bayesian gradient-based meta-learning algorithm that can incorporate weak labels to reduce task ambiguity and improve performance. Our approach is cast in the framework of amortized variational inference and trained by optimizing a variational lower bound. The proposed method is competitive with state-of-the-art methods and achieves significant performance gains in settings where weak labels are available.
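As an illustration of the amortized variational inference setup the abstract refers to, below is a minimal, hypothetical sketch (not the authors' code) of a per-task variational lower bound for few-shot regression: an amortized encoder maps the support set to a Gaussian posterior over a task variable, and the loss combines a query-set likelihood term with a KL term against a prior. The class and function names (`AmortizedPosterior`, `Decoder`, `task_elbo`) are invented for this sketch, and the gradient-based adaptation and weak-label conditioning of the actual method are omitted.

```python
import torch
import torch.nn as nn
from torch.distributions import Normal, kl_divergence

class AmortizedPosterior(nn.Module):
    """Maps a support set (x, y) to the mean/std of q(z | support)."""
    def __init__(self, in_dim, z_dim, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim + 1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * z_dim))
        self.z_dim = z_dim

    def forward(self, x_s, y_s):
        h = self.enc(torch.cat([x_s, y_s], dim=-1)).mean(dim=0)  # pool over shots
        mu, log_sigma = h[:self.z_dim], h[self.z_dim:]
        return Normal(mu, log_sigma.exp())

class Decoder(nn.Module):
    """Predicts y from x, conditioned on a sampled task embedding z."""
    def __init__(self, in_dim, z_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim + z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x, z):
        z = z.expand(x.size(0), -1)
        return self.net(torch.cat([x, z], dim=-1))

def task_elbo(encoder, decoder, prior, x_s, y_s, x_q, y_q):
    """Negative variational lower bound for one task (support + query)."""
    q_z = encoder(x_s, y_s)
    z = q_z.rsample().unsqueeze(0)            # reparameterized sample of the task variable
    pred = decoder(x_q, z)
    nll = ((pred - y_q) ** 2).mean()          # Gaussian likelihood with fixed scale
    kl = kl_divergence(q_z, prior).sum()
    return nll + kl

# One meta-training step on a single toy task (5 support, 15 query points).
enc, dec = AmortizedPosterior(in_dim=1, z_dim=8), Decoder(in_dim=1, z_dim=8)
prior = Normal(torch.zeros(8), torch.ones(8))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
x_s, y_s = torch.randn(5, 1), torch.randn(5, 1)
x_q, y_q = torch.randn(15, 1), torch.randn(15, 1)
loss = task_elbo(enc, dec, prior, x_s, y_s, x_q, y_q)
opt.zero_grad(); loss.backward(); opt.step()
```

In practice, the loss would be averaged over a batch of sampled tasks per meta-training step; the single-task step above only illustrates the shape of the objective.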

Cite

Text

Ghadirzadeh et al. "Few-Shot Learning with Weak Supervision." ICLR 2021 Workshops: Learning_to_Learn, 2021.

Markdown

[Ghadirzadeh et al. "Few-Shot Learning with Weak Supervision." ICLR 2021 Workshops: Learning_to_Learn, 2021.](https://mlanthology.org/iclrw/2021/ghadirzadeh2021iclrw-fewshotlearning/)

BibTeX

@inproceedings{ghadirzadeh2021iclrw-fewshotlearning,
  title     = {{Few-Shot Learning with Weak Supervision}},
  author    = {Ghadirzadeh, Ali and Poklukar, Petra and Chen, Xi and Yao, Huaxiu and Azizpour, Hossein and Björkman, Mårten and Finn, Chelsea and Kragic, Danica},
  booktitle = {ICLR 2021 Workshops: Learning_to_Learn},
  year      = {2021},
  url       = {https://mlanthology.org/iclrw/2021/ghadirzadeh2021iclrw-fewshotlearning/}
}