Learning to Learn Words from Visual Scenes
Abstract
Language acquisition is the process of learning words from the surrounding scene. We introduce a meta-learning framework that learns how to learn word representations from unconstrained scenes. We leverage the natural compositional structure of language to create training episodes that cause a meta-learner to learn strong policies for language acquisition. Experiments on two datasets show that our approach is able to more rapidly acquire novel words as well as more robustly generalize to unseen compositions, significantly outperforming established baselines. A key advantage of our approach is that it is data efficient, allowing representations to be learned from scratch without language pre-training. Visualizations and analysis suggest visual information helps our approach learn a rich cross-modal representation from minimal examples.
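For intuition, below is a minimal sketch of the episodic setup the abstract describes: each episode withholds a word, presents it in context in a small support set of (scene, caption) pairs, and masks it in a query the learner must complete from the support alone. This is an illustrative reconstruction, not the authors' code; the Example structure, the sample_episode function, and the <mask> token are hypothetical, and scene_id stands in for real visual features.

import random
from dataclasses import dataclass
from typing import List, Tuple

MASK = "<mask>"

@dataclass
class Example:
    scene_id: str        # placeholder for the visual features of a scene
    caption: List[str]   # tokenized description of that scene

def sample_episode(corpus: List[Example], target_word: str,
                   k_support: int = 4) -> Tuple[List[Example], Example, str]:
    """Build one meta-learning episode for acquiring `target_word`.

    The support set shows the word used in context alongside scenes; the
    query repeats the word in a new context but masks it out, so the
    learner must infer it from the support set alone.
    """
    uses = [ex for ex in corpus if target_word in ex.caption]
    if len(uses) <= k_support:
        raise ValueError("not enough occurrences of the target word")
    picks = random.sample(uses, k_support + 1)
    support, query = picks[:k_support], picks[k_support]
    masked = [MASK if tok == target_word else tok for tok in query.caption]
    return support, Example(query.scene_id, masked), target_word

# Toy usage: five scenes mentioning "ball"; one becomes the masked query.
corpus = [
    Example("img01", "a red ball on the grass".split()),
    Example("img02", "the child throws a red ball".split()),
    Example("img03", "a red ball near the fence".split()),
    Example("img04", "two dogs chase a red ball".split()),
    Example("img05", "a red ball sits on a chair".split()),
]
support, query, word = sample_episode(corpus, "ball", k_support=4)
print(query.caption)  # e.g. ['a', 'red', '<mask>', 'near', 'the', 'fence']

Because episodes are generated by slicing an existing (scene, caption) corpus rather than requiring word-level annotation, this construction supports learning representations from scratch, consistent with the data-efficiency claim in the abstract.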
Cite
Text
Surís et al. "Learning to Learn Words from Visual Scenes." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58526-6_26

Markdown
[Surís et al. "Learning to Learn Words from Visual Scenes." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/suris2020eccv-learning/) doi:10.1007/978-3-030-58526-6_26

BibTeX
@inproceedings{suris2020eccv-learning,
title = {{Learning to Learn Words from Visual Scenes}},
author = {Surís, Dídac and Epstein, Dave and Ji, Heng and Chang, Shih-Fu and Vondrick, Carl},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2020},
doi = {10.1007/978-3-030-58526-6_26},
url = {https://mlanthology.org/eccv/2020/suris2020eccv-learning/}
}