GLoMo: Unsupervised Learning of Transferable Relational Graphs

Abstract

Modern deep transfer learning approaches have mainly focused on learning generic feature vectors from one task that are transferable to other tasks, such as word embeddings in language and pretrained convolutional features in vision. However, these approaches usually transfer unary features and largely ignore more structured graphical representations. This work explores the possibility of learning generic latent relational graphs that capture dependencies between pairs of data units (e.g., words or pixels) from large-scale unlabeled data and transferring the graphs to downstream tasks. Our proposed transfer learning framework improves performance on various tasks including question answering, natural language inference, sentiment analysis, and image classification. We also show that the learned graphs are generic enough to be transferred to different embeddings on which the graphs have not been trained (including GloVe embeddings, ELMo embeddings, and task-specific RNN hidden units), or embedding-free units such as image pixels.
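The recipe described in the abstract, predicting a pairwise affinity graph over input units from unlabeled data and then letting a downstream model propagate its own (possibly different) embeddings along that graph, can be illustrated with a short sketch. The code below is a minimal, hedged illustration rather than the paper's implementation; the function names predict_graph and propagate, the mixing weight mix, and the scaled dot-product scoring are assumptions introduced for this example.

# Minimal sketch (not the authors' code) of the core idea: a graph
# predictor yields a row-normalized T x T affinity matrix over input
# units, and a downstream model propagates its own embeddings along it.
# All names and the scoring function are illustrative assumptions.
import torch
import torch.nn.functional as F

def predict_graph(keys, queries, mask=None):
    """Compute a row-stochastic (T, T) affinity matrix over T units.

    keys, queries: (T, d) features from a graph predictor trained on
    unlabeled data (e.g., with a context-prediction objective).
    """
    scores = queries @ keys.t() / keys.shape[-1] ** 0.5  # pairwise scores
    if mask is not None:
        scores = scores.masked_fill(~mask, float('-inf'))
    return F.softmax(scores, dim=-1)                     # each row sums to 1

def propagate(embeddings, graph, mix=0.5):
    """Mix downstream embeddings (e.g., GloVe/ELMo vectors or RNN states)
    with their graph-weighted neighborhood averages."""
    return (1 - mix) * embeddings + mix * graph @ embeddings

# Toy usage: 6 "units" (e.g., words) with 16-dim predictor features.
T, d = 6, 16
keys, queries = torch.randn(T, d), torch.randn(T, d)
emb = torch.randn(T, 32)                 # task-specific embeddings
G = predict_graph(keys, queries)         # transferred relational graph
print(propagate(emb, G).shape)           # torch.Size([6, 32])

Because the graph depends only on pairwise relations between units, the same G can, in principle, be applied to embeddings it was never trained on, which is the transfer property the abstract highlights.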

Cite

Text

Yang et al. "GLoMo: Unsupervised Learning of Transferable Relational Graphs." Neural Information Processing Systems, 2018.

Markdown

[Yang et al. "GLoMo: Unsupervised Learning of Transferable Relational Graphs." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/yang2018neurips-glomo/)

BibTeX

@inproceedings{yang2018neurips-glomo,
  title     = {{GLoMo: Unsupervised Learning of Transferable Relational Graphs}},
  author    = {Yang, Zhilin and Zhao, Jake and Dhingra, Bhuwan and He, Kaiming and Cohen, William W. and Salakhutdinov, Ruslan and LeCun, Yann},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {8950--8961},
  url       = {https://mlanthology.org/neurips/2018/yang2018neurips-glomo/}
}