Sparsity Emerges Naturally in Neural Language Models

Abstract

Concerns about interpretability, computational resources, and principled inductive priors have motivated efforts to engineer sparse neural models for NLP tasks. If sparsity is important for NLP, might well-trained neural models naturally become roughly sparse? Using the Taxi-Euclidean norm to measure sparsity, we find that frequent input words are associated with concentrated or sparse activations, while frequent target words are associated with dispersed activations but concentrated gradients. We find that gradients associated with function words are more concentrated than the gradients of content words, even controlling for word frequency.
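
The sketch below illustrates the kind of sparsity measurement the abstract describes, assuming the Taxi-Euclidean norm refers to the ratio of a vector's taxicab (L1) norm to its Euclidean (L2) norm; the exact definition and the function name taxi_euclidean_ratio are illustrative assumptions, not the paper's code.

import numpy as np

def taxi_euclidean_ratio(v: np.ndarray) -> float:
    """Ratio of the L1 (taxicab) norm to the L2 (Euclidean) norm.

    For a vector of dimension n, the ratio lies in [1, sqrt(n)]:
    it equals 1 when a single component carries all the mass
    (maximally concentrated) and sqrt(n) when the mass is spread
    uniformly (maximally dispersed).
    """
    l2 = np.linalg.norm(v, ord=2)
    if l2 == 0.0:
        return 0.0  # all-zero vector: no meaningful ratio
    return float(np.linalg.norm(v, ord=1) / l2)

if __name__ == "__main__":
    concentrated = np.array([5.0, 0.0, 0.0, 0.0])  # one active unit
    dispersed = np.array([1.0, 1.0, 1.0, 1.0])     # uniform activation
    print(taxi_euclidean_ratio(concentrated))  # 1.0  (concentrated / sparse)
    print(taxi_euclidean_ratio(dispersed))     # 2.0  (= sqrt(4), dispersed)

Under this reading, activations or gradients with a smaller ratio are more concentrated, which is the sense in which the abstract contrasts frequent input words, frequent target words, and function versus content words.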

Cite

Text

Saphra and Lopez. "Sparsity Emerges Naturally in Neural Language Models." ICML 2019 Workshops: Deep_Phenomena, 2019.

Markdown

[Saphra and Lopez. "Sparsity Emerges Naturally in Neural Language Models." ICML 2019 Workshops: Deep_Phenomena, 2019.](https://mlanthology.org/icmlw/2019/saphra2019icmlw-sparsity/)

BibTeX

@inproceedings{saphra2019icmlw-sparsity,
  title     = {{Sparsity Emerges Naturally in Neural Language Models}},
  author    = {Saphra, Naomi and Lopez, Adam},
  booktitle = {ICML 2019 Workshops: Deep_Phenomena},
  year      = {2019},
  url       = {https://mlanthology.org/icmlw/2019/saphra2019icmlw-sparsity/}
}