Identifying and Controlling Important Neurons in Neural Machine Translation
Abstract
Neural machine translation (NMT) models learn representations containing substantial linguistic information. However, it is not clear if such information is fully distributed or if some of it can be attributed to individual neurons. We develop unsupervised methods for discovering important neurons in NMT models. Our methods rely on the intuition that different models learn similar properties, and do not require any costly external supervision. We show experimentally that translation quality depends on the discovered neurons, and find that many of them capture common linguistic phenomena. Finally, we show how to control NMT translations in predictable ways, by modifying activations of individual neurons.
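The abstract's key intuition, that neurons are important when independently trained models learn correlated properties, can be sketched with synthetic data. This is an illustrative sketch only, not the paper's implementation: the activation matrices, sizes, and the max-correlation ranking criterion shown here are simplified assumptions, with a few artificially correlated neurons planted so the ranking has something to find.

```python
import numpy as np

# Hypothetical activations of shape (num_tokens, num_neurons) for two
# independently trained NMT models run on the same corpus (synthetic here).
rng = np.random.default_rng(0)
acts_a = rng.standard_normal((1000, 64))
acts_b = rng.standard_normal((1000, 64))
# Plant a few genuinely correlated neuron pairs for the demonstration.
acts_b[:, :4] = acts_a[:, :4] + 0.1 * rng.standard_normal((1000, 4))

# Pearson correlation between every neuron in model A and every neuron in B.
a = (acts_a - acts_a.mean(axis=0)) / acts_a.std(axis=0)
b = (acts_b - acts_b.mean(axis=0)) / acts_b.std(axis=0)
corr = a.T @ b / len(a)  # (64, 64) cross-model correlation matrix

# Rank model-A neurons by their maximum absolute correlation with any
# model-B neuron: neurons that other models "agree" on count as important.
importance = np.abs(corr).max(axis=1)
ranking = np.argsort(-importance)
print(ranking[:4])  # the planted correlated neurons should rank highest
```

Ranking neurons this way requires no labels or external supervision, which is the property the abstract emphasizes.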
Cite
Text
Bau et al. "Identifying and Controlling Important Neurons in Neural Machine Translation." International Conference on Learning Representations, 2019.
Markdown
[Bau et al. "Identifying and Controlling Important Neurons in Neural Machine Translation." International Conference on Learning Representations, 2019.](https://mlanthology.org/iclr/2019/bau2019iclr-identifying/)
BibTeX
@inproceedings{bau2019iclr-identifying,
  title = {{Identifying and Controlling Important Neurons in Neural Machine Translation}},
  author = {Bau, Anthony and Belinkov, Yonatan and Sajjad, Hassan and Durrani, Nadir and Dalvi, Fahim and Glass, James},
  booktitle = {International Conference on Learning Representations},
  year = {2019},
  url = {https://mlanthology.org/iclr/2019/bau2019iclr-identifying/}
}