Cutting Recursive Autoencoder Trees

Abstract

Deep Learning models enjoy considerable success in Natural Language Processing. While deep architectures produce useful representations that lead to improvements on various tasks, they are often difficult to interpret, which makes analyzing the structures they learn particularly hard. In this paper, we rely on empirical tests to determine whether a particular learned structure is meaningful. We present an analysis of the Semi-Supervised Recursive Autoencoder, a well-known model that produces structural representations of text. We show that for certain tasks, the tree structure of the autoencoder can be significantly reduced without loss of classification accuracy, and we evaluate the resulting structures using human judgment.
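For readers unfamiliar with the model the abstract refers to, the following is a minimal sketch of how a recursive autoencoder builds the trees that the paper proposes to cut, assuming the standard greedy formulation (Socher et al., 2011) that the Semi-Supervised RAE extends. The dimensions, weight initialization, and tanh nonlinearity here are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50                                           # embedding size (assumed)
W_enc = rng.normal(scale=0.1, size=(d, 2 * d))   # encoder weights
b_enc = np.zeros(d)
W_dec = rng.normal(scale=0.1, size=(2 * d, d))   # decoder weights
b_dec = np.zeros(2 * d)

def compose(c1, c2):
    """Encode two child vectors into a single parent vector."""
    return np.tanh(W_enc @ np.concatenate([c1, c2]) + b_enc)

def reconstruction_error(c1, c2):
    """Squared error of reconstructing the children from the parent.
    A greedy RAE merges the adjacent pair with the lowest error."""
    p = compose(c1, c2)
    rec = np.tanh(W_dec @ p + b_dec)
    return np.sum((rec - np.concatenate([c1, c2])) ** 2)

# Greedily build a tree over a toy "sentence" of four random word vectors;
# each merge replaces two adjacent nodes with their composed parent.
nodes = [rng.normal(size=d) for _ in range(4)]
while len(nodes) > 1:
    errs = [reconstruction_error(nodes[i], nodes[i + 1])
            for i in range(len(nodes) - 1)]
    i = int(np.argmin(errs))                     # cheapest adjacent merge
    nodes[i:i + 2] = [compose(nodes[i], nodes[i + 1])]
```

Each internal node of the resulting binary tree is a fixed-size vector; the paper's question is how much of this recursive structure is actually needed for classification.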

Cite

Text

Scheible and Schütze. "Cutting Recursive Autoencoder Trees." International Conference on Learning Representations, 2013.

Markdown

[Scheible and Schütze. "Cutting Recursive Autoencoder Trees." International Conference on Learning Representations, 2013.](https://mlanthology.org/iclr/2013/scheible2013iclr-cutting/)

BibTeX

@inproceedings{scheible2013iclr-cutting,
  title     = {{Cutting Recursive Autoencoder Trees}},
  author    = {Scheible, Christian and Sch{\"u}tze, Hinrich},
  booktitle = {International Conference on Learning Representations},
  year      = {2013},
  url       = {https://mlanthology.org/iclr/2013/scheible2013iclr-cutting/}
}