Learning to Compose Words into Sentences with Reinforcement Learning
Abstract
We use reinforcement learning to learn tree-structured neural networks for computing representations of natural language sentences. In contrast with prior work on tree-structured models in which the trees are either provided as input or predicted using supervision from explicit treebank annotations, the tree structures in this work are optimized to improve performance on a downstream task. Experiments demonstrate the benefit of learning task-specific composition orders, outperforming both sequential encoders and recursive encoders based on treebank annotations. We analyze the induced trees and show that while they discover some linguistically intuitive structures (e.g., noun phrases, simple verb phrases), they are different from conventional English syntactic structures.
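The approach summarized above can be pictured as a shift-reduce encoder: a policy network samples SHIFT/REDUCE actions that determine the binary tree over a sentence, a Tree-LSTM-style function composes child vectors at each REDUCE, and the policy is trained with REINFORCE using the downstream task objective as reward. The sketch below (plain PyTorch, not the authors' implementation) illustrates the idea; module names, dimensions, and the toy training step are illustrative assumptions, and the paper's variance-reduction details are omitted.

# Minimal sketch (not the authors' code) of RL-learned composition order:
# a policy samples SHIFT/REDUCE actions that build a binary tree over the
# sentence, a Tree-LSTM-style function composes vectors at each REDUCE,
# and the policy is trained with REINFORCE on the downstream task reward.

import torch
import torch.nn as nn
from torch.distributions import Categorical


class TreeComposer(nn.Module):
    """Tree-LSTM-style composition of two child vectors (cell and hidden
    states conflated into a single vector per node for brevity)."""

    def __init__(self, dim):
        super().__init__()
        self.gates = nn.Linear(2 * dim, 5 * dim)  # i, f_left, f_right, o, g

    def forward(self, left, right):
        i, fl, fr, o, g = self.gates(torch.cat([left, right], -1)).chunk(5, -1)
        c = (torch.sigmoid(fl) * left + torch.sigmoid(fr) * right
             + torch.sigmoid(i) * torch.tanh(g))
        return torch.sigmoid(o) * torch.tanh(c)


class ShiftReducePolicy(nn.Module):
    """Scores SHIFT vs. REDUCE from the two stack tops and the next word."""

    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Linear(3 * dim, 2)

    def forward(self, top, second, buffer_front):
        return self.scorer(torch.cat([top, second, buffer_front], -1))


def encode(word_vecs, composer, policy):
    """Sample SHIFT/REDUCE actions until one vector remains; return the
    sentence vector and the summed log-probability of the sampled actions."""
    buffer = [v for v in word_vecs]   # word vectors not yet shifted
    stack, log_probs = [], []
    while buffer or len(stack) > 1:
        can_shift, can_reduce = bool(buffer), len(stack) >= 2
        if can_shift and can_reduce:   # a real choice: ask the policy
            dist = Categorical(logits=policy(stack[-1], stack[-2], buffer[0]))
            action = dist.sample()
            log_probs.append(dist.log_prob(action))
        else:                          # only one legal move
            action = torch.tensor(0 if can_shift else 1)
        if action.item() == 0:
            stack.append(buffer.pop(0))            # SHIFT next word
        else:
            right, left = stack.pop(), stack.pop()
            stack.append(composer(left, right))    # REDUCE top two nodes
    log_prob = torch.stack(log_probs).sum() if log_probs else torch.zeros(())
    return stack[0], log_prob


# Toy training step: the reward is the (negative) downstream loss, so the
# policy is pushed toward composition orders that help the classifier.
dim, vocab_size, num_classes = 64, 1000, 2
embed = nn.Embedding(vocab_size, dim)
composer, policy = TreeComposer(dim), ShiftReducePolicy(dim)
classifier = nn.Linear(dim, num_classes)
params = (list(embed.parameters()) + list(composer.parameters())
          + list(policy.parameters()) + list(classifier.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)

tokens, label = torch.tensor([3, 17, 42, 7]), torch.tensor([1])  # toy example
sent_vec, log_prob = encode(embed(tokens), composer, policy)
task_loss = nn.functional.cross_entropy(classifier(sent_vec).unsqueeze(0), label)
reward = -task_loss.detach()
loss = task_loss - reward * log_prob   # supervised term + REINFORCE term
opt.zero_grad()
loss.backward()
opt.step()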
Cite
Text
Yogatama et al. "Learning to Compose Words into Sentences with Reinforcement Learning." International Conference on Learning Representations, 2017.Markdown
[Yogatama et al. "Learning to Compose Words into Sentences with Reinforcement Learning." International Conference on Learning Representations, 2017.](https://mlanthology.org/iclr/2017/yogatama2017iclr-learning/)BibTeX
@inproceedings{yogatama2017iclr-learning,
title = {{Learning to Compose Words into Sentences with Reinforcement Learning}},
author = {Yogatama, Dani and Blunsom, Phil and Dyer, Chris and Grefenstette, Edward and Ling, Wang},
booktitle = {International Conference on Learning Representations},
year = {2017},
url = {https://mlanthology.org/iclr/2017/yogatama2017iclr-learning/}
}