Tree-Structured Decoding with Doubly-Recurrent Neural Networks
Abstract
We propose a neural network architecture for generating tree-structured objects from encoded representations. The core of the method is a doubly-recurrent neural network that models separately the width and depth recurrences across the tree, and combines them inside each cell to generate an output. The topology of the tree is explicitly modeled, allowing the network to predict both content and topology of the tree when decoding. That is, given only an encoded vector representation, the network is able to simultaneously generate a tree from it and predict labels for the nodes. We test this architecture in an encoder-decoder framework, where we train a network to encode a sentence as a vector, and then generate a tree structure from it. The experimental results show the effectiveness of this architecture at recovering latent tree structure in sequences and at mapping sentences to simple functional programs.
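The abstract's core idea (separate depth-wise "ancestral" and width-wise "fraternal" recurrences, combined inside each cell to predict both a node label and the tree topology) can be sketched as follows. This is a minimal illustrative sketch in numpy, not the paper's exact parameterization: the weight names, the additive combination, and the simple tanh recurrences are assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8   # hidden size (illustrative)
V = 5   # node-label vocabulary size (illustrative)

# Illustrative parameters; in the actual model these are learned.
Wa, Wf = rng.normal(0, 0.1, (D, D)), rng.normal(0, 0.1, (D, D))
Ua, Uf = rng.normal(0, 0.1, (D, D)), rng.normal(0, 0.1, (D, D))
W_out = rng.normal(0, 0.1, (V, D))
ua, uf = rng.normal(0, 0.1, D), rng.normal(0, 0.1, D)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def drnn_cell(h_parent, x_parent, h_sibling, x_sibling):
    """One doubly-recurrent step: combine the depth (ancestral) and
    width (fraternal) recurrences, then predict label and topology."""
    h_a = np.tanh(Wa @ h_parent + Ua @ x_parent)    # depth recurrence
    h_f = np.tanh(Wf @ h_sibling + Uf @ x_sibling)  # width recurrence
    h = np.tanh(h_a + h_f)                          # combined cell state
    label_logits = W_out @ h                        # scores for node labels
    p_child = sigmoid(ua @ h)     # probability this node has children
    p_sibling = sigmoid(uf @ h)   # probability a next sibling exists
    return h, label_logits, p_child, p_sibling
```

Decoding would start from the encoded sentence vector at the root and expand the tree by thresholding or sampling `p_child` and `p_sibling`, so topology is predicted jointly with content rather than fixed in advance.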
Cite
Text
Alvarez-Melis and Jaakkola. "Tree-Structured Decoding with Doubly-Recurrent Neural Networks." International Conference on Learning Representations, 2017.
Markdown
[Alvarez-Melis and Jaakkola. "Tree-Structured Decoding with Doubly-Recurrent Neural Networks." International Conference on Learning Representations, 2017.](https://mlanthology.org/iclr/2017/alvarezmelis2017iclr-tree/)
BibTeX
@inproceedings{alvarezmelis2017iclr-tree,
title = {{Tree-Structured Decoding with Doubly-Recurrent Neural Networks}},
author = {Alvarez-Melis, David and Jaakkola, Tommi S.},
booktitle = {International Conference on Learning Representations},
year = {2017},
url = {https://mlanthology.org/iclr/2017/alvarezmelis2017iclr-tree/}
}