Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets

Abstract

Despite the recent achievements in machine learning, we are still very far from achieving real artificial intelligence. In this paper, we discuss the limitations of standard deep learning approaches and show that some of these limitations can be overcome by learning how to grow the complexity of a model in a structured way. Specifically, we study the simplest sequence prediction problems that are beyond the scope of what is learnable with standard recurrent networks, algorithmically generated sequences which can only be learned by models which have the capacity to count and to memorize sequences. We show that some basic algorithms can be learned from sequential data using a recurrent network associated with a trainable memory.
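
The "recurrent network associated with a trainable memory" that the abstract refers to is the paper's stack-augmented RNN: the hidden state reads the top of a continuous stack, and soft, differentiable PUSH / POP / NO-OP actions update that stack at every time step, which is what gives the model the capacity to count and to memorize sequences. The sketch below is a minimal NumPy illustration of that idea; the class and weight names (StackRNNCell, W_in, W_rec, W_stack, W_act, W_push), the fixed stack depth, and the initialization are assumptions for readability, not the authors' reference implementation.

```python
# Minimal sketch of a stack-augmented recurrent cell, in the spirit of the
# paper.  Names and sizes are illustrative assumptions, not reference code.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class StackRNNCell:
    def __init__(self, n_in, n_hid, stack_depth=20, top_k=1, seed=0):
        rng = np.random.default_rng(seed)
        s = 0.1
        self.W_in    = rng.normal(0, s, (n_hid, n_in))    # input -> hidden
        self.W_rec   = rng.normal(0, s, (n_hid, n_hid))   # hidden -> hidden
        self.W_stack = rng.normal(0, s, (n_hid, top_k))   # stack top -> hidden
        self.W_act   = rng.normal(0, s, (3, n_hid))       # hidden -> {push, pop, no-op}
        self.W_push  = rng.normal(0, s, (1, n_hid))       # hidden -> value to push
        self.top_k = top_k
        self.depth = stack_depth

    def step(self, x, h_prev, stack_prev):
        # Hidden state reads the input, the previous hidden state and the
        # top element(s) of the continuous stack.
        h = sigmoid(self.W_in @ x
                    + self.W_rec @ h_prev
                    + self.W_stack @ stack_prev[:self.top_k])
        # A soft distribution over PUSH / POP / NO-OP keeps the cell
        # fully differentiable, so it trains with backpropagation through time.
        a_push, a_pop, a_noop = softmax(self.W_act @ h)
        push_val = sigmoid(self.W_push @ h)[0]

        stack = np.empty_like(stack_prev)
        # Top: new value if pushing, second element if popping, unchanged otherwise.
        stack[0] = a_push * push_val + a_pop * stack_prev[1] + a_noop * stack_prev[0]
        # Interior elements shift down on PUSH and up on POP.
        stack[1:-1] = (a_push * stack_prev[:-2]
                       + a_pop * stack_prev[2:]
                       + a_noop * stack_prev[1:-1])
        # Bottom element: nothing enters from below on POP.
        stack[-1] = a_push * stack_prev[-2] + a_noop * stack_prev[-1]
        return h, stack

# Toy usage on an a^n b^n style one-hot sequence.
cell = StackRNNCell(n_in=4, n_hid=16)
h, stack = np.zeros(16), np.zeros(20)
for x in np.eye(4)[[0, 0, 1, 1]]:
    h, stack = cell.step(x, h, stack)
```

Feeding the hidden state into a softmax output layer and training the whole cell end to end is what lets the model use the stack to count, which is the capability the paper's experiments probe with algorithmically generated sequences such as a^n b^n.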

Cite

Text

Joulin and Mikolov. "Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets." Neural Information Processing Systems, 2015.

Markdown

[Joulin and Mikolov. "Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets." Neural Information Processing Systems, 2015.](https://mlanthology.org/neurips/2015/joulin2015neurips-inferring/)

BibTeX

@inproceedings{joulin2015neurips-inferring,
  title     = {{Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets}},
  author    = {Joulin, Armand and Mikolov, Tomas},
  booktitle = {Neural Information Processing Systems},
  year      = {2015},
  pages     = {190--198},
  url       = {https://mlanthology.org/neurips/2015/joulin2015neurips-inferring/}
}