Neural Networks and the Chomsky Hierarchy
Abstract
Reliable generalization lies at the heart of safe ML and AI. However, understanding when and how neural networks generalize remains one of the most important unsolved problems in the field. In this work, we conduct an extensive empirical study (20'910 models, 15 tasks) to investigate whether insights from the theory of computation can predict the limits of neural network generalization in practice. We demonstrate that grouping tasks according to the Chomsky hierarchy allows us to forecast whether certain architectures will be able to generalize to out-of-distribution inputs. This includes negative results where even extensive amounts of data and training time never lead to any non-trivial generalization, despite models having sufficient capacity to fit the training data perfectly. Our results show that, for our subset of tasks, RNNs and Transformers fail to generalize on non-regular tasks, LSTMs can solve regular and counter-language tasks, and only networks augmented with structured memory (such as a stack or memory tape) can successfully generalize on context-free and context-sensitive tasks.
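As an illustration of the kind of out-of-distribution (length) generalization the abstract refers to, here is a minimal, hypothetical sketch (not the paper's actual benchmark code): a context-free task, string reversal, where training sequences are short and evaluation sequences are strictly longer, so success requires an algorithmic solution (e.g., a stack) rather than memorizing the training distribution. The function names and length ranges are illustrative assumptions.

```python
import random

# Hypothetical length-generalization split for a context-free task:
# reversing a binary string. Training uses short sequences; evaluation
# uses strictly longer ones, so fitting the training set perfectly does
# not imply out-of-distribution generalization.

def make_reverse_example(length: int) -> tuple[str, str]:
    """Sample one input/target pair for the string-reversal task."""
    s = "".join(random.choice("01") for _ in range(length))
    return s, s[::-1]

def make_split(num_examples: int, min_len: int, max_len: int) -> list[tuple[str, str]]:
    """Generate examples with lengths drawn uniformly from [min_len, max_len]."""
    return [make_reverse_example(random.randint(min_len, max_len))
            for _ in range(num_examples)]

if __name__ == "__main__":
    random.seed(0)
    train = make_split(10_000, min_len=1, max_len=40)   # in-distribution lengths
    test = make_split(1_000, min_len=41, max_len=500)   # longer, out-of-distribution
    x, y = test[0]
    print(len(x), x[:20], "->", y[:20])
```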
Cite
Text
Deletang et al. "Neural Networks and the Chomsky Hierarchy." International Conference on Learning Representations, 2023.
Markdown
[Deletang et al. "Neural Networks and the Chomsky Hierarchy." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/deletang2023iclr-neural/)
BibTeX
@inproceedings{deletang2023iclr-neural,
  title = {{Neural Networks and the Chomsky Hierarchy}},
  author = {Deletang, Gregoire and Ruoss, Anian and Grau-Moya, Jordi and Genewein, Tim and Wenliang, Li Kevin and Catt, Elliot and Cundy, Chris and Hutter, Marcus and Legg, Shane and Veness, Joel and Ortega, Pedro A},
  booktitle = {International Conference on Learning Representations},
  year = {2023},
  url = {https://mlanthology.org/iclr/2023/deletang2023iclr-neural/}
}