A Family of Cognitively Realistic Parsing Environments for Deep Reinforcement Learning

Abstract

The hierarchical syntactic structure of natural language is a key feature of human cognition that enables us to recursively construct arbitrarily long sentences, supporting the communication of complex, relational information. In this work, we describe a framework in which learning cognitively realistic left-corner parsers can be formalized as a Reinforcement Learning problem, and we introduce a family of cognitively realistic chart-parsing environments to evaluate potential psycholinguistic implications of RL algorithms. We report how several baseline Q-learning and Actor-Critic algorithms, both tabular and neural, perform on subsets of the Penn Treebank corpus. We observe a sharp increase in difficulty as parse trees become even slightly more complex, indicating that hierarchical reinforcement learning might be required to solve this family of environments.
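
To make the RL framing concrete, the minimal sketch below shows one way a left-corner parsing step could be exposed through a Gym-style reset/step interface. This is an illustrative assumption, not the authors' environment: the `ToyLeftCornerParseEnv` class, its action inventory, and the oracle-matching reward are hypothetical placeholders; a real environment would derive actions, states, and rewards from the treebank grammar and the cognitive constraints described in the paper.

```python
# Illustrative sketch only: a toy left-corner parsing environment with a
# Gym-style reset/step interface. Grammar, actions, and rewards are assumed
# for demonstration and do not reflect the paper's actual implementation.

from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class ParserState:
    """Partial parser configuration: remaining words and a category stack."""
    words: List[str]
    stack: List[str] = field(default_factory=list)


class ToyLeftCornerParseEnv:
    # Hypothetical action inventory; a real environment would derive
    # projection actions from the treebank grammar.
    ACTIONS = ("shift", "project_NP", "project_VP", "attach")

    def __init__(self, sentence: List[str], gold_actions: List[str]):
        self.sentence = sentence
        self.gold_actions = gold_actions  # oracle sequence used for reward
        self.state = None
        self.t = 0

    def reset(self) -> ParserState:
        self.state = ParserState(words=list(self.sentence))
        self.t = 0
        return self.state

    def step(self, action: str) -> Tuple[ParserState, float, bool, dict]:
        """Apply one parser operation; reward +1 if it matches the oracle."""
        correct = self.t < len(self.gold_actions) and action == self.gold_actions[self.t]
        reward = 1.0 if correct else -1.0

        if action == "shift" and self.state.words:
            self.state.stack.append(self.state.words.pop(0))
        elif action.startswith("project_"):
            self.state.stack.append(action.split("_", 1)[1])
        elif action == "attach" and len(self.state.stack) >= 2:
            self.state.stack.pop()

        self.t += 1
        # Episode ends when the oracle sequence is exhausted or on a mistake.
        done = self.t >= len(self.gold_actions) or not correct
        return self.state, reward, done, {}


if __name__ == "__main__":
    env = ToyLeftCornerParseEnv(
        sentence=["dogs", "bark"],
        gold_actions=["shift", "project_NP", "shift", "project_VP", "attach"],
    )
    obs = env.reset()
    total = 0.0
    for a in env.gold_actions:  # follow the oracle policy
        obs, reward, done, _ = env.step(a)
        total += reward
        if done:
            break
    print("return:", total)
```

Under this kind of interface, tabular or neural Q-learning and Actor-Critic agents can be plugged in unchanged, since they only interact with the environment through `reset` and `step`.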

Cite

Text

Brasoveanu et al. "A Family of Cognitively Realistic Parsing Environments for Deep Reinforcement Learning." NeurIPS 2021 Workshops: DeepRL, 2021.

Markdown

[Brasoveanu et al. "A Family of Cognitively Realistic Parsing Environments for Deep Reinforcement Learning." NeurIPS 2021 Workshops: DeepRL, 2021.](https://mlanthology.org/neuripsw/2021/brasoveanu2021neuripsw-family/)

BibTeX

@inproceedings{brasoveanu2021neuripsw-family,
  title     = {{A Family of Cognitively Realistic Parsing Environments for Deep Reinforcement Learning}},
  author    = {Brasoveanu, Adrian and Pandey, Rohan and Alfano-Smith, Maximilian Emerson},
  booktitle = {NeurIPS 2021 Workshops: DeepRL},
  year      = {2021},
  url       = {https://mlanthology.org/neuripsw/2021/brasoveanu2021neuripsw-family/}
}