Learning Language Structures Through Grounding

Abstract

Language is highly structured, with syntactic and semantic structures that are, to some extent, agreed upon by speakers. With implicit or explicit awareness of such structures, humans can learn and use language efficiently and generalize to sentences containing unseen words. Motivated by human language learning, in this presentation, I will introduce a family of machine learning tasks that learn language structures through grounding, where distant supervision from other data sources (i.e., grounds), including but not limited to other modalities (e.g., vision), execution results of programs, and other languages, is used to guide the learning of language structures. I will demonstrate the potential of this task formulation, advocate for its adoption through three schemes, and discuss the possibility of addressing the general language learning problem through grounding.

Cite

Text

Shi. "Learning Language Structures Through Grounding." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I27.35119

Markdown

[Shi. "Learning Language Structures Through Grounding." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/shi2025aaai-learning/) doi:10.1609/AAAI.V39I27.35119

BibTeX

@inproceedings{shi2025aaai-learning,
  title     = {{Learning Language Structures Through Grounding}},
  author    = {Shi, Freda},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {28725},
  doi       = {10.1609/AAAI.V39I27.35119},
  url       = {https://mlanthology.org/aaai/2025/shi2025aaai-learning/}
}