Learning Automata from Demonstrations, Examples, and Natural Language
Abstract
Expert demonstrations have proven to be an easy way to indirectly specify complex tasks. Recent algorithms even support extracting unambiguous formal specifications, e.g., deterministic finite automata (DFAs), from demonstrations. Unfortunately, these techniques are generally not sample-efficient. In this work, we introduce $L^\star LM$, an algorithm for learning DFAs from both demonstrations \emph{and} natural language. Due to the expressivity of natural language, we observe a significant improvement in the data efficiency of learning DFAs from expert demonstrations. Technically, $L^\star LM$ leverages large language models to answer membership queries about the underlying task. This is then combined with recent techniques for transforming learning from demonstrations into a sequence of labeled example learning problems. In our experiments, we observe that the two modalities complement each other, yielding a powerful few-shot learner.
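To make the abstract's core idea concrete, the sketch below illustrates one way an LLM could serve as a membership oracle in an $L^\star$-style DFA learning loop. This is not the authors' implementation: the `ask_llm` helper, the toy task rule it encodes, and the demonstration-first fallback logic are all illustrative assumptions standing in for an actual prompted language model.

```python
# Minimal sketch, NOT the paper's implementation: an LLM-backed membership
# oracle for L*-style DFA learning. `ask_llm` is a hypothetical stand-in for
# prompting a language model with the task description and a candidate trace.

def ask_llm(task_description: str, trace: tuple) -> bool:
    """Hypothetical LLM membership query.

    Here a hard-coded toy rule ("the trace must contain 'goal'") replaces the
    language model so the sketch runs offline.
    """
    return "goal" in trace


def membership_oracle(task_description: str, demonstrations: set):
    """Answer membership queries, preferring labels implied by demonstrations."""
    def query(trace: tuple) -> bool:
        if trace in demonstrations:                  # expert traces are positive examples
            return True
        return ask_llm(task_description, trace)      # otherwise, defer to the LLM
    return query


if __name__ == "__main__":
    demos = {("start", "key", "goal")}
    query = membership_oracle("Reach the goal after picking up the key.", demos)
    for trace in [("start", "goal"), ("start", "key"), ("start", "key", "goal")]:
        print(trace, "->", query(trace))
```

In an actual $L^\star$ loop, such an oracle would label the rows and columns of the observation table while counterexamples are drawn from demonstrations; the sketch only shows the query side of that interaction.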
Cite
Text
Vazquez-Chanlatte et al. "Learning Automata from Demonstrations, Examples, and Natural Language." ICLR 2025 Workshops: BuildingTrust, 2025.
Markdown
[Vazquez-Chanlatte et al. "Learning Automata from Demonstrations, Examples, and Natural Language." ICLR 2025 Workshops: BuildingTrust, 2025.](https://mlanthology.org/iclrw/2025/vazquezchanlatte2025iclrw-learning/)
BibTeX
@inproceedings{vazquezchanlatte2025iclrw-learning,
title = {{Learning Automata from Demonstrations, Examples, and Natural Language}},
author = {Vazquez-Chanlatte, Marcell and Elmaaroufi, Karim and Witwicki, Stefan and Zaharia, Matei and Seshia, Sanjit A.},
booktitle = {ICLR 2025 Workshops: BuildingTrust},
year = {2025},
url = {https://mlanthology.org/iclrw/2025/vazquezchanlatte2025iclrw-learning/}
}