Explaining Large Language Model-Based Neural Semantic Parsers (Student Abstract)

Abstract

While large language models (LLMs) have demonstrated strong capability in structured prediction tasks such as semantic parsing, little research has explored the underlying mechanisms of their success. Our work studies different methods for explaining an LLM-based semantic parser and qualitatively discusses the explained model behaviors, hoping to inspire future research toward a better understanding of these models.

Cite

Text

Rai et al. "Explaining Large Language Model-Based Neural Semantic Parsers (Student Abstract)." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I13.27014

Markdown

[Rai et al. "Explaining Large Language Model-Based Neural Semantic Parsers (Student Abstract)." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/rai2023aaai-explaining/) doi:10.1609/AAAI.V37I13.27014

BibTeX

@inproceedings{rai2023aaai-explaining,
  title     = {{Explaining Large Language Model-Based Neural Semantic Parsers (Student Abstract)}},
  author    = {Rai, Daking and Zhou, Yilun and Wang, Bailin and Yao, Ziyu},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {16308--16309},
  doi       = {10.1609/AAAI.V37I13.27014},
  url       = {https://mlanthology.org/aaai/2023/rai2023aaai-explaining/}
}