Learning to Imagine: Distillation-Based Interactive Context Exploitation for Dialogue State Tracking

Abstract

In dialogue state tracking (DST), the exploitation of dialogue history is a crucial research direction, and existing DST models can be divided into two categories: full-history models and partial-history models. Because their "select first, use later" mechanism explicitly filters out distracting information before downstream state prediction, partial-history models have recently gained a performance advantage over full-history models. However, along with the redundant information, some critical dialogue context is inevitably filtered out by partial-history models. To balance full contextual consideration against the introduction of redundant information, we propose DICE-DST, a model-agnostic module widely applicable to partial-history DST models, which aims to strengthen the context-exploitation ability of each DST model's encoder. Specifically, we first construct a teacher encoder and devise two contextual reasoning tasks to train it to acquire extensive dialogue contextual knowledge. We then transfer this contextual knowledge from the teacher encoder to the student encoder via a novel turn-level attention-alignment distillation. Experimental results show that our approach substantially improves the performance of partial-history DST models and thereby achieves new state-of-the-art performance on multiple mainstream datasets while remaining highly efficient.
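The abstract's turn-level attention-alignment distillation can be pictured as matching the student encoder's per-turn attention distribution to the teacher's. The sketch below is a minimal, hypothetical illustration of such an alignment loss (mean squared error between softmax-normalized attention rows); the function names and the exact loss form are assumptions for illustration, not the paper's implementation.

```python
import math

def softmax(scores):
    """Normalize a row of raw attention scores into a distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_alignment_loss(teacher_scores, student_scores):
    """Illustrative alignment loss: average MSE between teacher and
    student attention distributions, one row per dialogue turn.
    (Hypothetical sketch; not the paper's actual objective.)"""
    loss = 0.0
    for t_row, s_row in zip(teacher_scores, student_scores):
        t_attn = softmax(t_row)
        s_attn = softmax(s_row)
        loss += sum((t - s) ** 2 for t, s in zip(t_attn, s_attn)) / len(t_row)
    return loss / len(teacher_scores)
```

When the student's attention already matches the teacher's, the loss is zero; any divergence in where the student attends across turns yields a positive penalty that gradient descent would drive down.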

Cite

Text

Guo et al. "Learning to Imagine: Distillation-Based Interactive Context Exploitation for Dialogue State Tracking." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I11.26510

Markdown

[Guo et al. "Learning to Imagine: Distillation-Based Interactive Context Exploitation for Dialogue State Tracking." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/guo2023aaai-learning/) doi:10.1609/AAAI.V37I11.26510

BibTeX

@inproceedings{guo2023aaai-learning,
  title     = {{Learning to Imagine: Distillation-Based Interactive Context Exploitation for Dialogue State Tracking}},
  author    = {Guo, Jinyu and Shuang, Kai and Zhang, Kaihang and Liu, Yixuan and Li, Jijie and Wang, Zihan},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {12845--12853},
  doi       = {10.1609/AAAI.V37I11.26510},
  url       = {https://mlanthology.org/aaai/2023/guo2023aaai-learning/}
}