MuiDial: Improving Dialogue Disentanglement with Intent-Based Mutual Learning
Abstract
The main goal of dialogue disentanglement is to separate the mixed utterances in a chat slice into independent dialogues. Existing models often use either utterance-to-utterance (U2U) prediction, which determines whether two utterances with a "reply-to" relationship belong to the same dialogue, or utterance-to-thread (U2T) prediction, which determines which dialogue thread a given utterance should belong to. Inspired by mutual learning, we propose MuiDial, a novel dialogue disentanglement model that exploits the intent of each utterance and feeds the intent into a mutual-learning U2U-U2T disentanglement model. Experimental results and in-depth analysis on several benchmark datasets demonstrate the effectiveness and generalizability of our approach.
Cite
Text
Jiang et al. "MuiDial: Improving Dialogue Disentanglement with Intent-Based Mutual Learning." International Joint Conference on Artificial Intelligence, 2022. doi:10.24963/IJCAI.2022/578

Markdown
[Jiang et al. "MuiDial: Improving Dialogue Disentanglement with Intent-Based Mutual Learning." International Joint Conference on Artificial Intelligence, 2022.](https://mlanthology.org/ijcai/2022/jiang2022ijcai-muidial/) doi:10.24963/IJCAI.2022/578

BibTeX
@inproceedings{jiang2022ijcai-muidial,
title = {{MuiDial: Improving Dialogue Disentanglement with Intent-Based Mutual Learning}},
author = {Jiang, Ziyou and Shi, Lin and Chen, Celia and Mu, Fangwen and Zhang, Yumin and Wang, Qing},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2022},
  pages = {4164--4170},
doi = {10.24963/IJCAI.2022/578},
url = {https://mlanthology.org/ijcai/2022/jiang2022ijcai-muidial/}
}