In-Game Toxic Language Detection: Shared Task and Attention Residuals (Student Abstract)

Abstract

In-game toxic language has become a pressing issue in the gaming industry and community. Several frameworks and models for analyzing online game toxicity have been proposed. However, toxicity remains challenging to detect because in-game chat messages are extremely short. In this paper, we describe how the in-game toxic language shared task was established using real-world in-game chat data. In addition, we propose a model/framework for toxic language token tagging (slot filling) from in-game chat. The data and code will be released.
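To make the task format concrete, here is a minimal sketch of token-level toxic-span tagging (slot filling) over a chat message. This is an illustration only: the lexicon, function name, and BIO tag scheme are assumptions for demonstration, not the paper's actual learned model.

```python
# Illustrative sketch of token-level toxic tagging (slot filling) for
# in-game chat. The lexicon and BIO scheme below are hypothetical; the
# paper's approach uses a trained neural tagger, not a word list.

TOXIC_LEXICON = {"noob", "trash", "idiot"}  # hypothetical example words

def tag_toxic_tokens(message: str) -> list[tuple[str, str]]:
    """Assign a BIO tag to each whitespace-separated token of a message."""
    tags = []
    prev_toxic = False
    for token in message.split():
        if token.lower().strip(".,!?") in TOXIC_LEXICON:
            # B-TOX starts a toxic span; I-TOX continues one.
            tags.append((token, "I-TOX" if prev_toxic else "B-TOX"))
            prev_toxic = True
        else:
            tags.append((token, "O"))
            prev_toxic = False
    return tags
```

For example, `tag_toxic_tokens("gg ez noob")` tags the first two tokens `O` and the last token `B-TOX`, which is the per-token output format a slot-filling model for this task would produce.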

Cite

Text

Jia et al. "In-Game Toxic Language Detection: Shared Task and Attention Residuals (Student Abstract)." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I13.26979

Markdown

[Jia et al. "In-Game Toxic Language Detection: Shared Task and Attention Residuals (Student Abstract)." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/jia2023aaai-game/) doi:10.1609/AAAI.V37I13.26979

BibTeX

@inproceedings{jia2023aaai-game,
  title     = {{In-Game Toxic Language Detection: Shared Task and Attention Residuals (Student Abstract)}},
  author    = {Jia, Yuanzhe and Wu, Weixuan and Cao, Feiqi and Han, Soyeon Caren},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {16238--16239},
  doi       = {10.1609/AAAI.V37I13.26979},
  url       = {https://mlanthology.org/aaai/2023/jia2023aaai-game/}
}