OD-DETR: Online Distillation for Stabilizing Training of Detection Transformer
Abstract
Video action understanding tasks in real-world scenarios often suffer from limited data. In this paper, we address the data-limited action understanding problem by bridging the data-scarcity gap. We propose a novel method that leverages a text-to-video diffusion transformer to generate annotated data for model training. This paradigm enables the generation of realistic annotated data at unlimited scale without human intervention. We propose an Information Enhancement Strategy and an Uncertainty-Based Soft Target tailored to training on generated samples. Through quantitative and qualitative analyses, we find that real samples generally carry richer information than generated samples. Based on this observation, the Information Enhancement Strategy enriches the informational content of generated samples from two perspectives: the environment and the character. Furthermore, we observe that a portion of low-quality generated samples can negatively affect model training. To address this, we devise an uncertainty-based label-smoothing strategy that applies stronger smoothing to these low-quality samples, thereby reducing their impact. We demonstrate the effectiveness of the proposed method on four datasets and five tasks, achieving state-of-the-art performance on zero-shot action recognition.
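The uncertainty-based soft target described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the linear interpolation between `base_eps` and `max_eps`, and the assumption that each sample comes with an uncertainty score normalized to [0, 1], are choices made here for clarity.

```python
import numpy as np

def uncertainty_soft_target(label: int, num_classes: int, uncertainty: float,
                            base_eps: float = 0.1, max_eps: float = 0.4) -> np.ndarray:
    """Build a smoothed one-hot target whose smoothing strength grows with
    the sample's uncertainty (assumed pre-normalized to [0, 1])."""
    # Interpolate the smoothing coefficient: confident samples get base_eps,
    # highly uncertain (low-quality) generated samples get max_eps.
    eps = base_eps + (max_eps - base_eps) * float(np.clip(uncertainty, 0.0, 1.0))
    # Spread eps uniformly over the non-target classes.
    target = np.full(num_classes, eps / (num_classes - 1))
    target[label] = 1.0 - eps
    return target

# A confident sample keeps a sharp target...
sharp = uncertainty_soft_target(label=2, num_classes=5, uncertainty=0.0)
# ...while a low-quality generated sample is smoothed more heavily,
# so its (possibly noisy) label contributes a weaker training signal.
soft = uncertainty_soft_target(label=2, num_classes=5, uncertainty=1.0)
```

In this sketch the target-class probability drops from 0.9 to 0.6 as uncertainty goes from 0 to 1, which is one simple way to down-weight unreliable generated samples in a cross-entropy loss.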
Cite
Text
Wu et al. "OD-DETR: Online Distillation for Stabilizing Training of Detection Transformer." International Joint Conference on Artificial Intelligence, 2024. doi:10.24963/ijcai.2024/160

Markdown

[Wu et al. "OD-DETR: Online Distillation for Stabilizing Training of Detection Transformer." International Joint Conference on Artificial Intelligence, 2024.](https://mlanthology.org/ijcai/2024/wu2024ijcai-od/) doi:10.24963/ijcai.2024/160

BibTeX
@inproceedings{wu2024ijcai-od,
title = {{OD-DETR: Online Distillation for Stabilizing Training of Detection Transformer}},
author = {Wu, Shengjian and Sun, Li and Li, Qingli},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2024},
pages = {1443--1451},
doi = {10.24963/ijcai.2024/160},
url = {https://mlanthology.org/ijcai/2024/wu2024ijcai-od/}
}