Classification Matters: Improving Video Action Detection with Class-Specific Attention

Abstract

Video action detection (VAD) aims to detect actors and classify their actions in a video. We observe that VAD suffers more from classification than from localization of actors. Hence, we analyze how prevailing methods form features for classification and find that they prioritize actor regions while often overlooking the contextual information necessary for accurate classification. Accordingly, we propose to reduce the bias toward actor regions and to encourage attending to the context relevant to each action class. By assigning a dedicated query to each action class, our model can dynamically determine where to focus for effective classification. The proposed model demonstrates superior performance on three challenging benchmarks with significantly fewer parameters and less computation.
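The core idea in the abstract, one learnable query per action class that attends over the video's spatio-temporal context, can be illustrated with a minimal NumPy sketch. All shapes, names, and the single-head scaled dot-product form are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def class_specific_attention(class_queries, context):
    """Hypothetical sketch: each action class has its own query vector,
    so each class can attend to different parts of the video context.

    class_queries: (C, d) one learnable query per action class
    context:       (N, d) flattened spatio-temporal video features
    returns:       (C, d) a class-specific feature per action class
    """
    d = class_queries.shape[-1]
    scores = class_queries @ context.T / np.sqrt(d)  # (C, N) similarity
    attn = softmax(scores, axis=-1)                  # per-class attention map
    return attn @ context                            # (C, d) pooled features

# Toy example: 4 action classes, 16 context tokens, 8-dim features
rng = np.random.default_rng(0)
queries = rng.standard_normal((4, 8))
context = rng.standard_normal((16, 8))
class_feats = class_specific_attention(queries, context)
print(class_feats.shape)  # (4, 8)
```

Each row of the resulting matrix could then feed a per-class binary classifier, so that classification for each action is driven by the context that class attends to rather than by a single actor-centric feature.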

Cite

Text

Lee et al. "Classification Matters: Improving Video Action Detection with Class-Specific Attention." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72661-3_26

Markdown

[Lee et al. "Classification Matters: Improving Video Action Detection with Class-Specific Attention." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/lee2024eccv-classification/) doi:10.1007/978-3-031-72661-3_26

BibTeX

@inproceedings{lee2024eccv-classification,
  title     = {{Classification Matters: Improving Video Action Detection with Class-Specific Attention}},
  author    = {Lee, Jinsung and Kim, Taeoh and Lee, Inwoong and Shim, Minho and Wee, Dongyoon and Cho, Minsu and Kwak, Suha},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-72661-3_26},
  url       = {https://mlanthology.org/eccv/2024/lee2024eccv-classification/}
}