STMixer: A One-Stage Sparse Action Detector

Abstract

Traditional video action detectors typically adopt a two-stage pipeline, where a person detector is first employed to yield actor boxes and then 3D RoIAlign is used to extract actor-specific features for classification. This detection paradigm requires multi-stage training and inference and cannot capture contextual information outside the bounding box. Recently, a few query-based action detectors have been proposed to predict action instances in an end-to-end manner. However, they still lack adaptability in feature sampling or decoding, and thus suffer from inferior performance or slow convergence. In this paper, we propose a new one-stage sparse action detector, termed STMixer. STMixer is based on two core designs. First, we present a query-based adaptive feature sampling module, which endows our STMixer with the flexibility to mine a set of discriminative features from the entire spatiotemporal domain. Second, we devise a dual-branch feature mixing module, which allows our STMixer to dynamically attend to and mix video features along the spatial and temporal dimensions respectively for better feature decoding. Coupling these two designs with a video backbone yields an efficient and accurate action detector. Without bells and whistles, STMixer achieves state-of-the-art results on the AVA, UCF101-24, and JHMDB datasets.
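The two core ideas above can be illustrated with a loose NumPy sketch. This is not the paper's implementation: all sizes, weight matrices, and the nearest-neighbor gather (in place of trilinear interpolation) are simplifying assumptions. It shows only the overall data flow: each query predicts sampling locations over the whole spatiotemporal feature volume, and the sampled features are then mixed by query-generated weights in two branches.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): T frames, H x W grid, C channels.
T, H, W, C = 4, 8, 8, 16
N_q, P = 3, 4                      # number of queries, sampled points per query

features = rng.standard_normal((T, H, W, C))   # backbone feature volume
queries = rng.standard_normal((N_q, C))        # learnable query embeddings

# --- Query-based adaptive feature sampling (sketch) ---
# Each query linearly predicts P normalized (t, y, x) locations over the
# entire spatiotemporal domain; here a nearest-index gather stands in for
# the differentiable (bilinear/trilinear) sampling a real model would use.
W_loc = rng.standard_normal((C, P * 3)) * 0.1  # assumed projection weights
loc = 1.0 / (1.0 + np.exp(-(queries @ W_loc))) # sigmoid -> (0, 1)
loc = loc.reshape(N_q, P, 3)
t_idx = np.clip((loc[..., 0] * T).astype(int), 0, T - 1)
y_idx = np.clip((loc[..., 1] * H).astype(int), 0, H - 1)
x_idx = np.clip((loc[..., 2] * W).astype(int), 0, W - 1)
sampled = features[t_idx, y_idx, x_idx]        # (N_q, P, C)

# --- Dual-branch dynamic mixing (sketch) ---
# One branch mixes across the sampled points, the other across channels;
# both mixing matrices are generated from the query itself.
W_p = rng.standard_normal((C, P * P)) * 0.1    # assumed weights
W_c = rng.standard_normal((C, C * C)) * 0.01   # assumed weights
M_p = (queries @ W_p).reshape(N_q, P, P)
M_c = (queries @ W_c).reshape(N_q, C, C)
mixed = np.einsum('npq,nqc->npc', M_p, sampled)  # point-wise mixing
mixed = np.einsum('npc,ncd->npd', mixed, M_c)    # channel-wise mixing
decoded = mixed.mean(axis=1)                     # (N_q, C) per-query feature

print(decoded.shape)  # one decoded feature vector per query
```

In a full detector, `decoded` would feed classification and box-regression heads, and the whole pipeline would be trained end-to-end with set-based matching.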

Cite

Text

Wu et al. "STMixer: A One-Stage Sparse Action Detector." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.01414

Markdown

[Wu et al. "STMixer: A One-Stage Sparse Action Detector." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/wu2023cvpr-stmixer/) doi:10.1109/CVPR52729.2023.01414

BibTeX

@inproceedings{wu2023cvpr-stmixer,
  title     = {{STMixer: A One-Stage Sparse Action Detector}},
  author    = {Wu, Tao and Cao, Mengqi and Gao, Ziteng and Wu, Gangshan and Wang, Limin},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2023},
  pages     = {14720--14729},
  doi       = {10.1109/CVPR52729.2023.01414},
  url       = {https://mlanthology.org/cvpr/2023/wu2023cvpr-stmixer/}
}