ACGNet: Action Complement Graph Network for Weakly-Supervised Temporal Action Localization
Abstract
Weakly-supervised temporal action localization (WTAL) in untrimmed videos has emerged as a practical but challenging task since only video-level labels are available. Existing approaches typically leverage off-the-shelf segment-level features, which suffer from spatial incompleteness and temporal incoherence, thus limiting their performance. In this paper, we tackle this problem from a new perspective by enhancing segment-level representations with a simple yet effective graph convolutional network, namely the action complement graph network (ACGNet). It enables each video segment to perceive spatial-temporal dependencies from other segments that potentially convey complementary clues, implicitly mitigating the negative effects caused by the two issues above. In this way, the segment-level features become more discriminative and robust to spatial-temporal variations, contributing to higher localization accuracy. More importantly, the proposed ACGNet works as a universal module that can be flexibly plugged into different WTAL frameworks, while preserving end-to-end training. Extensive experiments are conducted on the THUMOS'14 and ActivityNet 1.2 benchmarks, where the state-of-the-art results clearly demonstrate the superiority of the proposed approach.
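To make the core idea concrete, the following is a minimal, hypothetical sketch (not the authors' exact ACGNet) of graph-based segment-feature enhancement: segment features are connected to their most similar segments and refined by one graph-convolution-style propagation step. The function name, `top_k`, and `alpha` are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def enhance_segment_features(X, top_k=5, alpha=0.5):
    """Illustrative sketch of graph-based feature enhancement.

    X: (T, D) array of off-the-shelf segment-level features.
    top_k: number of most similar segments linked to each node (assumed).
    alpha: residual mixing weight between original and propagated features (assumed).
    """
    T = X.shape[0]
    # Cosine similarity between every pair of segments.
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-8)
    sim = Xn @ Xn.T
    # Keep only the top-k most similar segments per node (sparse graph).
    adj = np.zeros((T, T), dtype=X.dtype)
    idx = np.argsort(-sim, axis=1)[:, :top_k]
    rows = np.repeat(np.arange(T), top_k)
    adj[rows, idx.ravel()] = sim[rows, idx.ravel()]
    # Row-normalize the adjacency, propagate, and mix with the original features.
    adj /= adj.sum(axis=1, keepdims=True) + 1e-8
    return alpha * X + (1 - alpha) * adj @ X

# Usage: features for a 100-segment video with 1024-D descriptors.
feats = np.random.randn(100, 1024).astype(np.float32)
enhanced = enhance_segment_features(feats)
print(enhanced.shape)  # (100, 1024)
```

Because this enhancement only transforms segment features, such a module can in principle be placed in front of different WTAL heads without changing their training objectives, which reflects the plug-and-play property claimed in the abstract.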
Cite
Text
Yang et al. "ACGNet: Action Complement Graph Network for Weakly-Supervised Temporal Action Localization." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I3.20216
Markdown
[Yang et al. "ACGNet: Action Complement Graph Network for Weakly-Supervised Temporal Action Localization." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/yang2022aaai-acgnet/) doi:10.1609/AAAI.V36I3.20216
BibTeX
@inproceedings{yang2022aaai-acgnet,
title = {{ACGNet: Action Complement Graph Network for Weakly-Supervised Temporal Action Localization}},
author = {Yang, Zichen and Qin, Jie and Huang, Di},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2022},
pages = {3090--3098},
doi = {10.1609/AAAI.V36I3.20216},
url = {https://mlanthology.org/aaai/2022/yang2022aaai-acgnet/}
}