Prompt-Guided Disentangled Representation for Action Recognition
Abstract
Action recognition is a fundamental task in video understanding. Existing methods typically extract unified features to process all actions in one video, which makes it challenging to model the interactions between different objects in multi-action scenarios. To alleviate this issue, we explore disentangling any specified action from a complex scene as an effective solution. In this paper, we propose Prompt-guided Disentangled Representation for Action Recognition (ProDA), a novel framework that disentangles any specified action from a multi-action scene. ProDA leverages Spatio-temporal Scene Graphs (SSGs) and introduces a Dynamic Prompt Module (DPM) to guide a Graph Parsing Neural Network (GPNN) in generating action-specific representations. Furthermore, we design a video-adapted GPNN that aggregates information using dynamic weights. Extensive experiments on two complex video action datasets, Charades and SportsHHI, demonstrate the effectiveness of our approach compared with state-of-the-art methods. Our code can be found at https://github.com/iamsnaping/ProDA.git.
Cite
Text
Wu et al. "Prompt-Guided Disentangled Representation for Action Recognition." Advances in Neural Information Processing Systems, 2025.
Markdown
[Wu et al. "Prompt-Guided Disentangled Representation for Action Recognition." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/wu2025neurips-promptguided/)
BibTeX
@inproceedings{wu2025neurips-promptguided,
title = {{Prompt-Guided Disentangled Representation for Action Recognition}},
author = {Wu, Tianci and Zhu, Guangming and Jiang, Lu and Wang, Siyuan and Wang, Ning and Xiong, Nuoye and Zhang, Liang},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/wu2025neurips-promptguided/}
}