Multi-Agent Attentional Activity Recognition
Abstract
Multi-modality is an important characteristic of sensor-based activity recognition. In this work, we consider two inherent properties of human activities: the spatially and temporally varying salience of features, and the relations between activities and the corresponding body-part motions. Based on these, we propose a multi-agent spatial-temporal attention model. The spatial-temporal attention mechanism intelligently selects informative modalities and their active periods, while the multiple agents represent an activity as collective motions across body parts, each agent independently selecting the modalities associated with a single motion. Sharing a joint recognition goal, the agents exchange gained information and coordinate their selection policies to learn the optimal recognition model. Experimental results on four real-world datasets demonstrate that the proposed model outperforms state-of-the-art methods.
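As a rough illustration of the spatial-temporal attention idea described above, the sketch below weights sensor modalities at each time step (spatial attention) and then weights time steps of the fused sequence (temporal attention). All shapes, scoring functions, and parameter names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_temporal_attention(features, w_spatial, w_temporal):
    """features: (T, M, D) = time steps x modalities x feature dim.

    Hypothetical sketch: linear scoring vectors w_spatial, w_temporal
    stand in for whatever learned scoring the model actually uses.
    """
    # Spatial attention: score each modality at each time step,
    # then fuse modalities into one feature vector per step.
    spatial_scores = features @ w_spatial            # (T, M)
    alpha = softmax(spatial_scores, axis=1)          # weights over modalities
    context = (alpha[..., None] * features).sum(1)   # (T, D)

    # Temporal attention: score each time step of the fused sequence,
    # then pool over time into a single activity representation.
    temporal_scores = context @ w_temporal           # (T,)
    beta = softmax(temporal_scores, axis=0)          # weights over time
    return (beta[:, None] * context).sum(0)          # (D,)

rng = np.random.default_rng(0)
T, M, D = 20, 3, 8   # e.g. 20 windows, 3 body-part sensors, 8-dim features
feats = rng.normal(size=(T, M, D))
out = spatial_temporal_attention(feats, rng.normal(size=D), rng.normal(size=D))
print(out.shape)  # (8,)
```

In the paper's multi-agent setting, each agent would run its own modality selection for one body-part motion and the agents coordinate toward the joint recognition objective; this single-agent sketch only shows the attention weighting itself.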
Cite
Text
Chen et al. "Multi-Agent Attentional Activity Recognition." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/186
Markdown
[Chen et al. "Multi-Agent Attentional Activity Recognition." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/chen2019ijcai-multi/) doi:10.24963/IJCAI.2019/186
BibTeX
@inproceedings{chen2019ijcai-multi,
title = {{Multi-Agent Attentional Activity Recognition}},
author = {Chen, Kaixuan and Yao, Lina and Zhang, Dalin and Guo, Bin and Yu, Zhiwen},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2019},
pages = {1344--1350},
doi = {10.24963/IJCAI.2019/186},
url = {https://mlanthology.org/ijcai/2019/chen2019ijcai-multi/}
}