AttnSense: Multi-Level Attention Mechanism for Multimodal Human Activity Recognition
Abstract
Sensor-based human activity recognition is a fundamental research problem in ubiquitous computing that uses rich sensing data from multimodal embedded sensors, such as accelerometers and gyroscopes, to infer human activities. Existing activity recognition approaches either rely on domain knowledge or fail to address the spatial-temporal dependencies of the sensing signals. In this paper, we propose a novel attention-based multimodal neural network model called AttnSense for multimodal human activity recognition. AttnSense introduces a framework that combines an attention mechanism with a convolutional neural network (CNN) and a Gated Recurrent Units (GRU) network to capture the dependencies of sensing signals in both the spatial and temporal domains, which offers advantages in prioritized sensor selection and improves comprehensibility. Extensive experiments on three public datasets show that AttnSense achieves competitive performance in activity recognition compared with several state-of-the-art methods.
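The core idea the abstract describes, attention weights that prioritize informative sensor modalities before temporal modeling, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the projection vector `w` and the feature shapes are illustrative stand-ins for the learned CNN features and attention parameters in the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(features, w):
    """Fuse per-modality feature vectors with learned attention weights.

    features: (n_modalities, d) array, e.g. CNN features for
              accelerometer, gyroscope, magnetometer channels.
    w:        (d,) illustrative learned scoring vector.
    Returns the attention-weighted fused feature (d,) and the weights (n,).
    """
    scores = features @ w          # one relevance score per modality
    alpha = softmax(scores)        # weights sum to 1 across modalities
    fused = alpha @ features       # weighted sum of modality features
    return fused, alpha

rng = np.random.default_rng(0)
feats = rng.standard_normal((3, 8))  # 3 modalities, 8-dim features each
w = rng.standard_normal(8)
fused, alpha = attention_fuse(feats, w)
```

In AttnSense this fusion is applied at two levels: once across sensor modalities (spatial) and once across GRU hidden states over time (temporal); the same weighted-sum pattern applies at both levels.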
Cite
Text
Ma et al. "AttnSense: Multi-Level Attention Mechanism for Multimodal Human Activity Recognition." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/431
Markdown
[Ma et al. "AttnSense: Multi-Level Attention Mechanism for Multimodal Human Activity Recognition." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/ma2019ijcai-attnsense/) doi:10.24963/IJCAI.2019/431
BibTeX
@inproceedings{ma2019ijcai-attnsense,
title = {{AttnSense: Multi-Level Attention Mechanism for Multimodal Human Activity Recognition}},
author = {Ma, Haojie and Li, Wenzhong and Zhang, Xiao and Gao, Songcheng and Lu, Sanglu},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2019},
pages = {3109--3115},
doi = {10.24963/IJCAI.2019/431},
url = {https://mlanthology.org/ijcai/2019/ma2019ijcai-attnsense/}
}