Two-Stream Convolution Augmented Transformer for Human Activity Recognition

Abstract

Recognition of human activities is an important task due to its far-reaching applications, such as healthcare systems, context-aware services, and security monitoring. Recently, WiFi-based human activity recognition (HAR) has become popular owing to its non-invasiveness. Existing WiFi-based HAR methods regard WiFi signals as a temporal sequence of channel state information (CSI) and employ deep sequential models (e.g., RNN, LSTM) to automatically capture channel-over-time features. Although remarkably effective, they suffer from two major drawbacks. First, a single temporal point is too elementary a granularity to represent meaningful CSI patterns. Second, the time-over-channel features are equally important and serve as a natural form of data augmentation. To address these drawbacks, we propose a novel Two-stream Convolution Augmented Human Activity Transformer (THAT) model. Our model uses a two-stream structure to capture both time-over-channel and channel-over-time features, and a multi-scale convolution augmented transformer to capture range-based patterns. Extensive experiments on four real-world datasets demonstrate that our model outperforms state-of-the-art models in both effectiveness and efficiency.
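The two "streams" described in the abstract are essentially the same CSI matrix read along its two axes: channel-over-time treats each time step as a feature vector over channels, while time-over-channel transposes the matrix so each channel becomes a sequence over time. The following is a minimal illustrative sketch of that idea, not the authors' implementation; the function names are hypothetical, and the moving-average step is only a toy stand-in for the paper's learned multi-scale convolutions.

```python
def two_stream_views(csi):
    """csi: a T x C matrix (T time steps, C subcarrier channels),
    given as a list of T rows with C values each.
    Returns (channel_over_time, time_over_channel) views."""
    channel_over_time = csi                                # T vectors of C features
    time_over_channel = [list(col) for col in zip(*csi)]   # C vectors of T features
    return channel_over_time, time_over_channel


def multi_scale_smooth(seq, kernel_sizes=(1, 3, 5)):
    """Toy stand-in for a multi-scale convolution: centered moving
    averages at several window sizes, concatenated per position,
    so each scale exposes range-based patterns of a different width."""
    n = len(seq)
    out = []
    for i in range(n):
        feats = []
        for k in kernel_sizes:
            lo, hi = max(0, i - k // 2), min(n, i + k // 2 + 1)
            window = seq[lo:hi]
            feats.append(sum(window) / len(window))
        out.append(feats)
    return out
```

In the real model, each view is fed to its own convolution augmented transformer branch and the two branches are fused for classification; here the transpose alone shows why the second stream comes "for free" from the same CSI measurements.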

Cite

Text

Li et al. "Two-Stream Convolution Augmented Transformer for Human Activity Recognition." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/aaai.v35i1.16103

Markdown

[Li et al. "Two-Stream Convolution Augmented Transformer for Human Activity Recognition." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/li2021aaai-two/) doi:10.1609/aaai.v35i1.16103

BibTeX

@inproceedings{li2021aaai-two,
  title     = {{Two-Stream Convolution Augmented Transformer for Human Activity Recognition}},
  author    = {Li, Bing and Cui, Wei and Wang, Wei and Zhang, Le and Chen, Zhenghua and Wu, Min},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {286--293},
  doi       = {10.1609/aaai.v35i1.16103},
  url       = {https://mlanthology.org/aaai/2021/li2021aaai-two/}
}