Jointly Modeling Spatio-Temporal Features of Tactile Signals for Action Classification

Abstract

Tactile signals collected by wearable electronics are essential for modeling and understanding human behavior. One of the main applications of tactile signals is action classification, especially in healthcare and robotics. However, existing tactile classification methods fail to capture the spatial and temporal features of tactile signals simultaneously, which results in suboptimal performance. In this paper, we design the Spatio-Temporal Aware tactility Transformer (STAT) to utilize continuous tactile signals for action classification. We propose spatial and temporal embeddings along with a new temporal pretraining task in our model, which aims to enhance the transformer's modeling of the spatio-temporal features of tactile signals. Specifically, the designed temporal pretraining task differentiates the time order of tubelet inputs to model temporal properties explicitly. Experimental results on a public action classification dataset demonstrate that our model outperforms state-of-the-art methods in all metrics.
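The pipeline the abstract describes (tubelet tokenization of a tactile frame sequence, separate spatial and temporal embeddings, and a binary temporal-order pretraining signal) can be sketched as follows. This is a minimal illustrative sketch only: the function names, tubelet/patch sizes, embedding dimension, and NumPy-only setup are assumptions for exposition, not the authors' STAT implementation.

```python
import numpy as np

def tubelet_tokens(signal, t_size=4, p_size=8):
    """Split a (T, H, W) tactile frame sequence into flattened tubelets.

    Each tubelet covers t_size frames and a p_size x p_size spatial patch.
    Sizes here are illustrative, not the paper's actual configuration.
    """
    T, H, W = signal.shape
    nt, nh, nw = T // t_size, H // p_size, W // p_size
    tokens = (signal[:nt * t_size, :nh * p_size, :nw * p_size]
              .reshape(nt, t_size, nh, p_size, nw, p_size)
              .transpose(0, 2, 4, 1, 3, 5)          # group by (time, row, col) tubelet
              .reshape(nt * nh * nw, t_size * p_size * p_size))
    return tokens, (nt, nh, nw)

def add_spatiotemporal_embeddings(tokens, grid, d_model=64, seed=0):
    """Project tokens and add separate temporal and spatial embeddings.

    Random matrices stand in for learned parameters in this sketch.
    """
    rng = np.random.default_rng(seed)
    nt, nh, nw = grid
    proj = rng.standard_normal((tokens.shape[1], d_model)) * 0.02
    temporal_emb = rng.standard_normal((nt, d_model)) * 0.02       # one per time index
    spatial_emb = rng.standard_normal((nh * nw, d_model)) * 0.02   # one per patch location
    t_idx = np.repeat(np.arange(nt), nh * nw)   # temporal index of each token
    s_idx = np.tile(np.arange(nh * nw), nt)     # spatial index of each token
    return tokens @ proj + temporal_emb[t_idx] + spatial_emb[s_idx]

def temporal_order_label(time_i, time_j):
    """Pretraining target: 1 if tubelet at time_i precedes the one at time_j."""
    return int(time_i < time_j)
```

In this reading, the transformer backbone would consume the embedded tokens, and the pretraining head would be trained to predict `temporal_order_label` for sampled tubelet pairs, forcing the encoder to represent time order explicitly.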

Cite

Text

Lin et al. "Jointly Modeling Spatio-Temporal Features of Tactile Signals for Action Classification." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I12.29288

Markdown

[Lin et al. "Jointly Modeling Spatio-Temporal Features of Tactile Signals for Action Classification." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/lin2024aaai-jointly/) doi:10.1609/AAAI.V38I12.29288

BibTeX

@inproceedings{lin2024aaai-jointly,
  title     = {{Jointly Modeling Spatio-Temporal Features of Tactile Signals for Action Classification}},
  author    = {Lin, Jimmy and Li, Junkai and Gao, Jiasi and Ma, Weizhi and Liu, Yang},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {13817--13825},
  doi       = {10.1609/AAAI.V38I12.29288},
  url       = {https://mlanthology.org/aaai/2024/lin2024aaai-jointly/}
}