CTIN: Robust Contextual Transformer Network for Inertial Navigation

Abstract

Recently, data-driven inertial navigation approaches have demonstrated that well-trained neural networks can obtain accurate position estimates from inertial measurement unit (IMU) measurements. In this paper, we propose a novel robust Contextual Transformer-based network for Inertial Navigation (CTIN) to accurately predict velocity and trajectory. To this end, we first design a ResNet-based encoder, enhanced by local and global multi-head self-attention, to capture spatial contextual information from IMU measurements. Then we fuse these spatial representations with temporal knowledge by leveraging multi-head attention in the Transformer decoder. Finally, multi-task learning with uncertainty reduction is leveraged to improve learning efficiency and prediction accuracy of velocity and trajectory. Extensive experiments over a wide range of inertial datasets (e.g., RIDI, OxIOD, RoNIN, IDOL, and our own) show that CTIN is robust and outperforms state-of-the-art models.
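
The multi-task objective described in the abstract balances the velocity and trajectory losses using learned uncertainty. A common way to realize this kind of uncertainty-weighted multi-task learning is the homoscedastic-uncertainty formulation of Kendall et al. (2018); the sketch below is a generic PyTorch version of that idea, not the authors' released code, and the two task losses shown are hypothetical placeholders.

import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Generic uncertainty-weighted multi-task loss (Kendall et al., 2018).

    Each task i keeps a learnable log-variance s_i = log(sigma_i^2); the
    combined loss is sum_i 0.5 * (exp(-s_i) * L_i + s_i), so the network
    learns how strongly to weight each task instead of using fixed weights.
    """

    def __init__(self, num_tasks: int = 2):
        super().__init__()
        # One learnable log-variance per task, initialized to 0 (sigma = 1).
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses: list[torch.Tensor]) -> torch.Tensor:
        total = torch.zeros((), device=self.log_vars.device)
        for s, loss in zip(self.log_vars, task_losses):
            # exp(-s) down-weights noisy tasks; the +s term keeps
            # the network from driving all variances to infinity.
            total = total + 0.5 * (torch.exp(-s) * loss + s)
        return total

# Hypothetical usage with two placeholder task losses
# (e.g., a velocity regression loss and a trajectory loss):
criterion = UncertaintyWeightedLoss(num_tasks=2)
velocity_loss = torch.tensor(0.8)    # stand-in for an MSE over predicted velocity
trajectory_loss = torch.tensor(1.3)  # stand-in for an integrated-position loss
total_loss = criterion([velocity_loss, trajectory_loss])

Because the log-variances are trained jointly with the network, the relative weighting of the velocity and trajectory objectives adapts during training rather than requiring hand-tuned loss coefficients.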

Cite

Text

Rao et al. "CTIN: Robust Contextual Transformer Network for Inertial Navigation." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/aaai.v36i5.20479

Markdown

[Rao et al. "CTIN: Robust Contextual Transformer Network for Inertial Navigation." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/rao2022aaai-ctin/) doi:10.1609/aaai.v36i5.20479

BibTeX

@inproceedings{rao2022aaai-ctin,
  title     = {{CTIN: Robust Contextual Transformer Network for Inertial Navigation}},
  author    = {Rao, Bingbing and Kazemi, Ehsan and Ding, Yifan and Shila, Devu M. and Tucker, Frank M. and Wang, Liqiang},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {5413--5421},
  doi       = {10.1609/aaai.v36i5.20479},
  url       = {https://mlanthology.org/aaai/2022/rao2022aaai-ctin/}
}