Gesture Recognition with Keypoint and Radar Stream Fusion for Automated Vehicles

Abstract

We present a joint camera and radar approach that enables automated vehicles to understand and react to human gestures in everyday traffic. First, we process the radar data with a PointNet followed by a spatio-temporal multilayer perceptron (stMLP). Independently, the human body pose is extracted from the camera frame and processed with a separate stMLP network. We propose a fusion neural network that combines both modalities and includes an auxiliary loss for each of them. In experiments on a collected dataset, we show the advantages of gesture recognition with two modalities. Motivated by adverse weather conditions, we also demonstrate promising performance when one of the sensors is unavailable.
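
The abstract describes a two-stream design: a PointNet plus stMLP on the radar point clouds, a separate stMLP on the camera keypoints, and a fusion network trained with an auxiliary loss per modality. The sketch below is a minimal PyTorch reconstruction of that idea, not the authors' implementation: the layer sizes, sequence length, keypoint and radar-point counts, the simplified stand-in for the stMLP (a plain MLP over flattened spatio-temporal features), and the loss weights are all assumptions for illustration.

```python
# Minimal PyTorch sketch of the two-stream fusion described in the abstract.
# All shapes, layer sizes, and loss weights are assumptions, not the paper's.
import torch
import torch.nn as nn


class SimplePointNet(nn.Module):
    """PointNet-style block: shared per-point MLP followed by max pooling."""
    def __init__(self, in_dim=4, feat_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, points):              # points: (B, T, N, in_dim)
        x = self.mlp(points)                 # (B, T, N, feat_dim)
        return x.max(dim=2).values           # max over points -> (B, T, feat_dim)


class StreamMLP(nn.Module):
    """Stand-in for the stMLP: an MLP over flattened spatio-temporal features."""
    def __init__(self, in_dim, hidden=256, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                     # (B, T, D) -> (B, T*D)
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


class GestureFusionNet(nn.Module):
    """Keypoint and radar streams, a fusion head, and per-modality auxiliary heads."""
    def __init__(self, num_classes=8, seq_len=30, num_keypoints=17,
                 radar_dim=4):
        super().__init__()
        self.radar_pointnet = SimplePointNet(in_dim=radar_dim)
        self.radar_stream = StreamMLP(in_dim=seq_len * 128)
        self.keypoint_stream = StreamMLP(in_dim=seq_len * num_keypoints * 2)
        self.fusion_head = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, num_classes))
        # Auxiliary classifiers so each modality also receives its own loss.
        self.aux_keypoint = nn.Linear(128, num_classes)
        self.aux_radar = nn.Linear(128, num_classes)

    def forward(self, keypoints, radar):
        # keypoints: (B, T, K, 2) image-plane joints; radar: (B, T, N, radar_dim)
        f_kp = self.keypoint_stream(keypoints.flatten(2))       # (B, 128)
        f_rad = self.radar_stream(self.radar_pointnet(radar))   # (B, 128)
        fused = self.fusion_head(torch.cat([f_kp, f_rad], dim=1))
        return fused, self.aux_keypoint(f_kp), self.aux_radar(f_rad)


if __name__ == "__main__":
    model = GestureFusionNet()
    kp = torch.randn(2, 30, 17, 2)            # batch of keypoint sequences
    rad = torch.randn(2, 30, 64, 4)           # batch of radar point-cloud sequences
    logits, aux_kp, aux_rad = model(kp, rad)
    target = torch.randint(0, 8, (2,))
    criterion = nn.CrossEntropyLoss()
    # Fused loss plus the two auxiliary losses; the 0.5 weighting is an assumed example.
    loss = criterion(logits, target) + 0.5 * (
        criterion(aux_kp, target) + criterion(aux_rad, target))
    loss.backward()
```

Training against the fused prediction and both auxiliary heads keeps each stream informative on its own, which is consistent with the abstract's claim of promising performance when one sensor is unavailable.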

Cite

Text

Holzbock et al. "Gesture Recognition with Keypoint and Radar Stream Fusion for Automated Vehicles." European Conference on Computer Vision Workshops, 2022. doi:10.1007/978-3-031-25056-9_36

Markdown

[Holzbock et al. "Gesture Recognition with Keypoint and Radar Stream Fusion for Automated Vehicles." European Conference on Computer Vision Workshops, 2022.](https://mlanthology.org/eccvw/2022/holzbock2022eccvw-gesture/) doi:10.1007/978-3-031-25056-9_36

BibTeX

@inproceedings{holzbock2022eccvw-gesture,
  title     = {{Gesture Recognition with Keypoint and Radar Stream Fusion for Automated Vehicles}},
  author    = {Holzbock, Adrian and Kern, Nicolai and Waldschmidt, Christian and Dietmayer, Klaus and Belagiannis, Vasileios},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2022},
  pages     = {570--584},
  doi       = {10.1007/978-3-031-25056-9_36},
  url       = {https://mlanthology.org/eccvw/2022/holzbock2022eccvw-gesture/}
}