Semantic Terrain Classification for Off-Road Autonomous Driving

Abstract

Producing dense and accurate traversability maps is crucial for autonomous off-road navigation. In this paper, we focus on classifying terrain into four cost classes (free, low-cost, medium-cost, obstacle) for traversability assessment. This requires a robot to reason about both the semantics (what objects are present?) and the geometry (where are the objects located?) of its environment. To achieve this goal, we develop the Bird’s Eye View Network (BEVNet), a deep neural network that directly predicts a local map encoding terrain classes from sparse LiDAR inputs. BEVNet processes both geometric and semantic information in a temporally consistent fashion. More importantly, it uses learned priors and history to predict terrain classes in unseen space and into the future, allowing a robot to better appraise its situation. We quantitatively evaluate BEVNet in both on-road and off-road scenarios and show that it outperforms a variety of strong baselines.
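
The abstract describes the pipeline at a high level: rasterize sparse LiDAR returns into a bird’s-eye-view grid and predict a terrain cost class for every cell. The toy PyTorch sketch below illustrates only that input-to-output shape under assumptions of ours; the grid size, cell resolution, two-channel count/height rasterization, and small CNN are all hypothetical. It is not the authors’ BEVNet, and it omits the semantic features, temporal aggregation, and learned priors that are the paper’s actual contributions.

# Hypothetical sketch (not the authors' BEVNet): scatter an (N, 3) LiDAR
# cloud into a BEV grid, then predict per-cell logits over the four cost
# classes (free, low-cost, medium-cost, obstacle).
import torch
import torch.nn as nn


def points_to_bev(points, grid_size=128, cell_m=0.4, z_range=(-2.0, 4.0)):
    """Rasterize (N, 3) points into a (2, H, W) BEV tensor:
    channel 0 = point count per cell, channel 1 = max height per cell."""
    half = grid_size * cell_m / 2.0
    # Keep points inside a square map centered on the sensor.
    mask = (points[:, 0].abs() < half) & (points[:, 1].abs() < half) \
        & (points[:, 2] > z_range[0]) & (points[:, 2] < z_range[1])
    pts = points[mask]
    ix = ((pts[:, 0] + half) / cell_m).long().clamp(0, grid_size - 1)
    iy = ((pts[:, 1] + half) / cell_m).long().clamp(0, grid_size - 1)
    flat = iy * grid_size + ix
    bev = torch.zeros(2, grid_size * grid_size)
    bev[0].scatter_add_(0, flat, torch.ones(len(pts)))   # occupancy count
    bev[1].scatter_reduce_(0, flat, pts[:, 2],
                           reduce="amax", include_self=False)  # max height
    return bev.view(2, grid_size, grid_size)


class TinyBEVClassifier(nn.Module):
    """Toy fully convolutional head over the BEV grid; 4 output classes."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_classes, 1),    # per-cell class logits
        )

    def forward(self, bev):                 # (B, 2, H, W) -> (B, 4, H, W)
        return self.net(bev)


if __name__ == "__main__":
    cloud = torch.randn(20000, 3) * torch.tensor([15.0, 15.0, 1.0])  # fake scan
    bev = points_to_bev(cloud).unsqueeze(0)
    logits = TinyBEVClassifier()(bev)
    costmap = logits.argmax(dim=1)          # (1, 128, 128) terrain-class map
    print(costmap.shape, costmap.unique())

The argmax over the per-cell logits yields a 128x128 map over the four cost classes, which a downstream planner could consume as a costmap; in the paper itself this prediction is additionally made dense, temporally consistent, and extrapolated into unseen space.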

Cite

Text

Shaban et al. "Semantic Terrain Classification for Off-Road Autonomous Driving." Conference on Robot Learning, 2021.

Markdown

[Shaban et al. "Semantic Terrain Classification for Off-Road Autonomous Driving." Conference on Robot Learning, 2021.](https://mlanthology.org/corl/2021/shaban2021corl-semantic/)

BibTeX

@inproceedings{shaban2021corl-semantic,
  title     = {{Semantic Terrain Classification for Off-Road Autonomous Driving}},
  author    = {Shaban, Amirreza and Meng, Xiangyun and Lee, JoonHo and Boots, Byron and Fox, Dieter},
  booktitle = {Conference on Robot Learning},
  year      = {2021},
  pages     = {619--629},
  volume    = {164},
  url       = {https://mlanthology.org/corl/2021/shaban2021corl-semantic/}
}