Human-Vehicle Cooperative Visual Perception for Autonomous Driving Under Complex Traffic Environments

Abstract

Human-vehicle cooperative driving has become a critical stage on the path to higher levels of driving automation. For an autonomous driving system, complex traffic environments pose great challenges to its visual perception tasks. Based on the gaze points of human drivers and the images captured by a semi-automated vehicle, this work proposes a framework that fuses the two visual modalities using the Laplacian Pyramid algorithm. By adopting an Extended Kalman Filter, we improve the detection accuracy of objects with interaction risk. This work also shows that the cooperative visual perception framework predicts the trajectories of objects with interaction risk more accurately than plain object detection algorithms. The findings can be applied to improve the visual perception ability of autonomous vehicles and to support proper decision-making and control.
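To make the Extended Kalman Filter step concrete, below is a minimal sketch of one EKF predict-update cycle applied to tracking a detected object. The paper does not specify its motion or measurement models, so the constant-velocity state, position-only detections, and all noise parameters here are assumptions for illustration (with a linear model the EKF reduces to the standard Kalman filter; the Jacobian callbacks are kept to show the general form):

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One predict-update cycle of an Extended Kalman Filter.

    f / h are the (possibly nonlinear) motion and measurement models;
    F_jac / H_jac return their Jacobians at the current estimate.
    """
    # Predict: propagate state and covariance through the motion model.
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the measurement innovation.
    y = z - h(x_pred)                       # innovation
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(x.size) - K @ H) @ P_pred
    return x_new, P_new

# Assumed setup: constant-velocity state [px, py, vx, vy],
# detections give a noisy 2-D position.
dt = 0.1
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
f, F_jac = (lambda x: A @ x), (lambda x: A)
h, H_jac = (lambda x: H @ x), (lambda x: H)
Q = 1e-4 * np.eye(4)   # process noise (assumed)
R = 1e-2 * np.eye(2)   # measurement noise (assumed)

# Track a simulated object moving at (1.0, 0.5) m/s from noisy detections.
rng = np.random.default_rng(0)
x_est, P = np.zeros(4), np.eye(4)
true_pos, true_vel = np.zeros(2), np.array([1.0, 0.5])
for _ in range(100):
    true_pos = true_pos + true_vel * dt
    z = true_pos + rng.normal(0, 0.1, size=2)   # noisy detected position
    x_est, P = ekf_step(x_est, P, z, f, F_jac, h, H_jac, Q, R)

print(np.round(x_est[2:], 2))   # estimated velocity
```

After the filter converges, predicting a future position is a matter of repeatedly applying the motion model to the smoothed state, which is what lets the framework anticipate the trajectory of a risky object rather than only report its current detection.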

Cite

Text

Zhao et al. "Human-Vehicle Cooperative Visual Perception for Autonomous Driving Under Complex Traffic Environments." European Conference on Computer Vision Workshops, 2022. doi:10.1007/978-3-031-25056-9_41

Markdown

[Zhao et al. "Human-Vehicle Cooperative Visual Perception for Autonomous Driving Under Complex Traffic Environments." European Conference on Computer Vision Workshops, 2022.](https://mlanthology.org/eccvw/2022/zhao2022eccvw-humanvehicle/) doi:10.1007/978-3-031-25056-9_41

BibTeX

@inproceedings{zhao2022eccvw-humanvehicle,
  title     = {{Human-Vehicle Cooperative Visual Perception for Autonomous Driving Under Complex Traffic Environments}},
  author    = {Zhao, Yiyue and Lei, Cailin and Shen, Yu and Du, Yuchuan and Chen, Qijun},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2022},
  pages     = {646--662},
  doi       = {10.1007/978-3-031-25056-9_41},
  url       = {https://mlanthology.org/eccvw/2022/zhao2022eccvw-humanvehicle/}
}