End-to-End Deep Structured Models for Drawing Crosswalks
Abstract
In this paper we address the problem of detecting crosswalks from LiDAR and camera imagery. Towards this goal, given multiple LiDAR sweeps and the corresponding imagery, we project both inputs onto the ground surface to produce a top-down view of the scene. We then leverage convolutional neural networks to extract semantic cues about the location of the crosswalks. These cues are then used in combination with road centerlines from freely available maps (e.g., OpenStreetMap) to solve a structured optimization problem which draws the final crosswalk boundaries. Our experiments on crosswalks in a large city area show that 96.6% automation can be achieved.
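As a rough illustration of the first stage described in the abstract (projecting LiDAR onto the ground surface to form a top-down view), the sketch below rasterizes aggregated LiDAR points into a bird's-eye-view intensity image. This is not the authors' implementation; the grid extent, the 5 cm resolution, and the function name lidar_to_bev are illustrative assumptions.

```python
# Minimal sketch, assuming points are already expressed in a ground-aligned frame.
import numpy as np

def lidar_to_bev(points, intensities, x_range=(-40.0, 40.0),
                 y_range=(-40.0, 40.0), resolution=0.05):
    """Rasterize LiDAR points into a top-down intensity image.

    points      : (N, 3) array of x, y, z coordinates (metres).
    intensities : (N,) array of per-point LiDAR return intensities.
    resolution  : size of one BEV pixel in metres (0.05 m is an assumption).
    """
    width = int((x_range[1] - x_range[0]) / resolution)
    height = int((y_range[1] - y_range[0]) / resolution)

    # Keep only points inside the chosen BEV extent.
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts, inten = points[mask], intensities[mask]

    # Discretize x/y into pixel indices.
    cols = ((pts[:, 0] - x_range[0]) / resolution).astype(np.int64)
    rows = ((pts[:, 1] - y_range[0]) / resolution).astype(np.int64)

    # Accumulate mean intensity per cell (sum / count).
    bev_sum = np.zeros((height, width), dtype=np.float64)
    bev_cnt = np.zeros((height, width), dtype=np.float64)
    np.add.at(bev_sum, (rows, cols), inten)
    np.add.at(bev_cnt, (rows, cols), 1.0)
    return bev_sum / np.maximum(bev_cnt, 1.0)
```

Such a raster (together with projected camera imagery) could then serve as input to the convolutional network and the structured optimization over crosswalk boundaries described above.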
Cite
Text
Liang and Urtasun. "End-to-End Deep Structured Models for Drawing Crosswalks." Proceedings of the European Conference on Computer Vision (ECCV), 2018. doi:10.1007/978-3-030-01258-8_25
Markdown
[Liang and Urtasun. "End-to-End Deep Structured Models for Drawing Crosswalks." Proceedings of the European Conference on Computer Vision (ECCV), 2018.](https://mlanthology.org/eccv/2018/liang2018eccv-endtoend/) doi:10.1007/978-3-030-01258-8_25
BibTeX
@inproceedings{liang2018eccv-endtoend,
title = {{End-to-End Deep Structured Models for Drawing Crosswalks}},
author = {Liang, Justin and Urtasun, Raquel},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2018},
doi = {10.1007/978-3-030-01258-8_25},
url = {https://mlanthology.org/eccv/2018/liang2018eccv-endtoend/}
}