A Dataset for Lane Instance Segmentation in Urban Environments

Abstract

Autonomous vehicles require knowledge of the surrounding road layout, which can be predicted by state-of-the-art CNNs. This work addresses the current lack of data for determining lane instances, which are needed for various driving manoeuvres. The main issue is the time-consuming manual labelling process, which is typically applied per image. We observe that driving the car is itself a form of annotation. We therefore propose a semi-automated method for efficient labelling of image sequences: a road plane is estimated in 3D from where the car has driven, and labels placed on this plane are projected into every image of the sequence. This reduces the average labelling time per image to 5 seconds, and only an inexpensive dash-cam is required for data capture. We release a dataset of 24,000 images and additionally report experimental results for semantic segmentation and instance segmentation.
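
The projection step described above can be illustrated with a minimal sketch. The snippet below is not the authors' code; the function name and toy geometry are hypothetical. It assumes a pinhole camera with known intrinsics K and a known world-to-camera pose (R, t) per frame (e.g. from visual odometry), and re-projects label points lying on the estimated 3D road plane into a single image:

# A minimal sketch (not the authors' implementation) of re-projecting labels
# annotated on an estimated 3D road plane into one image of the sequence.
# Camera intrinsics K and per-frame poses (R, t) are assumed to come from an
# external structure-from-motion / visual-odometry step.

import numpy as np

def project_plane_labels(points_world, K, R, t):
    """Project 3D label points lying on the road plane into one image.

    points_world : (N, 3) label points on the estimated road plane (world frame)
    K            : (3, 3) camera intrinsic matrix
    R, t         : world-to-camera rotation (3, 3) and translation (3,)

    Returns (N, 2) pixel coordinates and a boolean mask of points in front
    of the camera.
    """
    # Transform label points from the world frame into the camera frame.
    points_cam = points_world @ R.T + t          # (N, 3)
    in_front = points_cam[:, 2] > 0              # keep points with positive depth

    # Pinhole projection: x ~ K * X_cam, then divide by depth.
    proj = points_cam @ K.T                      # (N, 3)
    pixels = proj[:, :2] / proj[:, 2:3]
    return pixels, in_front


if __name__ == "__main__":
    # Toy example: a 10 m stretch of lane border, 1.5 m to the right of the
    # camera, on a flat road plane 1.6 m below the camera.
    border = np.stack([np.full(20, 1.5),         # x: lateral offset (m)
                       np.full(20, 1.6),         # y: height of road plane (m)
                       np.linspace(5, 15, 20)],  # z: distance ahead (m)
                      axis=1)
    K = np.array([[1000, 0, 960],
                  [0, 1000, 540],
                  [0,    0,   1]], dtype=float)
    R, t = np.eye(3), np.zeros(3)                # plane already in camera coords
    px, valid = project_plane_labels(border, K, R, t)
    print(px[valid][:3])                         # pixel coords of first few points

Repeating this projection for every frame of a sequence propagates a single set of plane-level annotations into all images, which is what keeps the per-image labelling cost low.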

Cite

Text

Roberts et al. "A Dataset for Lane Instance Segmentation in Urban Environments." Proceedings of the European Conference on Computer Vision (ECCV), 2018. doi:10.1007/978-3-030-01237-3_33

Markdown

[Roberts et al. "A Dataset for Lane Instance Segmentation in Urban Environments." Proceedings of the European Conference on Computer Vision (ECCV), 2018.](https://mlanthology.org/eccv/2018/roberts2018eccv-dataset/) doi:10.1007/978-3-030-01237-3_33

BibTeX

@inproceedings{roberts2018eccv-dataset,
  title     = {{A Dataset for Lane Instance Segmentation in Urban Environments}},
  author    = {Roberts, Brook and Kaltwang, Sebastian and Samangooei, Sina and Pender-Bare, Mark and Tertikas, Konstantinos and Redford, John},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2018},
  doi       = {10.1007/978-3-030-01237-3_33},
  url       = {https://mlanthology.org/eccv/2018/roberts2018eccv-dataset/}
}