Multi-Level Domain Adaptation for Lane Detection

Abstract

We focus on bridging the domain discrepancy in lane detection across different scenarios to greatly reduce the extra annotation and re-training costs for autonomous driving. A critical factor hindering performance improvement in cross-domain lane detection is that conventional methods focus only on pixel-wise loss while ignoring the shape and position priors of lanes. To address this issue, we propose the Multi-level Domain Adaptation (MLDA) framework, a new perspective for handling cross-domain lane detection at three complementary semantic levels: pixel, instance, and category. Specifically, at the pixel level, we apply cross-class confidence constraints in self-training to tackle the imbalanced confidence distributions of lane and background pixels. At the instance level, we go beyond pixels to treat segmented lanes as instances and encourage discriminative features in the target domain with triplet learning, which effectively rebuilds the semantic context of lanes and helps alleviate feature confusion. At the category level, we propose an adaptive inter-domain embedding module to exploit the position prior of lanes during adaptation. On two challenging datasets, i.e., TuSimple and CULane, our approach improves lane detection performance by a large margin, with gains of 8.8% in accuracy and 7.4% in F1-score respectively, compared with state-of-the-art domain adaptation algorithms.
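
The following is a minimal, hypothetical sketch (not the authors' released code) of two ideas mentioned in the abstract: class-specific confidence thresholds for pseudo-labels in self-training at the pixel level, and a triplet loss over per-lane embeddings at the instance level. Tensor shapes, threshold values, and function names are illustrative assumptions.

```python
# Hypothetical sketch of confidence-constrained self-training and
# instance-level triplet learning; shapes and thresholds are assumptions.
import torch
import torch.nn.functional as F

def confidence_constrained_pseudo_labels(logits, lane_thresh=0.6, bg_thresh=0.9,
                                         ignore_index=255):
    """Assign pseudo-labels with class-specific confidence thresholds.

    logits: (B, C, H, W) segmentation scores; class 0 is assumed background.
    Lane pixels are rare, so their confidences tend to be lower than the
    background's; separate thresholds per class re-balance which pixels
    are kept as pseudo-labels instead of using one global cutoff.
    """
    probs = logits.softmax(dim=1)
    conf, labels = probs.max(dim=1)                      # (B, H, W)
    thresh = torch.where(labels == 0,
                         torch.full_like(conf, bg_thresh),
                         torch.full_like(conf, lane_thresh))
    labels = labels.clone()
    labels[conf < thresh] = ignore_index                 # drop low-confidence pixels
    return labels

def lane_triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet loss over pooled per-lane embeddings.

    anchor/positive: embeddings of the same lane instance; negative: an
    embedding of a different lane or of background clutter. The loss pulls
    features of the same lane together and pushes confusable features apart.
    """
    return F.triplet_margin_loss(anchor, positive, negative, margin=margin)

# Example usage with random tensors standing in for network outputs.
logits = torch.randn(2, 5, 64, 128)                      # 1 background + 4 lane classes
pseudo = confidence_constrained_pseudo_labels(logits)
a, p, n = torch.randn(8, 32), torch.randn(8, 32), torch.randn(8, 32)
loss = lane_triplet_loss(a, p, n)
```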

Cite

Text

Li et al. "Multi-Level Domain Adaptation for Lane Detection." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022. doi:10.1109/CVPRW56347.2022.00484

Markdown

[Li et al. "Multi-Level Domain Adaptation for Lane Detection." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022.](https://mlanthology.org/cvprw/2022/li2022cvprw-multilevel/) doi:10.1109/CVPRW56347.2022.00484

BibTeX

@inproceedings{li2022cvprw-multilevel,
  title     = {{Multi-Level Domain Adaptation for Lane Detection}},
  author    = {Li, Chenguang and Zhang, Boheng and Shi, Jia and Cheng, Guangliang},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2022},
  pages     = {4379--4388},
  doi       = {10.1109/CVPRW56347.2022.00484},
  url       = {https://mlanthology.org/cvprw/2022/li2022cvprw-multilevel/}
}