No More Discrimination: Cross City Adaptation of Road Scene Segmenters

Abstract

Despite the recent success of deep-learning-based semantic segmentation, deploying a pre-trained road scene segmenter in a city whose images were not present in the training set does not achieve satisfactory performance due to dataset biases. Instead of collecting a large number of annotated images for each city of interest to train or refine the segmenter, we propose an unsupervised learning approach that adapts road scene segmenters across different cities. By utilizing Google Street View and its time-machine feature, we collect unannotated images of each road scene at different times, from which the associated static-object priors can be extracted. Through a joint global and class-specific domain adversarial learning framework, pre-trained segmenters are adapted to a new city without any user annotation or interaction. We show that our method improves semantic segmentation performance in multiple cities across continents, and that it performs favorably against state-of-the-art approaches that require annotated training data.
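The adversarial alignment the abstract refers to can be illustrated with a minimal sketch. The PyTorch snippet below shows generic domain adversarial feature alignment via a gradient reversal layer (Ganin and Lempitsky, 2015); the network shapes, module names, and the use of gradient reversal are illustrative assumptions, not the authors' implementation, and the paper's joint global/class-specific design and static-object priors are not reproduced here.

```python
# Minimal sketch of domain adversarial feature alignment via a gradient
# reversal layer. All architectures and sizes below are hypothetical
# placeholders, not the segmenter used in the paper.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) gradients on the
    backward pass, so the feature extractor is pushed toward domain-invariant
    features while the discriminator learns to tell domains apart."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


# Hypothetical components: a shared convolutional feature extractor and a
# per-pixel domain discriminator (1 domain logit per spatial location).
features = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
discriminator = nn.Conv2d(64, 1, 1)

bce = nn.BCEWithLogitsLoss()
src_img = torch.randn(2, 3, 64, 64)  # source-city batch (labeled in practice)
tgt_img = torch.randn(2, 3, 64, 64)  # target-city batch (unlabeled)

src_logit = discriminator(grad_reverse(features(src_img)))
tgt_logit = discriminator(grad_reverse(features(tgt_img)))

# Domain labels: 0 = source, 1 = target. Minimizing this loss trains the
# discriminator; the reversed gradients simultaneously push the shared
# features toward cross-domain alignment.
loss = bce(src_logit, torch.zeros_like(src_logit)) + \
       bce(tgt_logit, torch.ones_like(tgt_logit))
loss.backward()
```

In practice this domain loss would be combined with the supervised segmentation loss on source-city images, with the reversal strength `lambd` ramped up over training.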

Cite

Text

Chen et al. "No More Discrimination: Cross City Adaptation of Road Scene Segmenters." International Conference on Computer Vision, 2017. doi:10.1109/ICCV.2017.220

Markdown

[Chen et al. "No More Discrimination: Cross City Adaptation of Road Scene Segmenters." International Conference on Computer Vision, 2017.](https://mlanthology.org/iccv/2017/chen2017iccv-more/) doi:10.1109/ICCV.2017.220

BibTeX

@inproceedings{chen2017iccv-more,
  title     = {{No More Discrimination: Cross City Adaptation of Road Scene Segmenters}},
  author    = {Chen, Yi-Hsin and Chen, Wei-Yu and Chen, Yu-Ting and Tsai, Bo-Cheng and Wang, Yu-Chiang Frank and Sun, Min},
  booktitle = {International Conference on Computer Vision},
  year      = {2017},
  doi       = {10.1109/ICCV.2017.220},
  url       = {https://mlanthology.org/iccv/2017/chen2017iccv-more/}
}