VisDA: A Synthetic-to-Real Benchmark for Visual Domain Adaptation

Abstract

The success of machine learning methods on visual recognition tasks is highly dependent on access to large labeled datasets. However, real training images are expensive to collect and annotate for both computer vision and robotics applications. Synthetic images are easy to generate, but model performance often drops significantly on data from a new deployment domain, a problem known as dataset shift, or dataset bias. Changes in the visual domain can include lighting, camera pose, and background variation, as well as general changes in how the image data is collected. While this problem has been studied extensively in the domain adaptation literature, progress has been limited by the lack of large-scale challenge benchmarks.
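For context, a minimal sketch of the source-only baseline that synthetic-to-real benchmarks like this one measure: train a classifier on labeled synthetic images, then evaluate it on real images with no adaptation, so the accuracy gap quantifies the dataset shift. The PyTorch/torchvision stack, directory layout, and hyperparameters below are illustrative assumptions, not part of the paper.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical paths: one subdirectory per class, as torchvision's
# ImageFolder expects. "train" holds synthetic renderings (source domain),
# "validation" holds real photos (target domain).
synthetic = datasets.ImageFolder("visda/train", transform=tf)
real = datasets.ImageFolder("visda/validation", transform=tf)

model = models.resnet18(num_classes=len(synthetic.classes))
opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

# Source-only training: no adaptation step, so the drop measured below is
# the dataset-shift penalty that adaptation methods aim to close.
model.train()
for images, labels in DataLoader(synthetic, batch_size=32, shuffle=True):
    opt.zero_grad()
    loss_fn(model(images), labels).backward()
    opt.step()

# Evaluate on the real (target) domain.
model.eval()
correct = total = 0
with torch.no_grad():
    for images, labels in DataLoader(real, batch_size=32):
        correct += (model(images).argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
print(f"real-domain accuracy: {correct / total:.3f}")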

Cite

Text

Peng et al. "VisDA: A Synthetic-to-Real Benchmark for Visual Domain Adaptation." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2018. doi:10.1109/CVPRW.2018.00271

Markdown

[Peng et al. "VisDA: A Synthetic-to-Real Benchmark for Visual Domain Adaptation." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2018.](https://mlanthology.org/cvprw/2018/peng2018cvprw-visda/) doi:10.1109/CVPRW.2018.00271

BibTeX

@inproceedings{peng2018cvprw-visda,
  title     = {{VisDA: A Synthetic-to-Real Benchmark for Visual Domain Adaptation}},
  author    = {Peng, Xingchao and Usman, Ben and Kaushik, Neela and Wang, Dequan and Hoffman, Judy and Saenko, Kate},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2018},
  pages     = {2021--2026},
  doi       = {10.1109/CVPRW.2018.00271},
  url       = {https://mlanthology.org/cvprw/2018/peng2018cvprw-visda/}
}