Domain Generalization with Global Sample Mixup

Abstract

Deep models have demonstrated outstanding ability on various computer vision tasks but are also notorious for generalizing poorly when encountering unseen domains with different statistics. To alleviate this issue, in this technical report we present a new domain generalization method based on training-sample mixup. The main enabling factor of our superior performance is a global mixup strategy across the source domains, in which batched samples from multiple graphics devices (GPUs) are mixed up for better generalization. Since the domain gap in the NICO datasets mainly stems from intertwined background bias, the global mixup strategy decreases this gap to a great extent by producing abundant mixed backgrounds. In addition, we conducted extensive experiments on different backbones combined with various data augmentations to study the generalization performance of different model structures. Our final ensemble model achieved 74.07% accuracy on the test set and took 3rd place by image classification accuracy (Acc.) in the NICO Common Context Generalization Challenge 2022.
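The abstract's core idea, gathering the per-device batches into one global batch before applying mixup, can be sketched as follows. This is an assumed, minimal pure-Python illustration (the function name `global_mixup` and its signature are hypothetical, not the authors' code); it uses the standard mixup formulation, drawing a coefficient from a Beta distribution and blending each sample with a randomly permuted partner.

```python
import random

def global_mixup(batches, labels, alpha=0.2, rng=None):
    """Sketch of global sample mixup (hypothetical helper, not the paper's code).

    `batches` is a list of per-device batches (each a list of feature vectors),
    `labels` the matching per-device label lists. The batches are first
    concatenated so samples can be mixed ACROSS devices, then each sample i is
    blended with a random partner j:  x' = lam * x_i + (1 - lam) * x_j.
    Returns the mixed samples plus (labels_a, labels_b, lam) for the usual
    mixed loss  lam * CE(pred, y_a) + (1 - lam) * CE(pred, y_b).
    """
    rng = rng or random.Random()
    # Gather step: concatenate batches from all devices into one global batch.
    xs = [x for batch in batches for x in batch]
    ys = [y for batch in labels for y in batch]
    lam = rng.betavariate(alpha, alpha)  # mixing coefficient in [0, 1]
    perm = list(range(len(xs)))
    rng.shuffle(perm)                    # random cross-device partner per sample
    mixed = [
        [lam * a + (1 - lam) * b for a, b in zip(xs[i], xs[perm[i]])]
        for i in range(len(xs))
    ]
    return mixed, ys, [ys[j] for j in perm], lam
```

In a multi-GPU setup the gather step would use a collective such as an all-gather over devices rather than list concatenation; the point of going global is that backgrounds from different domains get blended, which is what the report credits for shrinking the background-induced domain gap.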

Cite

Text

Lu et al. "Domain Generalization with Global Sample Mixup." European Conference on Computer Vision Workshops, 2022. doi:10.1007/978-3-031-25075-0_35

Markdown

[Lu et al. "Domain Generalization with Global Sample Mixup." European Conference on Computer Vision Workshops, 2022.](https://mlanthology.org/eccvw/2022/lu2022eccvw-domain/) doi:10.1007/978-3-031-25075-0_35

BibTeX

@inproceedings{lu2022eccvw-domain,
  title     = {{Domain Generalization with Global Sample Mixup}},
  author    = {Lu, Yulei and Luo, Yawei and Pan, Antao and Mao, Yangjun and Xiao, Jun},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2022},
  pages     = {518--529},
  doi       = {10.1007/978-3-031-25075-0_35},
  url       = {https://mlanthology.org/eccvw/2022/lu2022eccvw-domain/}
}