Order-Preserving Consistency Regularization for Domain Adaptation and Generalization

Abstract

Deep learning models fail on cross-domain challenges when they are oversensitive to domain-specific attributes, e.g., lighting, background, camera angle, etc. To alleviate this problem, data augmentation coupled with consistency regularization is commonly adopted to make the model less sensitive to domain-specific attributes. Consistency regularization forces the model to output the same representation or prediction for two views of one image. These constraints, however, are either too strict or not order-preserving for the classification probabilities. In this work, we propose Order-preserving Consistency Regularization (OCR) for cross-domain tasks. The order-preserving property of the prediction makes the model robust to task-irrelevant transformations. As a result, the model becomes less sensitive to the domain-specific attributes. Comprehensive experiments show that our method achieves clear advantages on five different cross-domain tasks.
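To make the idea concrete, below is a minimal PyTorch sketch of an order-preserving consistency loss that penalizes rank inversions between the class probabilities of two augmented views of the same image. The margin formulation and the function name are illustrative assumptions, not the paper's exact OCR objective.

    # Minimal sketch of an order-preserving consistency loss, assuming a
    # pairwise margin formulation over class-probability rankings.
    import torch
    import torch.nn.functional as F


    def order_preserving_consistency(logits_a: torch.Tensor,
                                     logits_b: torch.Tensor,
                                     margin: float = 0.0) -> torch.Tensor:
        """Penalize rank inversions between class probabilities of two views.

        logits_a, logits_b: (batch, num_classes) predictions for two
        augmented views of the same images.
        """
        p_a = F.softmax(logits_a, dim=1)
        p_b = F.softmax(logits_b, dim=1)

        # Pairwise probability differences for every class pair (i, j).
        diff_a = p_a.unsqueeze(2) - p_a.unsqueeze(1)  # (batch, C, C)
        diff_b = p_b.unsqueeze(2) - p_b.unsqueeze(1)  # (batch, C, C)

        # A pair violates order preservation when the sign of the difference
        # flips between the two views; hinge on the signed difference.
        violation = F.relu(margin - diff_a * torch.sign(diff_b.detach()))

        # Count only pairs where view B actually ranks the two classes
        # differently (excludes the diagonal and ties).
        mask = (diff_b.detach().abs() > 0).float()
        return (violation * mask).sum() / mask.sum().clamp(min=1.0)


    # Usage: combine with the supervised loss on labeled (source) data, e.g.
    #   loss = F.cross_entropy(logits_a, labels) + lam * \
    #          order_preserving_consistency(logits_a, logits_b)
    # where lam is a hypothetical trade-off weight.

Unlike an exact-match consistency loss (e.g., an L2 or KL penalty between the two predictions), this sketch only constrains the ordering of the class probabilities, which reflects the looser, order-preserving constraint described in the abstract.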

Cite

Text

Jing et al. "Order-Preserving Consistency Regularization for Domain Adaptation and Generalization." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.01734

Markdown

[Jing et al. "Order-Preserving Consistency Regularization for Domain Adaptation and Generalization." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/jing2023iccv-orderpreserving/) doi:10.1109/ICCV51070.2023.01734

BibTeX

@inproceedings{jing2023iccv-orderpreserving,
  title     = {{Order-Preserving Consistency Regularization for Domain Adaptation and Generalization}},
  author    = {Jing, Mengmeng and Zhen, Xiantong and Li, Jingjing and Snoek, Cees G. M.},
  booktitle = {International Conference on Computer Vision},
  year      = {2023},
  pages     = {18916-18927},
  doi       = {10.1109/ICCV51070.2023.01734},
  url       = {https://mlanthology.org/iccv/2023/jing2023iccv-orderpreserving/}
}