Semantic Self-Adaptation: Enhancing Generalization with a Single Sample

Abstract

The lack of out-of-domain generalization is a critical weakness of deep networks for semantic segmentation. Previous studies relied on the assumption of a static model, i.e., once the training process is complete, model parameters remain fixed at test time. In this work, we challenge this premise with a self-adaptive approach for semantic segmentation that adjusts the inference process to each input sample. Self-adaptation operates on two levels. First, it fine-tunes the parameters of convolutional layers to the input image using consistency regularization. Second, in Batch Normalization layers, self-adaptation interpolates between the training and the reference distribution derived from a single test sample. Despite both techniques being well known in the literature, their combination sets new state-of-the-art accuracy on synthetic-to-real generalization benchmarks. Our empirical study suggests that self-adaptation may complement the established practice of model regularization at training time for improving deep network generalization to out-of-domain data. Our code and pre-trained models are available at https://github.com/visinf/self-adaptive.
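
To illustrate the batch-normalization half of the idea, below is a minimal PyTorch sketch of interpolating BN statistics toward a single test sample. It is not the authors' released implementation (see the repository above for that); the function name adapt_bn_to_sample, the blending weight alpha, and the hook bookkeeping are assumptions made for this example.

# Illustrative sketch, not the paper's code: for each BatchNorm2d layer,
# estimate per-channel statistics from one test image via forward hooks
# and blend them with the stored training-time running statistics.
import torch
import torch.nn as nn


@torch.no_grad()
def adapt_bn_to_sample(model: nn.Module, image: torch.Tensor, alpha: float = 0.1):
    """Set BN stats to (1 - alpha) * training stats + alpha * single-sample stats."""
    sample_stats = {}

    def make_hook(name):
        def hook(module, inputs, output):
            x = inputs[0]  # activations entering the BN layer, shape (N, C, H, W)
            sample_stats[name] = (
                x.mean(dim=(0, 2, 3)),                 # per-channel mean
                x.var(dim=(0, 2, 3), unbiased=False),  # per-channel variance
            )
        return hook

    handles = [
        m.register_forward_hook(make_hook(n))
        for n, m in model.named_modules()
        if isinstance(m, nn.BatchNorm2d)
    ]
    model.eval()
    model(image.unsqueeze(0))  # one forward pass to record the sample statistics
    for h in handles:
        h.remove()

    for name, module in model.named_modules():
        if isinstance(module, nn.BatchNorm2d) and name in sample_stats:
            mean_s, var_s = sample_stats[name]
            module.running_mean.mul_(1 - alpha).add_(alpha * mean_s)
            module.running_var.mul_(1 - alpha).add_(alpha * var_s)

Calling adapt_bn_to_sample(model, image) before the final prediction shifts each BN layer's normalization statistics partway toward the test image; the consistency-regularized fine-tuning of the convolutional layers described in the abstract would be applied in addition to this step.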

Cite

Text

Bahmani et al. "Semantic Self-Adaptation: Enhancing Generalization with a Single Sample." Transactions on Machine Learning Research, 2023.

Markdown

[Bahmani et al. "Semantic Self-Adaptation: Enhancing Generalization with a Single Sample." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/bahmani2023tmlr-semantic/)

BibTeX

@article{bahmani2023tmlr-semantic,
  title     = {{Semantic Self-Adaptation: Enhancing Generalization with a Single Sample}},
  author    = {Bahmani, Sherwin and Hahn, Oliver and Zamfir, Eduard and Araslanov, Nikita and Cremers, Daniel and Roth, Stefan},
  journal   = {Transactions on Machine Learning Research},
  year      = {2023},
  url       = {https://mlanthology.org/tmlr/2023/bahmani2023tmlr-semantic/}
}