Beyond Boundaries: A Novel Data-Augmentation Discourse for Open Domain Generalization

Abstract

The problem of Open Domain Generalization (ODG) is multifaceted, encompassing shifts in both domains and label sets across the source and target domains. Existing approaches face challenges such as style bias towards the training domains, insufficient feature-space disentanglement to highlight semantic features, and limited discriminativeness of the latent space. Additionally, they rely on confidence-based target outlier detection, which can lead to misclassifications when target open samples visually resemble the source domain data. In response to these challenges, we present a solution named \textsc{ODG-Net}. We aim to create a direct open-set classifier within a \textit{discriminative}, \textit{unbiased}, and \textit{disentangled} semantic embedding space. To enrich data density and diversity, we introduce a generative augmentation framework that produces \textit{style-interpolated} novel domains for closed-set images and novel pseudo-open images by \textit{interpolating the contents of paired training images}. Our augmentation strategy leverages \textit{disentangled style and content information} to synthesize images effectively. Furthermore, we tackle the issue of style bias by representing all images in relation to all source domain properties, which accentuates complementary visual features. Consequently, we train a multi-class semantic object classifier, incorporating both closed- and open-class classification capabilities, along with a style classifier to identify style primitives. The joint use of the style and semantic classifiers facilitates the disentanglement of the latent space, thereby enhancing the generalization performance of the semantic classifier. To ensure discriminativeness in both the closed and open spaces, we optimize the semantic feature space using novel metric losses.
The experimental results on six benchmark datasets convincingly demonstrate that \textsc{ODG-Net} surpasses the state-of-the-art by an impressive margin of $1-4\%$ in both open and closed-set DG scenarios.
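The pseudo-open augmentation described above can be pictured as mixing the contents of two closed-set images from different classes and assigning the result to a dedicated open class. The following is a minimal illustrative sketch of that idea using a simple convex combination; the mixing scheme, label index, and image shapes here are assumptions for illustration, not the paper's actual generative pipeline (which operates on disentangled style and content representations).

```python
import numpy as np

def interpolate_pair(x1, x2, alpha=0.5):
    """Convexly combine two images with a Beta-sampled coefficient.
    A toy stand-in for content interpolation between paired inputs."""
    lam = np.random.beta(alpha, alpha)  # mixing coefficient in [0, 1]
    return lam * x1 + (1.0 - lam) * x2, lam

# Toy closed-set images from two different classes (hypothetical shapes).
rng = np.random.default_rng(0)
img_a = rng.random((3, 32, 32))  # sample from class A
img_b = rng.random((3, 32, 32))  # sample from class B

# The interpolated image is treated as a pseudo-open sample and labeled
# with the extra open-set class index (assumed to be num_closed_classes).
pseudo_open, lam = interpolate_pair(img_a, img_b)
num_closed_classes = 10
open_label = num_closed_classes
```

Such pseudo-open samples let the open-set classifier be trained directly, rather than thresholding closed-class confidences at test time.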

Cite

Text

Bose et al. "Beyond Boundaries: A Novel Data-Augmentation Discourse for Open Domain Generalization." Transactions on Machine Learning Research, 2023.

Markdown

[Bose et al. "Beyond Boundaries: A Novel Data-Augmentation Discourse for Open Domain Generalization." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/bose2023tmlr-beyond/)

BibTeX

@article{bose2023tmlr-beyond,
  title     = {{Beyond Boundaries: A Novel Data-Augmentation Discourse for Open Domain Generalization}},
  author    = {Bose, Shirsha and Jha, Ankit and Kandala, Hitesh and Banerjee, Biplab},
  journal   = {Transactions on Machine Learning Research},
  year      = {2023},
  url       = {https://mlanthology.org/tmlr/2023/bose2023tmlr-beyond/}
}