Disentangled Embedding Through Style and Mutual Information for Domain Generalization

Abstract

Deep neural networks often experience performance degradation when faced with distributional shifts between training and testing data, a challenge referred to as domain shift. Domain Generalization (DG) addresses this issue by training models on multiple source domains, enabling the development of invariant representations that generalize to unseen distributions. While existing DG methods have achieved success by minimizing variations across source domains within a shared feature space, recent advances inspired by representation disentanglement have demonstrated improved performance by separating latent features into domain-specific and domain-invariant components. We propose two novel frameworks: Disentangled Embedding through Mutual Information (DETMI) and Disentangled Embedding through Style Information (DETSI). DETMI enforces disentanglement by employing a mutual information estimator, minimizing the mutual dependence between domain-agnostic and domain-specific embeddings. DETSI, on the other hand, achieves disentanglement through style extraction and perturbation, facilitating the learning of domain-invariant and domain-specific representations. Extensive experiments on the PACS, Office-Home, and VLCS datasets show that both frameworks outperform several state-of-the-art DG techniques.
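The style-perturbation idea behind DETSI can be illustrated with a small sketch: treating the per-channel mean and standard deviation of a feature map as its "style," and mixing those statistics across the batch to simulate new domains while keeping the normalized content fixed. This is a generic feature-statistics perturbation in the spirit of methods like MixStyle, not the paper's exact DETSI module; the function name and mixing scheme here are illustrative assumptions.

```python
import numpy as np

def style_perturb(feats, alpha=0.5, rng=None):
    """Illustrative style perturbation on a batch of feature maps.

    feats: array of shape (B, C, H, W).
    Per-channel mean/std are treated as "style"; they are mixed with the
    statistics of a randomly shuffled batch, leaving the normalized
    (domain-invariant) content unchanged.  This is a sketch, not the
    paper's DETSI implementation.
    """
    rng = np.random.default_rng(rng)
    b = feats.shape[0]
    mu = feats.mean(axis=(2, 3), keepdims=True)           # (B, C, 1, 1) style mean
    sig = feats.std(axis=(2, 3), keepdims=True) + 1e-6    # (B, C, 1, 1) style std
    content = (feats - mu) / sig                          # strip style -> content
    perm = rng.permutation(b)                             # borrow styles from peers
    lam = rng.beta(alpha, alpha, size=(b, 1, 1, 1))       # per-sample mixing weight
    mu_mix = lam * mu + (1 - lam) * mu[perm]
    sig_mix = lam * sig + (1 - lam) * sig[perm]
    return content * sig_mix + mu_mix                     # re-style the content

x = np.random.default_rng(0).normal(size=(4, 3, 8, 8))
y = style_perturb(x, rng=1)                               # same shape, new "styles"
```

Because only the channel statistics change, re-normalizing the perturbed features recovers the original content, which is what lets a network trained on such perturbations learn representations invariant to style (domain-specific) variation.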

Cite

Text

Mehmood and Barner. "Disentangled Embedding Through Style and Mutual Information for Domain Generalization." Transactions on Machine Learning Research, 2025.

Markdown

[Mehmood and Barner. "Disentangled Embedding Through Style and Mutual Information for Domain Generalization." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/mehmood2025tmlr-disentangled/)

BibTeX

@article{mehmood2025tmlr-disentangled,
  title     = {{Disentangled Embedding Through Style and Mutual Information for Domain Generalization}},
  author    = {Mehmood, Noaman and Barner, Kenneth},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/mehmood2025tmlr-disentangled/}
}