Domain Generalization via Invariant Feature Representation
Abstract
This paper investigates domain generalization: How to take knowledge acquired from an arbitrary number of related domains and apply it to previously unseen domains? We propose Domain-Invariant Component Analysis (DICA), a kernel-based optimization algorithm that learns an invariant transformation by minimizing the dissimilarity across domains, whilst preserving the functional relationship between input and output variables. A learning-theoretic analysis shows that reducing dissimilarity improves the expected generalization ability of classifiers on new domains, motivating the proposed algorithm. Experimental results on synthetic and real-world datasets demonstrate that DICA successfully learns invariant features and improves classifier performance in practice.
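The cross-domain dissimilarity that DICA minimizes can be illustrated with kernel mean embeddings: each domain's sample distribution is mapped to a point in an RKHS, and the variance of those points measures how much the domains differ. Below is a minimal sketch of that quantity, assuming a Gaussian RBF kernel; the function names and bandwidth are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian RBF kernel matrix between the rows of X and Y.
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def distributional_variance(domains, gamma=1.0):
    """Variance of the kernel mean embeddings mu_1, ..., mu_N:
    (1/N) * sum_i ||mu_i - mu_bar||^2 in the RKHS, computed from
    Gram-matrix block means, since <mu_i, mu_j> is the mean of the
    kernel block between domain i and domain j."""
    N = len(domains)
    G = np.array([[rbf_kernel(Xi, Xj, gamma).mean()
                   for Xj in domains] for Xi in domains])
    # tr(G)/N gives the mean squared norm; G.mean() is ||mu_bar||^2.
    return G.trace() / N - G.mean()

rng = np.random.default_rng(0)
# Three domains drawn from the same distribution vs. three with shifted means.
similar = [rng.normal(0.0, 1.0, (50, 2)) for _ in range(3)]
shifted = [rng.normal(m, 1.0, (50, 2)) for m in (0.0, 1.0, 2.0)]
print(distributional_variance(similar))  # small: embeddings nearly coincide
print(distributional_variance(shifted))  # larger: domains are dissimilar
```

DICA goes further than this diagnostic: it searches for a transformation of the features under which this variance is small while the input-output relationship is preserved, but the quantity above is the core measure of domain dissimilarity being reduced.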
Cite
Text
Muandet et al. "Domain Generalization via Invariant Feature Representation." International Conference on Machine Learning, 2013.

Markdown
[Muandet et al. "Domain Generalization via Invariant Feature Representation." International Conference on Machine Learning, 2013.](https://mlanthology.org/icml/2013/muandet2013icml-domain/)

BibTeX
@inproceedings{muandet2013icml-domain,
title = {{Domain Generalization via Invariant Feature Representation}},
author = {Muandet, Krikamol and Balduzzi, David and Schölkopf, Bernhard},
booktitle = {International Conference on Machine Learning},
year = {2013},
pages = {10-18},
volume = {28},
url = {https://mlanthology.org/icml/2013/muandet2013icml-domain/}
}