Robust Domain Adaptation on the L1-Grassmannian Manifold

Abstract

Domain adaptation aims to remedy the loss in classification performance that often occurs due to domain shift between training and testing datasets, a problem known as dataset bias and attributed to variations across datasets. Domain adaptation methods on Grassmann manifolds are among the most popular, including Geodesic Subspace Sampling and the Geodesic Flow Kernel. Grassmann learning facilitates compact characterization by generating linear subspaces and representing them as points on the manifold. However, Grassmannian construction is typically based on PCA, which is sensitive to outliers. This motivates us to find linear projections that are robust to noise, outliers, and dataset idiosyncrasies. Hence, we combine L1-PCA and Grassmann manifolds to perform robust domain adaptation. We present empirical results that validate the improvements and robustness of the approach for domain adaptation in object class recognition across datasets.
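The robust-projection idea the abstract describes can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it sketches one standard way to compute a single L1-PCA direction (the sign-flipping fixed-point iteration of Kwak-style greedy L1-PCA), which maximizes the sum of absolute projections instead of the squared projections used by ordinary PCA. An orthonormal basis of such directions would then serve as a point on the Grassmann manifold. The function name and parameters are illustrative assumptions.

```python
import numpy as np

def l1_pca_component(X, n_iter=100, seed=0):
    """One L1-PCA direction via a sign-flipping fixed-point iteration.

    Maximizes sum_i |w^T x_i| (L1 dispersion) rather than the L2
    variance used by ordinary PCA, which reduces outlier sensitivity.
    X: (d, n) matrix of centered samples stored as columns.
    Note: this is an illustrative sketch, not the paper's code.
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        s = np.sign(w @ X)        # polarity of each sample's projection
        s[s == 0] = 1.0           # break ties to avoid a zero update
        w_new = X @ s             # re-weighted sum of samples
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):
            break                 # fixed point reached
        w = w_new
    return w
```

Further components can be obtained greedily by deflating the data (subtracting each sample's projection onto the directions found so far) and repeating; stacking the resulting orthonormal directions yields the robust subspace used as a Grassmann point.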

Cite

Text

Kumar and Savakis. "Robust Domain Adaptation on the L1-Grassmannian Manifold." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2016. doi:10.1109/CVPRW.2016.136

Markdown

[Kumar and Savakis. "Robust Domain Adaptation on the L1-Grassmannian Manifold." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2016.](https://mlanthology.org/cvprw/2016/kumar2016cvprw-robust/) doi:10.1109/CVPRW.2016.136

BibTeX

@inproceedings{kumar2016cvprw-robust,
  title     = {{Robust Domain Adaptation on the L1-Grassmannian Manifold}},
  author    = {Kumar, Sriram and Savakis, Andreas E.},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2016},
  pages     = {1058--1065},
  doi       = {10.1109/CVPRW.2016.136},
  url       = {https://mlanthology.org/cvprw/2016/kumar2016cvprw-robust/}
}