Generalizing Gaze Estimation with Outlier-Guided Collaborative Adaptation
Abstract
Deep neural networks have significantly improved the accuracy of appearance-based gaze estimation. However, trained models still perform poorly when generalized to new domains, e.g., unseen environments or persons. In this paper, we propose a plug-and-play gaze adaptation framework (PnP-GA), an ensemble of networks that learn collaboratively under the guidance of outliers. Since the proposed framework requires no ground-truth labels in the target domain, existing gaze estimation networks can be directly plugged into PnP-GA to generalize to new domains. We evaluate PnP-GA on four gaze domain adaptation tasks: ETH-to-MPII, ETH-to-EyeDiap, Gaze360-to-MPII, and Gaze360-to-EyeDiap. The experimental results show that PnP-GA achieves considerable performance improvements of 36.9%, 31.6%, 19.4%, and 11.8% over the baseline system on these tasks, respectively, and that it outperforms state-of-the-art domain adaptation approaches on gaze domain adaptation tasks.
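The abstract only outlines the mechanism, so the sketch below makes the core idea concrete: an ensemble of gaze networks adapts on unlabeled target-domain images, and ensemble members whose predictions deviate strongly from the consensus ("outliers") steer the adaptation loss. Everything here (`TinyGazeNet`, `adaptation_step`, the exponential down-weighting of outliers) is a hypothetical illustration of the abstract's description, not the authors' implementation or released code.

```python
# Minimal sketch of outlier-guided collaborative adaptation.
# Hypothetical interpretation of the abstract, NOT the PnP-GA source code.
import torch
import torch.nn as nn

class TinyGazeNet(nn.Module):
    """Stand-in gaze regressor: face image -> (pitch, yaw). Hypothetical."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, 2)

    def forward(self, x):
        return self.head(self.backbone(x))

def adaptation_step(models, optimizers, target_images, k_sigma=1.0):
    """One unsupervised step on unlabeled target-domain images.

    Each ensemble member predicts gaze; the ensemble mean serves as a
    pseudo-label, and members that deviate strongly from it (outliers)
    are softly down-weighted. This weighting scheme is an assumption
    for illustration, not the paper's exact loss.
    """
    preds = torch.stack([m(target_images) for m in models])  # (M, B, 2)
    consensus = preds.mean(dim=0).detach()                   # pseudo-label
    dist = (preds - consensus).norm(dim=-1)                  # (M, B)
    # Soft outlier weights: larger deviation -> smaller weight.
    weights = torch.exp(-dist / (k_sigma * dist.mean() + 1e-8)).detach()
    loss = (weights * (preds - consensus).pow(2).sum(-1)).mean()
    for opt in optimizers:
        opt.zero_grad()
    loss.backward()
    for opt in optimizers:
        opt.step()
    return loss.item()

# Usage: plug pretrained gaze networks into the ensemble, then adapt on
# unlabeled target-domain batches (random tensors stand in for images).
models = [TinyGazeNet() for _ in range(3)]
optimizers = [torch.optim.Adam(m.parameters(), lr=1e-4) for m in models]
batch = torch.randn(8, 3, 64, 64)
print(adaptation_step(models, optimizers, batch))
```

In this reading, the "plug-and-play" property comes from `adaptation_step` touching the models only through their forward passes and optimizers, so any pretrained gaze estimator could be dropped into the ensemble.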
Cite
Text
Liu et al. "Generalizing Gaze Estimation with Outlier-Guided Collaborative Adaptation." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.00381
Markdown
[Liu et al. "Generalizing Gaze Estimation with Outlier-Guided Collaborative Adaptation." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/liu2021iccv-generalizing/) doi:10.1109/ICCV48922.2021.00381
BibTeX
@inproceedings{liu2021iccv-generalizing,
title = {{Generalizing Gaze Estimation with Outlier-Guided Collaborative Adaptation}},
author = {Liu, Yunfei and Liu, Ruicong and Wang, Haofei and Lu, Feng},
booktitle = {International Conference on Computer Vision},
year = {2021},
pages = {3835--3844},
doi = {10.1109/ICCV48922.2021.00381},
url = {https://mlanthology.org/iccv/2021/liu2021iccv-generalizing/}
}