Learning to Personalize in Appearance-Based Gaze Tracking

Abstract

Personal variations severely limit the performance of appearance-based gaze tracking. Adapting to these variations using standard neural network model-adaptation methods is difficult. The problems range from overfitting, due to small amounts of training data, to underfitting, due to restrictive model architectures. We tackle these problems by introducing the SPatial Adaptive GaZe Estimator (SPAZE). By modeling personal variations as a low-dimensional latent parameter space, SPAZE provides just enough adaptability to capture the range of personal variations without being prone to overfitting. Calibrating SPAZE for a new person reduces to solving a small and simple optimization problem. SPAZE achieves an error of 2.70 degrees on the MPIIGaze dataset, improving on the state-of-the-art by 14%. We contribute to gaze tracking research by empirically showing that personal variations are well-modeled as a 3-dimensional latent parameter space for each eye. We show that this low dimensionality is expected by examining model-based approaches to gaze tracking.
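The calibration idea in the abstract can be illustrated with a small sketch: all network weights stay frozen, and adapting to a new person means fitting only a low-dimensional latent vector (3 dimensions per eye in the paper) on a handful of calibration samples. The model below is a toy linear stand-in for the gaze network, so the fit reduces to linear least squares; the paper solves a small nonlinear problem of the same size. All names, shapes, and the interaction form are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM, LATENT_DIM = 8, 3            # appearance features, per-eye latent size
W = rng.normal(size=(2, FEAT_DIM))     # frozen weights mapping features to (pitch, yaw)
V = rng.normal(size=(2, LATENT_DIM))   # frozen weights coupling the personal latent to gaze


def gaze(x, z):
    """Toy stand-in for the frozen gaze network.

    Appearance term W @ x plus a personal term in which the latent z
    interacts with the appearance features (illustrative assumption).
    """
    return W @ x + V @ (z * x[:LATENT_DIM])


# A few calibration samples from a "new person" with unknown latent z_true.
z_true = rng.normal(size=LATENT_DIM)
xs = rng.normal(size=(9, FEAT_DIM))    # e.g. 9 calibration points
ys = np.stack([gaze(x, z_true) for x in xs])

# Calibration: minimize squared gaze error over z only; weights stay frozen.
# Subtract the appearance term, then solve the stacked linear system for z.
residual = ys - xs @ W.T                              # rows equal (V * x[:3]) @ z
A = np.vstack([V * x[:LATENT_DIM] for x in xs])       # per-sample design matrices
b = residual.reshape(-1)
z_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.round(z_hat - z_true, 6))                    # recovered latent matches z_true
```

Because the latent space is so small, nine calibration samples already overdetermine the fit, which is the paper's point about avoiding overfitting during personalization.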

Cite

Text

Lindén et al. "Learning to Personalize in Appearance-Based Gaze Tracking." IEEE/CVF International Conference on Computer Vision Workshops, 2019. doi:10.1109/ICCVW.2019.00145

Markdown

[Lindén et al. "Learning to Personalize in Appearance-Based Gaze Tracking." IEEE/CVF International Conference on Computer Vision Workshops, 2019.](https://mlanthology.org/iccvw/2019/linden2019iccvw-learning/) doi:10.1109/ICCVW.2019.00145

BibTeX

@inproceedings{linden2019iccvw-learning,
  title     = {{Learning to Personalize in Appearance-Based Gaze Tracking}},
  author    = {Lindén, Erik and Sjöstrand, Jonas and Proutière, Alexandre},
  booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
  year      = {2019},
  pages     = {1140--1148},
  doi       = {10.1109/ICCVW.2019.00145},
  url       = {https://mlanthology.org/iccvw/2019/linden2019iccvw-learning/}
}