Multi-View Gaze Target Estimation

Abstract

This paper presents a method that utilizes multiple camera views for the gaze target estimation (GTE) task. The approach integrates information from different camera views to improve accuracy and expand applicability, addressing limitations of existing single-view methods that face challenges such as face occlusion, target ambiguity, and out-of-view targets. Our method takes a pair of camera views as input and incorporates a Head Information Aggregation (HIA) module that leverages head information from both views for more accurate gaze estimation, an Uncertainty-based Gaze Selection (UGS) module that identifies the most reliable gaze output, and an Epipolar-based Scene Attention (ESA) module that shares background information across views. This approach significantly outperforms single-view baselines, especially when the second camera provides a clear view of the person's face. In addition, our method can estimate the gaze target in the first view using only the person's image from the second view, a capability that single-view GTE methods lack. Finally, the paper introduces a multi-view dataset for developing and evaluating multi-view GTE methods. Data and code are available.
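The abstract does not describe the modules' internals, so the sketch below is only a minimal illustration, not the authors' implementation, of two of the named ideas under assumed interfaces: Uncertainty-based Gaze Selection modeled as picking the prediction with the lower predicted uncertainty, and Epipolar-based Scene Attention modeled as restricting cross-view attention to scene positions near the epipolar line induced by a fundamental matrix. The function names, the `band_px` parameter, and the toy fundamental matrix are hypothetical.

```python
# Minimal sketch (not the authors' code) of two ideas named in the abstract,
# under assumed interfaces.
import numpy as np

def select_gaze_by_uncertainty(gaze_a, sigma_a, gaze_b, sigma_b):
    """Uncertainty-based selection (assumed form): keep the gaze prediction
    whose predicted uncertainty (sigma) is smaller."""
    return gaze_a if sigma_a <= sigma_b else gaze_b

def epipolar_attention_mask(F, head_xy, grid_hw, band_px=8.0):
    """Epipolar-based attention (assumed form): binary mask over an (H, W)
    grid of view-2 positions that lie within `band_px` pixels of the
    epipolar line l = F @ x of a view-1 point (e.g., the head location)."""
    H, W = grid_hw
    x1 = np.array([head_xy[0], head_xy[1], 1.0])   # homogeneous point in view 1
    l = F @ x1                                     # epipolar line (a, b, c) in view 2
    ys, xs = np.mgrid[0:H, 0:W]
    dist = np.abs(l[0] * xs + l[1] * ys + l[2]) / np.hypot(l[0], l[1])
    return dist < band_px                          # attend only near the line

# Toy usage with a hypothetical fundamental matrix and head position.
F = np.array([[0.0, -1e-4, 0.02],
              [1e-4,  0.0, -0.03],
              [-0.02, 0.03, 1.0]])
mask = epipolar_attention_mask(F, head_xy=(120.0, 80.0), grid_hw=(224, 224))
print(mask.sum(), "of", mask.size, "scene positions kept for cross-view attention")
```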

Cite

Text

Miao et al. "Multi-View Gaze Target Estimation." International Conference on Computer Vision, 2025.

Markdown

[Miao et al. "Multi-View Gaze Target Estimation." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/miao2025iccv-multiview/)

BibTeX

@inproceedings{miao2025iccv-multiview,
  title     = {{Multi-View Gaze Target Estimation}},
  author    = {Miao, Qiaomu and Golani, Vivek Raju and Xu, Jingyi and Dutta, Progga Paromita and Hoai, Minh and Samaras, Dimitris},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {5371--5381},
  url       = {https://mlanthology.org/iccv/2025/miao2025iccv-multiview/}
}