Ego-Exo: Transferring Visual Representations from Third-Person to First-Person Videos

Abstract

We introduce an approach for pre-training egocentric video models using large-scale third-person video datasets. Learning from purely egocentric data is limited by low dataset scale and diversity, while using purely exocentric (third-person) data introduces a large domain mismatch. Our idea is to discover latent signals in third-person video that are predictive of key egocentric-specific properties. Incorporating these signals as knowledge distillation losses during pre-training results in models that benefit both from the scale and diversity of third-person video data and from representations that capture salient egocentric properties. Our experiments show that the Ego-Exo framework can be seamlessly integrated into standard video models; it outperforms all baselines when fine-tuned for egocentric activity recognition, achieving state-of-the-art results on Charades-Ego and EPIC-Kitchens-100.
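
The abstract only sketches the pre-training objective, so the following is a minimal PyTorch-style sketch of how auxiliary distillation losses might be combined with a standard third-person classification loss during pre-training. All names here (EgoExoPretrainModel, aux_heads, the specific auxiliary targets, pretrain_loss) are illustrative assumptions, not the authors' released code.

    # Hypothetical sketch of Ego-Exo-style pre-training: a standard video backbone
    # is trained on third-person clips with (1) the usual action-classification loss
    # and (2) auxiliary distillation losses toward egocentric-specific pseudo-labels
    # (e.g. soft scores produced offline by pre-trained "ego-property" teacher models).
    # Module and argument names are assumptions for illustration only.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class EgoExoPretrainModel(nn.Module):
        def __init__(self, backbone: nn.Module, feat_dim: int,
                     num_actions: int, aux_dims: dict):
            super().__init__()
            self.backbone = backbone                  # e.g. a 3D-CNN video encoder
            self.action_head = nn.Linear(feat_dim, num_actions)
            # one small head per egocentric-specific signal to distill
            self.aux_heads = nn.ModuleDict(
                {name: nn.Linear(feat_dim, d) for name, d in aux_dims.items()}
            )

        def forward(self, clips: torch.Tensor):
            feats = self.backbone(clips)              # (B, feat_dim) pooled clip features
            action_logits = self.action_head(feats)
            aux_preds = {name: head(feats) for name, head in self.aux_heads.items()}
            return action_logits, aux_preds


    def pretrain_loss(action_logits, aux_preds, action_labels, teacher_targets,
                      aux_weight: float = 1.0):
        """Third-person classification loss plus distillation losses that match
        each auxiliary head to its teacher-provided soft target distribution."""
        loss = F.cross_entropy(action_logits, action_labels)
        for name, pred in aux_preds.items():
            target = teacher_targets[name]            # soft target from a teacher model
            loss = loss + aux_weight * F.kl_div(
                F.log_softmax(pred, dim=-1), target, reduction="batchmean"
            )
        return loss

After pre-training on the third-person dataset, such auxiliary heads would be discarded and the backbone fine-tuned on the egocentric target task (e.g. Charades-Ego or EPIC-Kitchens-100), matching the transfer setup described in the abstract.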

Cite

Text

Li et al. "Ego-Exo: Transferring Visual Representations from Third-Person to First-Person Videos." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.00687

Markdown

[Li et al. "Ego-Exo: Transferring Visual Representations from Third-Person to First-Person Videos." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/li2021cvpr-egoexo/) doi:10.1109/CVPR46437.2021.00687

BibTeX

@inproceedings{li2021cvpr-egoexo,
  title     = {{Ego-Exo: Transferring Visual Representations from Third-Person to First-Person Videos}},
  author    = {Li, Yanghao and Nagarajan, Tushar and Xiong, Bo and Grauman, Kristen},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2021},
  pages     = {6943--6953},
  doi       = {10.1109/CVPR46437.2021.00687},
  url       = {https://mlanthology.org/cvpr/2021/li2021cvpr-egoexo/}
}