Generative Adversarial Network for Future Hand Segmentation from Egocentric Video
Abstract
We introduce the novel problem of anticipating a time series of future hand masks from egocentric video. A key challenge is to model the stochasticity of future head motions, which globally affects the analysis of head-worn camera video. To this end, we propose a novel deep generative model, EgoGAN, which uses a 3D Fully Convolutional Network to learn a spatio-temporal video representation for pixel-wise visual anticipation, generates future head motion using a Generative Adversarial Network (GAN), and then predicts the future hand masks based on the video representation and the generated future head motion. We evaluate our method on both the EGTEA Gaze+ and the EPIC-Kitchens datasets. We conduct detailed ablation studies to validate the design choices of our approach. Furthermore, we compare our method with previous state-of-the-art methods on future image segmentation and show that our method can more accurately predict future hand masks.
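The abstract describes a three-stage pipeline: encode the observed clip with a 3D fully convolutional network, sample stochastic future head motion with a GAN generator, and decode future hand masks from both. Below is a minimal PyTorch sketch of that data flow; all module names (VideoEncoder, MotionGenerator, MaskDecoder), channel sizes, and the 2-channel motion representation are illustrative assumptions rather than the paper's implementation, and the adversarial discriminator and training loop are omitted.

```python
# Minimal sketch of the EgoGAN-style pipeline from the abstract.
# Module names, shapes, and channel counts are assumptions for illustration,
# not the authors' architecture.
import torch
import torch.nn as nn

class VideoEncoder(nn.Module):
    """3D fully convolutional encoder for a spatio-temporal video representation."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, feat_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(feat_ch, feat_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, clip):           # clip: (B, 3, T, H, W)
        return self.net(clip)          # features: (B, feat_ch, T, H, W)

class MotionGenerator(nn.Module):
    """GAN generator: concatenates noise with video features and produces a
    stochastic future head-motion field (2 channels, e.g. x/y displacement)."""
    def __init__(self, feat_ch=64, noise_ch=16):
        super().__init__()
        self.noise_ch = noise_ch
        self.net = nn.Conv3d(feat_ch + noise_ch, 2, kernel_size=3, padding=1)

    def forward(self, feat):
        z = torch.randn(feat.size(0), self.noise_ch, *feat.shape[2:],
                        device=feat.device)          # random noise per location
        return self.net(torch.cat([feat, z], dim=1))

class MaskDecoder(nn.Module):
    """Predicts future hand masks from video features and generated motion."""
    def __init__(self, feat_ch=64):
        super().__init__()
        self.net = nn.Conv3d(feat_ch + 2, 1, kernel_size=3, padding=1)

    def forward(self, feat, motion):
        return torch.sigmoid(self.net(torch.cat([feat, motion], dim=1)))

# Forward pass over a toy clip of 8 observed egocentric frames.
encoder, generator, decoder = VideoEncoder(), MotionGenerator(), MaskDecoder()
clip = torch.randn(1, 3, 8, 64, 64)
feat = encoder(clip)
motion = generator(feat)       # stochastic future head motion
masks = decoder(feat, motion)  # (1, 1, 8, 64, 64) future hand-mask probabilities
```

In the paper's setup the generator is trained adversarially against a discriminator on head motion, so repeated calls with fresh noise yield different plausible futures; the decoder then conditions on each sample to produce the corresponding hand masks.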
Cite
Text
Jia et al. "Generative Adversarial Network for Future Hand Segmentation from Egocentric Video." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-19778-9_37
Markdown
[Jia et al. "Generative Adversarial Network for Future Hand Segmentation from Egocentric Video." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/jia2022eccv-generative/) doi:10.1007/978-3-031-19778-9_37
BibTeX
@inproceedings{jia2022eccv-generative,
title = {{Generative Adversarial Network for Future Hand Segmentation from Egocentric Video}},
author = {Jia, Wenqi and Liu, Miao and Rehg, James M.},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2022},
doi = {10.1007/978-3-031-19778-9_37},
url = {https://mlanthology.org/eccv/2022/jia2022eccv-generative/}
}