STEPs: Self-Supervised Key Step Extraction and Localization from Unlabeled Procedural Videos

Abstract

We address the problem of extracting key steps from unlabeled procedural videos, motivated by the potential of Augmented Reality (AR) headsets to revolutionize job training and performance. We decompose the problem into two steps: representation learning and key step extraction. We propose a training objective, the Bootstrapped Multi-Cue Contrastive (BMC2) loss, to learn discriminative representations for various steps without any labels. Unlike prior work, we develop techniques to train a lightweight temporal module that uses off-the-shelf features for self-supervision. Our approach can seamlessly leverage information from multiple cues such as optical flow, depth, or gaze to learn discriminative features for key steps, making it amenable to AR applications. We finally extract key steps via a tunable algorithm that clusters the representations and samples key steps from the clusters. We show significant improvements over prior works for the tasks of key step localization and phase classification. Qualitative results demonstrate that the extracted key steps are meaningful and succinctly represent the various steps of the procedural tasks.
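The final extraction stage clusters the learned per-frame representations and samples representative frames as key steps. The paper's exact tunable algorithm is not reproduced here; the sketch below illustrates the generic idea with a minimal pure-Python k-means (deterministic farthest-point initialization) over hypothetical per-frame feature vectors, returning, in temporal order, the frame closest to each cluster centroid.

```python
def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def init_centroids(points, k):
    """Deterministic farthest-point initialization: start from the first
    frame, then repeatedly add the point farthest from all chosen centroids."""
    centroids = [points[0]]
    while len(centroids) < k:
        nxt = max(points, key=lambda p: min(dist2(p, c) for c in centroids))
        centroids.append(nxt)
    return centroids

def kmeans(points, k, iters=20):
    """Minimal k-means: alternate assignment and centroid update."""
    centroids = init_centroids(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: dist2(p, centroids[c]))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # keep old centroid if a cluster empties out
                centroids[j] = [sum(xs) / len(cl) for xs in zip(*cl)]
    return centroids

def extract_key_steps(features, k):
    """Cluster per-frame features and return, in temporal order, the index
    of the frame nearest to each cluster centroid (one key step per cluster)."""
    centroids = kmeans(features, k)
    key_frames = set()
    for c in centroids:
        i = min(range(len(features)), key=lambda i: dist2(features[i], c))
        key_frames.add(i)
    return sorted(key_frames)

# Toy example: 9 frames whose (hypothetical) features form 3 clear clusters.
features = [
    [0.0, 0.0], [0.1, 0.0], [0.0, 0.1],      # frames of step A
    [5.0, 5.0], [5.1, 5.0], [5.0, 5.1],      # frames of step B
    [10.0, 0.0], [10.1, 0.0], [10.0, 0.1],   # frames of step C
]
print(extract_key_steps(features, 3))  # one representative frame per step
```

In the paper the inputs would be the BMC2-learned temporal features rather than raw coordinates, and the number and selection of clusters is tunable; the nearest-to-centroid rule here stands in for the sampling step.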

Cite

Text

Shah et al. "STEPs: Self-Supervised Key Step Extraction and Localization from Unlabeled Procedural Videos." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.00952

Markdown

[Shah et al. "STEPs: Self-Supervised Key Step Extraction and Localization from Unlabeled Procedural Videos." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/shah2023iccv-steps/) doi:10.1109/ICCV51070.2023.00952

BibTeX

@inproceedings{shah2023iccv-steps,
  title     = {{STEPs: Self-Supervised Key Step Extraction and Localization from Unlabeled Procedural Videos}},
  author    = {Shah, Anshul and Lundell, Benjamin and Sawhney, Harpreet and Chellappa, Rama},
  booktitle = {International Conference on Computer Vision},
  year      = {2023},
  pages     = {10375--10387},
  doi       = {10.1109/ICCV51070.2023.00952},
  url       = {https://mlanthology.org/iccv/2023/shah2023iccv-steps/}
}