Procedure Planning in Instructional Videos

Abstract

In this paper, we study the problem of procedure planning in instructional videos, which can be seen as the first step towards enabling autonomous agents to plan for complex tasks in everyday settings such as cooking. Given the current visual observation of the world and a visual goal, we ask the question "What actions need to be taken in order to achieve the goal?" The key technical challenge is how to learn structured and plannable state and action spaces directly from unstructured real videos. We address this challenge by proposing Dual Dynamics Networks (DDN), a framework that explicitly leverages the structured priors imposed by the conjugate relationships between states and actions in a learned plannable latent space. We evaluate our method on real-world instructional videos. Our experiments show that DDN learns plannable representations that lead to better planning performance compared to existing planning approaches and neural network policies.
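The planning setup the abstract describes — alternating between predicting an action from the current latent state and the goal, and predicting the next latent state from the current state and that action — can be sketched as follows. This is a minimal toy illustration, not the paper's actual DDN architecture: `propose_action` and `forward_dynamics` here are hypothetical linear stand-ins for the two learned models, and the states are plain vectors rather than learned video embeddings.

```python
import numpy as np

def propose_action(state, goal):
    # Toy stand-in for the conjugate (action) model:
    # propose an action pointing from the current state toward the goal.
    return goal - state

def forward_dynamics(state, action):
    # Toy stand-in for the forward-dynamics model:
    # take a half-step along the proposed action.
    return state + 0.5 * action

def plan(start, goal, horizon=4):
    """Autoregressively roll out (action, next-state) pairs toward the goal,
    alternating the two models as in the dual-dynamics rollout described above."""
    state, actions = start, []
    for _ in range(horizon):
        action = propose_action(state, goal)
        actions.append(action)
        state = forward_dynamics(state, action)
    return actions, state

start = np.zeros(3)
goal = np.ones(3)
actions, final_state = plan(start, goal)
# Each step halves the remaining distance, so the rollout converges toward the goal.
```

In the toy dynamics above, the residual distance to the goal shrinks geometrically with the horizon; the actual paper instead learns both models from video so that planning in the latent space corresponds to feasible action sequences.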

Cite

Text

Chang et al. "Procedure Planning in Instructional Videos." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58621-8_20

Markdown

[Chang et al. "Procedure Planning in Instructional Videos." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/chang2020eccv-procedure/) doi:10.1007/978-3-030-58621-8_20

BibTeX

@inproceedings{chang2020eccv-procedure,
  title     = {{Procedure Planning in Instructional Videos}},
  author    = {Chang, Chien-Yi and Huang, De-An and Xu, Danfei and Adeli, Ehsan and Fei-Fei, Li and Niebles, Juan Carlos},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58621-8_20},
  url       = {https://mlanthology.org/eccv/2020/chang2020eccv-procedure/}
}