Multimodal Subtask Graph Generation from Instructional Videos

Abstract

Real-world tasks consist of multiple inter-dependent subtasks (e.g., a dirty pan needs to be washed before cooking). In this work, we aim to model the causal dependencies between such subtasks from instructional videos describing the task. This is a challenging problem since complete information about the world is often inaccessible from videos, which demands robust learning mechanisms for understanding the causal structure of events. We present Multimodal Subtask Graph Generation (MSG$^2$), an approach that constructs a Subtask Graph defining the dependencies between the subtasks relevant to a task from noisy web videos. Graphs generated by our multimodal approach are closer to human-annotated graphs than those of prior approaches. On the downstream task of next subtask prediction, MSG$^2$ is further 85% and 30% more accurate than recent video transformer models on the ProceL and CrossTask datasets, respectively.
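
To make the notion of a subtask graph concrete, here is a minimal, hypothetical Python sketch (not the paper's model; the task and all subtask names are invented for illustration): the graph is encoded as a precondition map, and a helper returns the subtasks eligible to be performed next given the set of completed ones, which is the kind of query next subtask prediction answers.

# Toy subtask graph for a hypothetical "make an omelette" task
# (illustrative only; not the paper's data or implementation).
# Each subtask maps to the set of subtasks that must precede it.
PRECONDITIONS = {
    "wash pan": set(),
    "crack eggs": set(),
    "heat pan": {"wash pan"},
    "pour eggs": {"crack eggs", "heat pan"},
    "serve": {"pour eggs"},
}

def eligible_next_subtasks(completed):
    """Return subtasks not yet done whose preconditions are all satisfied."""
    return [
        subtask
        for subtask, prereqs in PRECONDITIONS.items()
        if subtask not in completed and prereqs <= completed
    ]

print(eligible_next_subtasks({"wash pan"}))  # -> ['crack eggs', 'heat pan']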

Cite

Text

Jang et al. "Multimodal Subtask Graph Generation from Instructional Videos." ICLR 2023 Workshops: MRL, 2023.

Markdown

[Jang et al. "Multimodal Subtask Graph Generation from Instructional Videos." ICLR 2023 Workshops: MRL, 2023.](https://mlanthology.org/iclrw/2023/jang2023iclrw-multimodal/)

BibTeX

@inproceedings{jang2023iclrw-multimodal,
  title     = {{Multimodal Subtask Graph Generation from Instructional Videos}},
  author    = {Jang, Yunseok and Sohn, Sungryull and Logeswaran, Lajanugen and Luo, Tiange and Lee, Moontae and Lee, Honglak},
  booktitle = {ICLR 2023 Workshops: MRL},
  year      = {2023},
  url       = {https://mlanthology.org/iclrw/2023/jang2023iclrw-multimodal/}
}