VideoGUI: A Benchmark for GUI Automation from Instructional Videos
Abstract
Graphical User Interface (GUI) automation holds significant promise for enhancing human productivity by assisting with computer tasks. Existing task formulations primarily focus on simple tasks that can be specified by a single, language-only instruction, such as “Insert a new slide.” In this work, we introduce VideoGUI, a novel multi-modal benchmark designed to evaluate GUI assistants on visual-centric GUI tasks. Sourced from high-quality web instructional videos, our benchmark focuses on tasks involving professional and novel software (e.g., Adobe Photoshop or Stable Diffusion WebUI) and complex activities (e.g., video editing). VideoGUI evaluates GUI assistants through a hierarchical process, allowing for identification of the specific levels at which they may fail: (i) high-level planning: reconstruct procedural subtasks from visual conditions without language descriptions; (ii) middle-level planning: generate sequences of precise action narrations based on visual state (i.e., screenshot) and goals; (iii) atomic action execution: perform specific actions such as accurately clicking designated elements. For each level, we design evaluation metrics across individual dimensions to provide clear signals, such as individual performance in clicking, dragging, typing, and scrolling for atomic action execution. Our evaluation on VideoGUI reveals that even the SoTA large multimodal model GPT4o performs poorly on visual-centric GUI tasks, especially for high-level planning. The data and code are available at https://github.com/showlab/videogui.
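The hierarchical setup can be pictured concretely. The following is a minimal Python sketch of how the three evaluation levels and a per-action click check might be represented; the dataclass fields and the inside-the-bounding-box rule are illustrative assumptions, not the benchmark's actual schema or scoring code (see the linked repository for the official implementation).

# Hypothetical sketch of VideoGUI's three levels and a simple click check.
# Field names and the hit-test rule are assumptions, not the official metric.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class AtomicAction:
    kind: str                                   # "click", "drag", "type", or "scroll"
    target_bbox: Tuple[int, int, int, int]      # (x1, y1, x2, y2) of the target element
    text: str = ""                              # payload for "type" actions

@dataclass
class Subtask:
    narration: str                              # middle-level action narration
    actions: List[AtomicAction] = field(default_factory=list)

@dataclass
class Task:
    goal_screenshots: List[str]                 # visual conditions for high-level planning
    subtasks: List[Subtask] = field(default_factory=list)

def click_hit(pred_xy: Tuple[int, int], action: AtomicAction) -> bool:
    """Count a predicted click as correct if it lands inside the target element's box."""
    x, y = pred_xy
    x1, y1, x2, y2 = action.target_bbox
    return x1 <= x <= x2 and y1 <= y <= y2

# Example: a predicted click at (512, 300) against a ground-truth element box.
gt = AtomicAction(kind="click", target_bbox=(480, 280, 560, 320))
print(click_hit((512, 300), gt))                # True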
Cite
Text
Lin et al. "VideoGUI: A Benchmark for GUI Automation from Instructional Videos." Neural Information Processing Systems, 2024. doi:10.52202/079017-2214
Markdown
[Lin et al. "VideoGUI: A Benchmark for GUI Automation from Instructional Videos." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/lin2024neurips-videogui/) doi:10.52202/079017-2214
BibTeX
@inproceedings{lin2024neurips-videogui,
title = {{VideoGUI: A Benchmark for GUI Automation from Instructional Videos}},
author = {Lin, Kevin Qinghong and Li, Linjie and Gao, Difei and Wu, Qinchen and Yan, Mingyi and Yang, Zhengyuan and Wang, Lijuan and Shou, Mike Zheng},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-2214},
url = {https://mlanthology.org/neurips/2024/lin2024neurips-videogui/}
}