Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction

Abstract

Automating GUI tasks remains challenging due to reliance on textual representations, platform-specific action spaces, and limited reasoning capabilities. We introduce Aguvis, a unified vision-based framework for autonomous GUI agents that directly operates on screen images, standardizes cross-platform interactions, and incorporates structured reasoning via inner monologue. To enable this, we construct the Aguvis data collection, a large-scale dataset with multimodal grounding and reasoning annotations, and develop a two-stage training pipeline that separates GUI grounding from planning and reasoning. Experiments show that Aguvis achieves state-of-the-art performance across offline and real-world online benchmarks, making it the first fully autonomous vision-based GUI agent that operates without relying on closed-source models. We open-source all datasets, models, and training recipes at https://aguvis-project.github.io to advance future research.

Cite

Text

Xu et al. "Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Xu et al. "Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/xu2025icml-aguvis/)

BibTeX

@inproceedings{xu2025icml-aguvis,
  title     = {{Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction}},
  author    = {Xu, Yiheng and Wang, Zekun and Wang, Junli and Lu, Dunjie and Xie, Tianbao and Saha, Amrita and Sahoo, Doyen and Yu, Tao and Xiong, Caiming},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {69772--69805},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/xu2025icml-aguvis/}
}