InfantAgent-Next: A Multimodal Generalist Agent for Automated Computer Interaction

Abstract

This paper introduces \textsc{InfantAgent-Next}, a generalist agent that interacts with computers in a multimodal manner, encompassing text, images, audio, and video. Unlike existing approaches that either build intricate workflows around a single large model or provide only workflow-level modularity, our agent integrates tool-based and pure-vision agents within a highly modular architecture, enabling different models to collaboratively solve decoupled tasks step by step. Our generality is demonstrated by evaluation not only on pure vision-based real-world benchmarks (i.e., OSWorld), but also on more general or tool-intensive benchmarks (e.g., GAIA and SWE-Bench). Specifically, we achieve a $\mathbf{7.27\%}$ accuracy gain over Claude-Computer-Use on OSWorld. Code and evaluation scripts are included in the supplementary material and will be released as open source.
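As a rough illustration of the modular design the abstract describes, the minimal Python sketch below shows how decoupled subtasks might be routed to either a tool-based or a pure-vision sub-agent step by step. This is not the authors' implementation; all names (Subtask, tool_agent, vision_agent, run) are hypothetical, and the sub-agents are stubs standing in for real models.

  # Conceptual sketch (hypothetical names): route each decoupled subtask
  # to the sub-agent suited to its modality, so different models can
  # collaborate on one overall task.
  from dataclasses import dataclass
  from typing import Callable, Dict, List

  @dataclass
  class Subtask:
      kind: str      # "tool" for shell/file/code actions, "vision" for GUI steps
      payload: str   # the instruction for this step

  def tool_agent(payload: str) -> str:
      # Stub: a tool-based model would execute commands or edit files here.
      return f"[tool] executed: {payload}"

  def vision_agent(payload: str) -> str:
      # Stub: a vision model would ground the instruction in a screenshot here.
      return f"[vision] clicked: {payload}"

  def run(plan: List[Subtask]) -> List[str]:
      # Dispatch each subtask to the matching sub-agent, in order.
      routes: Dict[str, Callable[[str], str]] = {
          "tool": tool_agent,
          "vision": vision_agent,
      }
      return [routes[task.kind](task.payload) for task in plan]

  if __name__ == "__main__":
      plan = [
          Subtask("vision", "open the settings menu"),
          Subtask("tool", "grep -r 'timeout' config/"),
      ]
      for result in run(plan):
          print(result)

The point of the sketch is the dispatch table: because subtasks are decoupled by modality, each sub-agent can be backed by a different model without changing the overall loop.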

Cite

Text

Lei et al. "InfantAgent-Next: A Multimodal Generalist Agent for Automated Computer Interaction." Advances in Neural Information Processing Systems, 2025.

Markdown

[Lei et al. "InfantAgent-Next: A Multimodal Generalist Agent for Automated Computer Interaction." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/lei2025neurips-infantagentnext/)

BibTeX

@inproceedings{lei2025neurips-infantagentnext,
  title     = {{InfantAgent-Next: A Multimodal Generalist Agent for Automated Computer Interaction}},
  author    = {Lei, Bin and Kang, Weitai and Zhang, Zijian and Chen, Winson and Xie, Xi and Zuo, Shan and Xie, Mimi and Payani, Ali and Hong, Mingyi and Yan, Yan and Ding, Caiwen},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/lei2025neurips-infantagentnext/}
}