Lightweight Neural App Control
Abstract
This paper introduces a novel mobile phone control architecture, termed "app agents", for efficient interactions and controls across various Android apps. The proposed Lightweight Multi-modal App Control (LiMAC) takes as input a textual goal and a sequence of past mobile observations, such as screenshots and corresponding UI trees, to generate precise actions. To address the computational constraints inherent to smartphones, within LiMAC, we introduce a small Action Transformer (AcT) integrated with a fine-tuned vision-language model (VLM) for real-time decision-making and task execution. We evaluate LiMAC on two open-source mobile control datasets, demonstrating the superior performance of our small-form-factor approach against fine-tuned versions of open-source VLMs, such as Florence2 and Qwen2-VL. It also significantly outperforms prompt engineering baselines utilising closed-source foundation models like GPT-4o. More specifically, LiMAC increases the overall action accuracy by up to 19% compared to fine-tuned VLMs, and up to 42% compared to prompt-engineering baselines.
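The abstract describes a two-stage decision pipeline: a small Action Transformer (AcT) handles most action-type and UI-element decisions, while a fine-tuned VLM is invoked only when needed. The sketch below illustrates that division of labour under stated assumptions; the module names, embedding dimensions, action-type vocabulary, and the VLM fallback hook are all hypothetical stand-ins, not the paper's actual implementation.

```python
# A minimal sketch of the control loop described in the abstract: a small
# transformer (AcT-like) scores the action type and target UI element from
# embeddings of the textual goal and the current UI tree, and defers to a
# (hypothetical) fine-tuned VLM only when free-form text must be generated.
# All names, dimensions, and the action vocabulary below are assumptions.
import torch
import torch.nn as nn

ACTION_TYPES = ["click", "scroll", "input-text", "open-app", "wait"]  # assumed vocabulary


class TinyActionTransformer(nn.Module):
    def __init__(self, d_model: int = 128, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.type_head = nn.Linear(d_model, len(ACTION_TYPES))  # which action to take
        self.elem_head = nn.Linear(d_model, 1)                  # which UI element to act on

    def forward(self, goal_emb: torch.Tensor, elem_embs: torch.Tensor):
        # goal_emb: (1, d_model) embedded textual goal
        # elem_embs: (n_elems, d_model) embedded UI-tree elements
        tokens = torch.cat([goal_emb, elem_embs], dim=0).unsqueeze(0)  # (1, 1 + n_elems, d)
        hidden = self.encoder(tokens)
        action_logits = self.type_head(hidden[:, 0])                   # read off the goal token
        element_logits = self.elem_head(hidden[:, 1:]).squeeze(-1)     # score each UI element
        return action_logits, element_logits


def act(goal_emb, elem_embs, vlm_generate_text=None):
    """One decision step: pick action type and target element; ask the VLM only for text."""
    model = TinyActionTransformer()
    action_logits, element_logits = model(goal_emb, elem_embs)
    action = ACTION_TYPES[action_logits.argmax(-1).item()]
    element = element_logits.argmax(-1).item()
    text = vlm_generate_text() if action == "input-text" and vlm_generate_text else None
    return {"action": action, "element": element, "text": text}


if __name__ == "__main__":
    torch.manual_seed(0)
    goal = torch.randn(1, 128)       # stand-in for an embedded goal instruction
    elements = torch.randn(6, 128)   # stand-in for six embedded UI-tree elements
    print(act(goal, elements, vlm_generate_text=lambda: "example text"))
```

In this sketch the lightweight transformer resolves the common cases (clicks, scrolls) on-device, which is the efficiency argument the abstract makes, and the costly VLM call happens only for the text-generation action.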
Cite
Text
Christianos et al. "Lightweight Neural App Control." NeurIPS 2024 Workshops: OWA, 2024.

Markdown

[Christianos et al. "Lightweight Neural App Control." NeurIPS 2024 Workshops: OWA, 2024.](https://mlanthology.org/neuripsw/2024/christianos2024neuripsw-lightweight/)

BibTeX
@inproceedings{christianos2024neuripsw-lightweight,
title = {{Lightweight Neural App Control}},
author = {Christianos, Filippos and Papoudakis, Georgios and Coste, Thomas and Hao, Jianye and Wang, Jun and Shao, Kun},
booktitle = {NeurIPS 2024 Workshops: OWA},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/christianos2024neuripsw-lightweight/}
}