Attention-Driven GUI Grounding: Leveraging Pretrained Multimodal Large Language Models Without Fine-Tuning

Cite

Text

Xu et al. "Attention-Driven GUI Grounding: Leveraging Pretrained Multimodal Large Language Models Without Fine-Tuning." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/aaai.v39i8.32957

Markdown

[Xu et al. "Attention-Driven GUI Grounding: Leveraging Pretrained Multimodal Large Language Models Without Fine-Tuning." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/xu2025aaai-attention/) doi:10.1609/aaai.v39i8.32957

BibTeX

@inproceedings{xu2025aaai-attention,
  title     = {{Attention-Driven GUI Grounding: Leveraging Pretrained Multimodal Large Language Models Without Fine-Tuning}},
  author    = {Xu, Hai-Ming and Chen, Qi and Wang, Lei and Liu, Lingqiao},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {8851--8859},
  doi       = {10.1609/aaai.v39i8.32957},
  url       = {https://mlanthology.org/aaai/2025/xu2025aaai-attention/}
}