WalkVLM: Aid Visually Impaired People Walking by Vision Language Model

Abstract

Approximately 200 million individuals around the world suffer from varying degrees of visual impairment, making it crucial to leverage AI technology to offer walking assistance for these people. With the recent progress of vision-language models (VLMs), applying VLMs to offer walking guidance has become popular. However, existing walking-guidance methods are mainly based on self-curated question-answering datasets that are not publicly accessible, and there is no standardized benchmark for training or evaluation. Moreover, walking assistance often requires real-time analysis of streaming video and the generation of concise yet informative reminders, with which VLMs struggle due to their verbose responses and low inference efficiency. In this paper, we introduce the first large-scale dataset dedicated to walking assistance, comprising 12,000 video-annotation pairs, to provide a unified benchmark for training and evaluating systems that help visually impaired individuals walk. Furthermore, we propose a WalkVLM model, which employs chain of thought for hierarchical planning to generate concise but informative reminders and utilizes temporal-aware adaptive prediction to reduce the temporal redundancy of reminders. Finally, we establish a solid benchmark for the blind walking task and verify the advantages of WalkVLM in streaming video processing for this task compared with other VLMs.
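To illustrate the temporal-redundancy idea in the abstract, the sketch below shows one simple way an adaptive predictor could gate reminders on a video stream: only emit when the scene looks risky and enough time has passed since the last reminder. All names here (`danger_score`, `adaptive_reminders`, `threshold`, `min_gap`) are hypothetical illustrations, not the paper's actual method or API.

```python
def danger_score(frame_features):
    # Hypothetical scorer: in a real system a VLM-based predictor would
    # rate how hazardous the current street scene is (0 = safe, 1 = danger).
    # Here we just average the per-frame features for illustration.
    return sum(frame_features) / len(frame_features)

def adaptive_reminders(stream, threshold=0.6, min_gap=5):
    """Emit a reminder only when the scene is risky AND at least
    `min_gap` frames have passed since the last reminder, suppressing
    temporally redundant output on a continuous stream."""
    reminders = []
    last_emitted = -min_gap  # allow an immediate first reminder
    for t, frame in enumerate(stream):
        score = danger_score(frame)
        if score >= threshold and t - last_emitted >= min_gap:
            reminders.append((t, score))
            last_emitted = t
    return reminders
```

For example, a stream of ten consistently risky frames with `min_gap=5` yields reminders only at frames 0 and 5 rather than at every frame, which is the kind of reminder sparsification the abstract attributes to temporal-aware adaptive prediction.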

Cite

Text

Yuan et al. "WalkVLM: Aid Visually Impaired People Walking by Vision Language Model." International Conference on Computer Vision, 2025.

Markdown

[Yuan et al. "WalkVLM: Aid Visually Impaired People Walking by Vision Language Model." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/yuan2025iccv-walkvlm/)

BibTeX

@inproceedings{yuan2025iccv-walkvlm,
  title     = {{WalkVLM: Aid Visually Impaired People Walking by Vision Language Model}},
  author    = {Yuan, Zhiqiang and Zhang, Ting and Zhu, Yeshuang and Zhang, Jiapei and Deng, Ying and Jia, Zexi and Luo, Peixiang and Duan, Xiaoyue and Zhou, Jie and Zhang, Jinchao},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {9845--9854},
  url       = {https://mlanthology.org/iccv/2025/yuan2025iccv-walkvlm/}
}