Are Visual-Language Models Effective in Action Recognition? A Comparative Study
Abstract
Current vision-language foundation models, such as CLIP, have recently shown significant performance improvements across various downstream tasks. However, whether such foundation models also benefit more complex, fine-grained action recognition tasks remains an open question. To answer this question and to better identify future research directions for human behavior analysis in the wild, this paper provides a large-scale study of, and insights into, current state-of-the-art vision foundation models by comparing their transferability to zero-shot and frame-wise action recognition tasks. Extensive experiments are conducted on recent fine-grained, human-centric action recognition datasets (e.g., Toyota Smarthome, Penn Action, UAV-Human, TSU, Charades), covering both action classification and segmentation.
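The zero-shot transfer setting evaluated here can be illustrated with a minimal sketch (not the paper's exact evaluation protocol): action names are embedded as text prompts, frame features are averaged over a clip, and the action with the highest cosine similarity is predicted. The prompt template, frame sampling, and label names below are illustrative assumptions.

```python
# Minimal sketch of zero-shot action classification with CLIP, assuming the
# open-source OpenAI CLIP package (https://github.com/openai/CLIP).
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Placeholder action labels and prompt template (illustrative assumptions).
action_names = ["drink from cup", "use laptop", "walk"]
prompts = clip.tokenize([f"a photo of a person {a}" for a in action_names]).to(device)

# A few frames sampled from one video clip (paths are placeholders).
frame_paths = ["frame_000.jpg", "frame_010.jpg", "frame_020.jpg"]
frames = torch.stack([preprocess(Image.open(p)) for p in frame_paths]).to(device)

with torch.no_grad():
    frame_feats = model.encode_image(frames)            # (T, D) per-frame features
    video_feat = frame_feats.mean(dim=0, keepdim=True)  # temporal average pooling
    text_feats = model.encode_text(prompts)             # (C, D) class embeddings

    # L2-normalize and score by cosine similarity.
    video_feat = video_feat / video_feat.norm(dim=-1, keepdim=True)
    text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)
    logits = 100.0 * video_feat @ text_feats.T

pred = action_names[logits.argmax(dim=-1).item()]
print(f"predicted action: {pred}")
```

Frame-wise (segmentation-style) evaluation follows the same idea, except the similarity is computed per frame rather than after temporal pooling.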
Cite
Text
Ali et al. "Are Visual-Language Models Effective in Action Recognition? a Comparative Study." European Conference on Computer Vision Workshops, 2024. doi:10.1007/978-3-031-91581-9_4Markdown
[Ali et al. "Are Visual-Language Models Effective in Action Recognition? a Comparative Study." European Conference on Computer Vision Workshops, 2024.](https://mlanthology.org/eccvw/2024/ali2024eccvw-visuallanguage/) doi:10.1007/978-3-031-91581-9_4BibTeX
@inproceedings{ali2024eccvw-visuallanguage,
title = {{Are Visual-Language Models Effective in Action Recognition? A Comparative Study}},
author = {Ali, Mahmoud and Yang, Di and Brémond, François},
booktitle = {European Conference on Computer Vision Workshops},
year = {2024},
pages = {46--59},
doi = {10.1007/978-3-031-91581-9_4},
url = {https://mlanthology.org/eccvw/2024/ali2024eccvw-visuallanguage/}
}