VILLS : Video-Image Learning to Learn Semantics for Person Re-Identification
Abstract
Person re-identification is a research area with significant real-world applications. Despite recent progress, existing methods face challenges in robust re-identification in the wild, e.g., by focusing only on a particular modality or on unreliable patterns such as clothing. A generalized method is highly desired but remains elusive due to issues such as the trade-off between spatial and temporal resolution and inaccurate feature extraction. We propose VILLS (Video-Image Learning to Learn Semantics), a self-supervised method that jointly learns spatial and temporal features from images and videos. VILLS first designs a local semantic extraction module that adaptively extracts semantically consistent and robust spatial features. Then, VILLS designs a unified feature learning and adaptation module to represent image and video modalities in a consistent feature space. By leveraging self-supervised large-scale pre-training, VILLS establishes a new state of the art that significantly outperforms existing image- and video-based methods.
Cite
Text
Huang et al. "VILLS : Video-Image Learning to Learn Semantics for Person Re-Identification." Winter Conference on Applications of Computer Vision, 2025.
Markdown
[Huang et al. "VILLS : Video-Image Learning to Learn Semantics for Person Re-Identification." Winter Conference on Applications of Computer Vision, 2025.](https://mlanthology.org/wacv/2025/huang2025wacv-vills/)
BibTeX
@inproceedings{huang2025wacv-vills,
title = {{VILLS : Video-Image Learning to Learn Semantics for Person Re-Identification}},
author = {Huang, Siyuan and Kathirvel, Ram Prabhakar and Guo, Yuxiang and Chellappa, Rama and Peng, Cheng},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2025},
pages = {5969--5979},
url = {https://mlanthology.org/wacv/2025/huang2025wacv-vills/}
}