Guided Attentive Feature Fusion for Multispectral Pedestrian Detection
Abstract
Multispectral image pairs can provide complementary visual information, making pedestrian detection systems more robust and reliable. To benefit from both the RGB and thermal IR modalities, we introduce a novel attentive multispectral feature fusion approach. Under the guidance of inter- and intra-modality attention modules, our deep learning architecture learns to dynamically weight and fuse the multispectral features. Experiments on two public multispectral object detection datasets demonstrate that the proposed approach significantly improves detection accuracy at a low computational cost.
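The core idea of attentive fusion, weighting each modality's features by a learned gate before combining them, can be illustrated with a minimal sketch. This is not the authors' architecture: the scalar parameters `w_rgb`, `w_ir`, and `bias` are hypothetical stand-ins for the learned attention modules, and the features are plain lists rather than convolutional feature maps.

```python
import math

def sigmoid(x):
    """Standard logistic function, squashing any real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def attentive_fusion(rgb_feat, ir_feat, w_rgb=1.0, w_ir=1.0, bias=0.0):
    """Fuse paired RGB and thermal features with a per-position gate.

    The gate is computed from both modalities (a toy stand-in for an
    inter-modality attention module); w_rgb, w_ir and bias are
    hypothetical learned parameters.
    """
    fused = []
    for r, t in zip(rgb_feat, ir_feat):
        # Attention weight in (0, 1), conditioned on both modalities.
        a = sigmoid(w_rgb * r + w_ir * t + bias)
        # Convex combination: a weights the RGB feature, (1 - a) the thermal one.
        fused.append(a * r + (1.0 - a) * t)
    return fused
```

Because the gate is a convex combination, each fused value stays between the two modality values: when the attention saturates toward 1 the detector relies on RGB evidence, and toward 0 on thermal evidence, which is the dynamic weighting behaviour the abstract describes.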
Cite

Zhang et al. "Guided Attentive Feature Fusion for Multispectral Pedestrian Detection." Winter Conference on Applications of Computer Vision, 2021.

BibTeX
@inproceedings{zhang2021wacv-guided,
title = {{Guided Attentive Feature Fusion for Multispectral Pedestrian Detection}},
author = {Zhang, Heng and Fromont, Elisa and Lefevre, Sebastien and Avignon, Bruno},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2021},
pages = {72--80},
url = {https://mlanthology.org/wacv/2021/zhang2021wacv-guided/}
}