One Model for ALL: Low-Level Task Interaction Is a Key to Task-Agnostic Image Fusion
Abstract
Advanced image fusion methods mostly prioritise high-level missions, where task interaction struggles with semantic gaps, requiring complex bridging mechanisms. In contrast, we propose to leverage low-level vision tasks from digital photography fusion, allowing for effective feature interaction through pixel-level supervision. This new paradigm provides strong guidance for unsupervised multimodal fusion without relying on abstract semantics, enhancing task-shared feature learning for broader applicability. Owing to the hybrid image features and enhanced universal representations, the proposed GIFNet supports diverse fusion tasks, achieving high performance across both seen and unseen scenarios with a single model. Uniquely, experimental results reveal that our framework also supports single-modality enhancement, offering superior flexibility for practical applications. Our code will be available at https://github.com/AWCXV/GIFNet.
Cite
Text

Cheng et al. "One Model for ALL: Low-Level Task Interaction Is a Key to Task-Agnostic Image Fusion." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.02617

Markdown

[Cheng et al. "One Model for ALL: Low-Level Task Interaction Is a Key to Task-Agnostic Image Fusion." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/cheng2025cvpr-one/) doi:10.1109/CVPR52734.2025.02617

BibTeX
@inproceedings{cheng2025cvpr-one,
title = {{One Model for ALL: Low-Level Task Interaction Is a Key to Task-Agnostic Image Fusion}},
author = {Cheng, Chunyang and Xu, Tianyang and Feng, Zhenhua and Wu, Xiaojun and Tang, Zhangyong and Li, Hui and Zhang, Zeyang and Atito, Sara and Awais, Muhammad and Kittler, Josef},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2025},
pages = {28102--28112},
doi = {10.1109/CVPR52734.2025.02617},
url = {https://mlanthology.org/cvpr/2025/cheng2025cvpr-one/}
}