TransMEF: A Transformer-Based Multi-Exposure Image Fusion Framework Using Self-Supervised Multi-Task Learning

Abstract

In this paper, we propose TransMEF, a transformer-based multi-exposure image fusion framework that uses self-supervised multi-task learning. The framework is built on an encoder-decoder network that can be trained on large natural-image datasets and does not require ground-truth fusion images. We design three self-supervised reconstruction tasks according to the characteristics of multi-exposure images and conduct them simultaneously via multi-task learning; through this process, the network learns the characteristics of multi-exposure images and extracts more generalized features. In addition, to compensate for the inability of CNN-based architectures to establish long-range dependencies, we design an encoder that combines a CNN module with a transformer module, which enables the network to attend to both local and global information. We evaluated our method against 11 competitive traditional and deep learning-based methods on a recently released multi-exposure image fusion benchmark dataset, and our method achieved the best performance in both subjective and objective evaluations. Code will be available at https://github.com/miccaiif/TransMEF.
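
The core architectural idea in the abstract, a CNN branch for local features combined with a transformer branch for long-range dependencies, can be sketched in PyTorch as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the single-channel 256x256 input, the channel width, the patch size, and the layer counts are all hypothetical choices made here for concreteness.

import torch
import torch.nn as nn

class CNNTransformerEncoder(nn.Module):
    """Hypothetical sketch of an encoder that fuses a CNN branch (local
    detail) with a transformer branch (global context), in the spirit of
    the encoder described in the abstract."""

    def __init__(self, in_ch=1, dim=64, patch=8, img_size=256):
        super().__init__()
        # CNN branch: stacked 3x3 convolutions capture local structure.
        self.cnn = nn.Sequential(
            nn.Conv2d(in_ch, dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Transformer branch: non-overlapping patches become tokens, and
        # self-attention models dependencies between distant regions.
        self.patch_embed = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        n_patches = (img_size // patch) ** 2
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=4)
        self.up = nn.Upsample(scale_factor=patch, mode="bilinear", align_corners=False)
        # A 1x1 convolution merges the local and global feature maps.
        self.fuse = nn.Conv2d(2 * dim, dim, kernel_size=1)

    def forward(self, x):
        local_feat = self.cnn(x)                      # (B, dim, H, W)
        tokens = self.patch_embed(x)                  # (B, dim, H/p, W/p)
        b, c, h, w = tokens.shape
        tokens = tokens.flatten(2).transpose(1, 2)    # (B, h*w, dim)
        tokens = self.transformer(tokens + self.pos_embed)
        global_feat = tokens.transpose(1, 2).reshape(b, c, h, w)
        global_feat = self.up(global_feat)            # back to (B, dim, H, W)
        return self.fuse(torch.cat([local_feat, global_feat], dim=1))

# Usage: one grayscale 256x256 exposure yields a (1, 64, 256, 256) feature map.
encoder = CNNTransformerEncoder()
features = encoder(torch.randn(1, 1, 256, 256))

In a self-supervised setup like the one the abstract describes, such an encoder would be trained jointly with a decoder to reconstruct clean images from corrupted inputs; the three specific reconstruction tasks are defined in the paper itself.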

Cite

Text

Qu et al. "TransMEF: A Transformer-Based Multi-Exposure Image Fusion Framework Using Self-Supervised Multi-Task Learning." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/aaai.v36i2.20109

Markdown

[Qu et al. "TransMEF: A Transformer-Based Multi-Exposure Image Fusion Framework Using Self-Supervised Multi-Task Learning." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/qu2022aaai-transmef/) doi:10.1609/aaai.v36i2.20109

BibTeX

@inproceedings{qu2022aaai-transmef,
  title     = {{TransMEF: A Transformer-Based Multi-Exposure Image Fusion Framework Using Self-Supervised Multi-Task Learning}},
  author    = {Qu, Linhao and Liu, Shaolei and Wang, Manning and Song, Zhijian},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {2126--2134},
  doi       = {10.1609/aaai.v36i2.20109},
  url       = {https://mlanthology.org/aaai/2022/qu2022aaai-transmef/}
}