Multi-Modality Deep Network for JPEG Artifacts Reduction
Abstract
In recent years, many convolutional neural network-based models have been designed for JPEG artifacts reduction and have achieved notable progress. However, few methods are suited to reducing artifacts in images compressed at extremely low bitrates. The main challenge is that a highly compressed image loses too much information, making it difficult to reconstruct a high-quality image. To address this issue, we propose a multimodal fusion learning method for text-guided JPEG artifacts reduction, in which the corresponding text description not only provides potential prior information about the highly compressed image, but also serves as supplementary information to assist image deblocking. We fuse image features and text semantic features from global and local perspectives, respectively, and design a loss built upon contrastive learning to produce visually pleasing results. Extensive experiments, including a user study, show that our method achieves better deblocking results than state-of-the-art methods.
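The abstract mentions a loss built upon contrastive learning to align image and text features, but does not give its exact form. As a hedged illustration only, the sketch below shows a common symmetric InfoNCE-style contrastive loss between L2-normalized image and text embeddings; the function name, the temperature value, and the assumption of one matched image-text pair per row are all illustrative choices, not details from the paper.

```python
import numpy as np

def info_nce_loss(img_feats, txt_feats, temperature=0.07):
    """Symmetric InfoNCE-style contrastive loss.

    Assumes row i of img_feats is the positive match for row i of
    txt_feats; all other rows in the batch act as negatives.
    """
    # L2-normalize so the dot product becomes cosine similarity.
    img = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    txt = txt_feats / np.linalg.norm(txt_feats, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (N, N) similarity matrix
    idx = np.arange(len(img))           # matched pairs lie on the diagonal

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()            # pick diagonal targets

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Under such a loss, embeddings of matched image-text pairs are pulled together while mismatched pairs in the batch are pushed apart, which is one standard way to couple the two modalities during training.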
Cite
Text

Jiang et al. "Multi-Modality Deep Network for JPEG Artifacts Reduction." International Joint Conference on Artificial Intelligence, 2023. doi:10.24963/IJCAI.2023/429

Markdown

[Jiang et al. "Multi-Modality Deep Network for JPEG Artifacts Reduction." International Joint Conference on Artificial Intelligence, 2023.](https://mlanthology.org/ijcai/2023/jiang2023ijcai-multi/) doi:10.24963/IJCAI.2023/429

BibTeX
@inproceedings{jiang2023ijcai-multi,
title = {{Multi-Modality Deep Network for JPEG Artifacts Reduction}},
author = {Jiang, Xuhao and Tan, Weimin and Lin, Qing and Ma, Chenxi and Yan, Bo and Shen, Liquan},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2023},
pages = {3857-3865},
doi = {10.24963/IJCAI.2023/429},
url = {https://mlanthology.org/ijcai/2023/jiang2023ijcai-multi/}
}