TransForensics: Image Forgery Localization with Dense Self-Attention
Abstract
Advanced image editing tools and skilled manipulation now produce highly realistic tampered images that can easily evade image forensic systems, making authenticity verification increasingly difficult. To tackle this challenging problem, we introduce TransForensics, a novel image forgery localization method inspired by Transformers. The two major components of our framework are dense self-attention encoders and dense correction modules. The former models global context and all pairwise interactions between local patches at different scales, while the latter improves the transparency of the hidden layers and corrects the outputs from different branches. Compared with previous traditional and deep learning methods, TransForensics not only captures discriminative representations and produces high-quality mask predictions, but is also not limited by tampering types or patch sequence orders. Experiments on the main benchmarks show that TransForensics outperforms state-of-the-art methods by a large margin.
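The paper does not include code here, but the core idea of a dense self-attention encoder can be illustrated with a minimal PyTorch sketch: flatten one CNN feature scale into patch tokens and run all-pairs self-attention over them to produce a per-pixel forgery map. The class name `DenseSelfAttentionBranch` and all hyperparameters below are illustrative assumptions, not the authors' implementation; omitting positional encodings loosely mirrors the abstract's claim that the method is not tied to patch sequence order.

```python
import torch
import torch.nn as nn

class DenseSelfAttentionBranch(nn.Module):
    """Hypothetical sketch of one encoder branch: a standard Transformer
    encoder applied to the flattened feature map of a single CNN scale,
    so every spatial position attends to every other position."""

    def __init__(self, in_channels, d_model=256, n_heads=8, n_layers=2):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, d_model, kernel_size=1)  # channel projection
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Conv2d(d_model, 1, kernel_size=1)  # per-pixel forgery logit

    def forward(self, feat):                   # feat: (B, C, H, W)
        x = self.proj(feat)                    # (B, d_model, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, d_model): one token per patch
        tokens = self.encoder(tokens)          # dense all-pairs self-attention
        x = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.head(x)                    # (B, 1, H, W) branch prediction

# Toy usage: one branch on a 32-channel feature map; the real model would
# run such branches at several scales and fuse/correct their outputs.
branch = DenseSelfAttentionBranch(in_channels=32)
mask_logits = branch(torch.randn(1, 32, 28, 28))
print(mask_logits.shape)  # torch.Size([1, 1, 28, 28])
```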
Cite
Text
Hao et al. "TransForensics: Image Forgery Localization with Dense Self-Attention." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.01478

Markdown

[Hao et al. "TransForensics: Image Forgery Localization with Dense Self-Attention." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/hao2021iccv-transforensics/) doi:10.1109/ICCV48922.2021.01478

BibTeX
@inproceedings{hao2021iccv-transforensics,
title = {{TransForensics: Image Forgery Localization with Dense Self-Attention}},
author = {Hao, Jing and Zhang, Zhixin and Yang, Shicai and Xie, Di and Pu, Shiliang},
booktitle = {International Conference on Computer Vision},
year = {2021},
pages = {15055--15064},
doi = {10.1109/ICCV48922.2021.01478},
url = {https://mlanthology.org/iccv/2021/hao2021iccv-transforensics/}
}