Deep Representation Learning for Metadata Verification
Abstract
Verifying the authenticity of a given image is an emerging topic in media forensics research. Many current works focus on content manipulation detection, which aims to detect possible alterations in the image content. However, tampering may occur not only in the image content itself but also in the metadata associated with the image, such as the timestamp, geo-tag, and captions. We address metadata verification, which aims to verify the authenticity of the metadata associated with an image, using a deep representation learning approach. We propose a deep neural network called Attentive Bilinear Convolutional Neural Networks (AB-CNN) that learns appropriate representations for metadata verification. AB-CNN addresses several common challenges in verifying a specific type of metadata, namely events (i.e., times and places): a lack of training data, fine-grained differences between distinct events, and diverse visual content within the same event. Experimental results on three different datasets show that the proposed model provides a substantial improvement over the baseline method.
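The abstract does not detail AB-CNN's internal layers, so the sketch below is only one plausible reading: an attention-weighted bilinear pooling head placed on top of a standard CNN backbone, written in PyTorch. The class name AttentiveBilinearHead and all shapes and hyperparameters are illustrative assumptions, not the authors' released implementation.

# Minimal, hypothetical sketch of an attentive bilinear pooling head.
# Assumes a backbone CNN that outputs a (B, C, H, W) feature map; the exact
# AB-CNN architecture from the paper may differ from this illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveBilinearHead(nn.Module):
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        # 1x1 convolution producing a per-location attention score
        self.attention = nn.Conv2d(in_channels, 1, kernel_size=1)
        # Classifier over the flattened bilinear (outer-product) features
        self.classifier = nn.Linear(in_channels * in_channels, num_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feats.shape
        # Soft spatial attention over the H*W locations
        attn = torch.softmax(self.attention(feats).view(b, 1, h * w), dim=-1)
        x = feats.view(b, c, h * w)
        weighted = x * attn  # emphasize salient regions before pooling
        # Bilinear pooling: outer product of attended and raw features, (B, C, C)
        bilinear = torch.bmm(weighted, x.transpose(1, 2)) / (h * w)
        bilinear = bilinear.view(b, -1)
        # Standard signed square-root and L2 normalization for bilinear features
        bilinear = F.normalize(torch.sign(bilinear) * torch.sqrt(bilinear.abs() + 1e-10), dim=-1)
        return self.classifier(bilinear)

For example, with a ResNet-style backbone producing 512-channel feature maps, this head would be instantiated as AttentiveBilinearHead(512, num_events), where num_events is the number of candidate events to verify against.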
Cite
Text
Chen and Davis. "Deep Representation Learning for Metadata Verification." IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, 2019. doi:10.1109/WACVW.2019.00019
Markdown
[Chen and Davis. "Deep Representation Learning for Metadata Verification." IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, 2019.](https://mlanthology.org/wacvw/2019/chen2019wacvw-deep/) doi:10.1109/WACVW.2019.00019
BibTeX
@inproceedings{chen2019wacvw-deep,
title = {{Deep Representation Learning for Metadata Verification}},
author = {Chen, Bor-Chun and Davis, Larry S.},
booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision Workshops},
year = {2019},
pages = {73-82},
doi = {10.1109/WACVW.2019.00019},
url = {https://mlanthology.org/wacvw/2019/chen2019wacvw-deep/}
}