Weakly- and Self-Supervised Learning for Content-Aware Deep Image Retargeting
Abstract
This paper proposes a weakly- and self-supervised deep convolutional neural network (WSSDCNN) for content-aware image retargeting. Our network takes a source image and a target aspect ratio, and then directly outputs a retargeted image. Retargeting is performed through a shift map, which is a pixel-wise mapping from the source to the target grid. Our method implicitly learns an attention map, which leads to a content-aware shift map for image retargeting. As a result, discriminative parts in an image are preserved, while background regions are adjusted seamlessly. In the training phase, pairs of an image and its image-level annotation are used to compute content and structure losses. We demonstrate the effectiveness of our proposed method for a retargeting application with insightful analyses.
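The shift-map idea from the abstract can be illustrated with a minimal sketch: each pixel in the target grid looks up a horizontally shifted pixel in the source image. This is only an illustration of the mapping concept, not the paper's network; the function name, array layout, and nearest-neighbor lookup are assumptions.

```python
import numpy as np

def apply_shift_map(src, shift):
    """Warp a source image onto a narrower target grid via a shift map.

    src:   source image of shape (H, W, C)
    shift: integer horizontal shift per target pixel, shape (H, W_t),
           where W_t is the target width (hypothetical layout)
    """
    H, W_t = shift.shape
    rows = np.arange(H)[:, None]                     # (H, 1) row indices
    # Each target column samples source column (col + shift), clamped to bounds.
    cols = np.clip(np.arange(W_t)[None, :] + shift, 0, src.shape[1] - 1)
    return src[rows, cols.astype(int)]               # (H, W_t, C)
```

A zero shift map reduces to a left crop; a content-aware shift map (as learned by the network in the paper) instead concentrates the shrinkage in background regions while keeping discriminative regions intact.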
Cite
Text
Cho et al. "Weakly- and Self-Supervised Learning for Content-Aware Deep Image Retargeting." International Conference on Computer Vision, 2017. doi:10.1109/ICCV.2017.488
Markdown
[Cho et al. "Weakly- and Self-Supervised Learning for Content-Aware Deep Image Retargeting." International Conference on Computer Vision, 2017.](https://mlanthology.org/iccv/2017/cho2017iccv-weakly/) doi:10.1109/ICCV.2017.488
BibTeX
@inproceedings{cho2017iccv-weakly,
title = {{Weakly- and Self-Supervised Learning for Content-Aware Deep Image Retargeting}},
author = {Cho, Donghyeon and Park, Jinsun and Oh, Tae-Hyun and Tai, Yu-Wing and Kweon, In So},
booktitle = {International Conference on Computer Vision},
year = {2017},
doi = {10.1109/ICCV.2017.488},
url = {https://mlanthology.org/iccv/2017/cho2017iccv-weakly/}
}