Unrolling Loopy Top-Down Semantic Feedback in Convolutional Deep Networks
Abstract
In this paper, we propose a novel way to perform top-down semantic feedback in convolutional deep networks for efficient and accurate image parsing. We also show how to add global appearance/semantic features, which have been shown to improve image parsing performance in state-of-the-art methods but were not present in previous convolutional approaches. The proposed method is characterised by efficient training and sufficiently fast testing. We use the well-known SIFTflow dataset to numerically show the advantages provided by our contributions, and to compare with state-of-the-art convolutional image parsing approaches.
Cite
Text
Gatta et al. "Unrolling Loopy Top-Down Semantic Feedback in Convolutional Deep Networks." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2014. doi:10.1109/CVPRW.2014.80
Markdown
[Gatta et al. "Unrolling Loopy Top-Down Semantic Feedback in Convolutional Deep Networks." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2014.](https://mlanthology.org/cvprw/2014/gatta2014cvprw-unrolling/) doi:10.1109/CVPRW.2014.80
BibTeX
@inproceedings{gatta2014cvprw-unrolling,
title = {{Unrolling Loopy Top-Down Semantic Feedback in Convolutional Deep Networks}},
author = {Gatta, Carlo and Romero, Adriana and van de Weijer, Joost},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2014},
pages = {504-511},
doi = {10.1109/CVPRW.2014.80},
url = {https://mlanthology.org/cvprw/2014/gatta2014cvprw-unrolling/}
}