InterActive: Inter-Layer Activeness Propagation
Abstract
An increasing number of computer vision tasks can be tackled with deep features, which are the intermediate outputs of a pre-trained Convolutional Neural Network. Despite the astonishing performance, deep features extracted from low-level neurons remain unsatisfactory, arguably because they cannot access the spatial context contained in higher layers. In this paper, we present InterActive, a novel algorithm which computes the activeness of neurons and network connections. Activeness is propagated through a neural network in a top-down manner, carrying high-level context and improving the descriptive power of low-level and mid-level neurons. Visualization indicates that neuron activeness can be interpreted as spatially weighted neuron responses. We achieve state-of-the-art classification performance on a wide range of image datasets.
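The abstract's interpretation of activeness as spatially weighted neuron responses can be illustrated with a toy sketch. The code below is not the paper's formulation; it simply shows the general idea of using an upsampled high-level activation map as per-location weights for low-level responses. All names (`spatially_weighted_descriptor`, the feature shapes) are hypothetical.

```python
import numpy as np

def spatially_weighted_descriptor(low_feat, high_act):
    """Pool low-level responses into a descriptor, weighted by a
    coarse high-level activation map (toy illustration only).

    low_feat: (C, H, W) low-level neuron responses
    high_act: (h, w) activation map from a deeper layer
    """
    C, H, W = low_feat.shape
    # Nearest-neighbor upsample the coarse map to the (H, W) grid.
    rows = np.arange(H) * high_act.shape[0] // H
    cols = np.arange(W) * high_act.shape[1] // W
    weights = high_act[np.ix_(rows, cols)]        # (H, W) spatial weights
    weights = weights / (weights.sum() + 1e-8)    # normalize to sum to 1
    # Weighted average over spatial locations -> (C,) image descriptor.
    return (low_feat * weights[None]).reshape(C, -1).sum(axis=1)

low = np.random.rand(64, 28, 28)   # hypothetical conv feature map
high = np.random.rand(7, 7)        # hypothetical deeper-layer activations
desc = spatially_weighted_descriptor(low, high)
print(desc.shape)                  # (64,)
```

In this sketch, locations emphasized by the deeper layer contribute more to the pooled descriptor, which is the intuition behind carrying high-level context down to low-level and mid-level neurons.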
Cite
Text
Xie et al. "InterActive: Inter-Layer Activeness Propagation." Conference on Computer Vision and Pattern Recognition, 2016. doi:10.1109/CVPR.2016.36
Markdown
[Xie et al. "InterActive: Inter-Layer Activeness Propagation." Conference on Computer Vision and Pattern Recognition, 2016.](https://mlanthology.org/cvpr/2016/xie2016cvpr-interactive/) doi:10.1109/CVPR.2016.36
BibTeX
@inproceedings{xie2016cvpr-interactive,
title = {{InterActive: Inter-Layer Activeness Propagation}},
author = {Xie, Lingxi and Zheng, Liang and Wang, Jingdong and Yuille, Alan L. and Tian, Qi},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2016},
doi = {10.1109/CVPR.2016.36},
url = {https://mlanthology.org/cvpr/2016/xie2016cvpr-interactive/}
}