Guided Perturbations: Self-Corrective Behavior in Convolutional Neural Networks
Abstract
Convolutional Neural Networks have been a subject of great importance over the past decade, and great strides have been made in their utility for producing state-of-the-art performance in many computer vision problems. However, the behavior of deep networks is yet to be fully understood and is still an active area of research. In this work, we present an intriguing behavior: pre-trained CNNs can be made to improve their predictions by structurally perturbing the input. We observe that these perturbations - referred to as Guided Perturbations - enable a trained network to improve its prediction performance without any learning or change in network weights. We perform various ablative experiments to understand how these perturbations affect the local context and feature representations. Furthermore, we demonstrate that this idea can improve the performance of several existing approaches on semantic segmentation and scene labeling tasks on the PASCAL VOC dataset, and on supervised classification tasks on the MNIST and CIFAR10 datasets.
Cite
Text
Sankaranarayanan et al. "Guided Perturbations: Self-Corrective Behavior in Convolutional Neural Networks." International Conference on Computer Vision, 2017. doi:10.1109/ICCV.2017.385
Markdown
[Sankaranarayanan et al. "Guided Perturbations: Self-Corrective Behavior in Convolutional Neural Networks." International Conference on Computer Vision, 2017.](https://mlanthology.org/iccv/2017/sankaranarayanan2017iccv-guided/) doi:10.1109/ICCV.2017.385
BibTeX
@inproceedings{sankaranarayanan2017iccv-guided,
title = {{Guided Perturbations: Self-Corrective Behavior in Convolutional Neural Networks}},
author = {Sankaranarayanan, Swami and Jain, Arpit and Lim, Ser Nam},
booktitle = {International Conference on Computer Vision},
year = {2017},
doi = {10.1109/ICCV.2017.385},
url = {https://mlanthology.org/iccv/2017/sankaranarayanan2017iccv-guided/}
}