Information Theoretic Learning for Pixel-Based Visual Agents
Abstract
In this paper we promote the idea of using pixel-based models not only for low-level vision, but also to extract high-level symbolic representations. We use a deep architecture whose distinctive property is that its computational units incorporate classic computer vision invariances, most notably scale invariance. The proposed learning algorithm, which is based on information theory principles, develops the parameters of the computational units and, at the same time, makes it possible to detect the optimal scale for each pixel. We give experimental evidence of the mechanism of feature extraction at the first level of the hierarchy, which produces features closely related to SIFT. The comparison clearly shows that, whenever massive training data are available, the proposed model outperforms SIFT.
Cite
Text
Gori et al. "Information Theoretic Learning for Pixel-Based Visual Agents." European Conference on Computer Vision, 2012. doi:10.1007/978-3-642-33783-3_62
Markdown
[Gori et al. "Information Theoretic Learning for Pixel-Based Visual Agents." European Conference on Computer Vision, 2012.](https://mlanthology.org/eccv/2012/gori2012eccv-information/) doi:10.1007/978-3-642-33783-3_62
BibTeX
@inproceedings{gori2012eccv-information,
title = {{Information Theoretic Learning for Pixel-Based Visual Agents}},
author = {Gori, Marco and Melacci, Stefano and Lippi, Marco and Maggini, Marco},
booktitle = {European Conference on Computer Vision},
year = {2012},
pages = {864--875},
doi = {10.1007/978-3-642-33783-3_62},
url = {https://mlanthology.org/eccv/2012/gori2012eccv-information/}
}