Learning Lie Groups for Invariant Visual Perception
Abstract
One of the most important problems in visual perception is that of visual invariance: how are objects perceived to be the same despite undergoing transformations such as translations, rotations or scaling? In this paper, we describe a Bayesian method for learning invariances based on Lie group theory. We show that previous approaches based on first-order Taylor series expansions of inputs can be regarded as special cases of the Lie group approach, the latter being capable of handling in principle arbitrarily large transformations. Using a matrix-exponential based generative model of images, we derive an unsupervised algorithm for learning Lie group operators from input data containing infinitesimal transformations. The on-line unsupervised learning algorithm maximizes the posterior probability of generating the training data. We provide experimental results suggesting that the proposed method can learn Lie group operators for handling reasonably large 1-D translations and 2-D rotations.
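The matrix-exponential generative model mentioned in the abstract can be sketched concretely: an image I0 transformed by an amount s under a Lie group with generator G is modeled as expm(s*G) @ I0, while the first-order Taylor expansion (I + s*G) @ I0 corresponds to the earlier infinitesimal-transformation approaches. The snippet below is a minimal illustration under assumed details (a hypothetical finite-difference generator for 1-D translation; this is not the authors' code, and it does not include their posterior-maximizing learning algorithm):

```python
# Minimal sketch of a matrix-exponential transformation model (illustrative only).
# G is an assumed circulant finite-difference (d/dx) generator whose exponential
# approximately translates a smooth 1-D signal; all names here are hypothetical.
import numpy as np
from scipy.linalg import expm

N = 64
G = np.zeros((N, N))
for i in range(N):
    G[i, (i + 1) % N] = 0.5   # central-difference approximation of d/dx,
    G[i, (i - 1) % N] = -0.5  # with periodic (circulant) boundary conditions

# A smooth "image": a 1-D Gaussian bump.
I0 = np.exp(-0.5 * ((np.arange(N) - N / 2) / 3.0) ** 2)

s = 5.0                                  # transformation amount
I_full = expm(s * G) @ I0                # full Lie-group (matrix-exponential) transform
I_taylor = (np.eye(N) + s * G) @ I0      # first-order Taylor approximation

# For small s the two agree; for larger s only the matrix exponential keeps
# tracking the translation, which is the advantage the abstract attributes
# to the Lie group approach over first-order expansions.
print(np.linalg.norm(I_full - I_taylor))
```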
Cite
Text
Rao and Ruderman. "Learning Lie Groups for Invariant Visual Perception." Neural Information Processing Systems, 1998.
Markdown
[Rao and Ruderman. "Learning Lie Groups for Invariant Visual Perception." Neural Information Processing Systems, 1998.](https://mlanthology.org/neurips/1998/rao1998neurips-learning/)
BibTeX
@inproceedings{rao1998neurips-learning,
title = {{Learning Lie Groups for Invariant Visual Perception}},
author = {Rao, Rajesh P. N. and Ruderman, Daniel L.},
booktitle = {Neural Information Processing Systems},
year = {1998},
pages = {810-816},
url = {https://mlanthology.org/neurips/1998/rao1998neurips-learning/}
}