Learning Saccadic Eye Movements Using Multiscale Spatial Filters
Abstract
We describe a framework for learning saccadic eye movements using a photometric representation of target points in natural scenes. The representation takes the form of a high-dimensional vector comprised of the responses of spatial filters at different orientations and scales. We first demonstrate the use of this response vector in the task of locating previously foveated points in a scene and subsequently use this property in a multisaccade strategy to derive an adaptive motor map for delivering accurate saccades.
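The sketch below (not the authors' code) illustrates the kind of photometric representation the abstract describes: a vector of oriented-filter responses at several scales, sampled at a target point, and then used to relocate a previously foveated point by nearest-neighbour matching in filter-response space. The choice of Gabor-like kernels, the particular scales and orientation counts, and the matching rule are assumptions for illustration only.

```python
# A minimal sketch, assuming Gabor-like oriented filters at a few scales.
# Not the paper's implementation; parameters and matching rule are illustrative.
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real (even) Gabor kernel of a given size, orientation, and scale."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    k = envelope * carrier
    return k - k.mean()                          # zero-mean for contrast invariance

def response_vector(image, row, col, scales=(4, 8, 16), n_orient=4, size=31):
    """Responses of oriented filters at several scales, centred on (row, col)."""
    half = size // 2
    patch = image[row - half:row + half + 1, col - half:col + half + 1]
    responses = []
    for sigma in scales:
        for i in range(n_orient):
            theta = i * np.pi / n_orient
            k = gabor_kernel(size, wavelength=2.0 * sigma, theta=theta, sigma=sigma)
            responses.append(float(np.sum(patch * k)))
    return np.asarray(responses)

# Toy usage: remember the response vector at a "foveated" point, then find
# the best-matching point later by nearest neighbour in filter-response space.
rng = np.random.default_rng(0)
scene = rng.standard_normal((128, 128))
target = response_vector(scene, 64, 64)

best, best_dist = None, np.inf
for r in range(20, 108, 4):
    for c in range(20, 108, 4):
        d = np.linalg.norm(response_vector(scene, r, c) - target)
        if d < best_dist:
            best, best_dist = (r, c), d
print("best match:", best)   # recovers (64, 64) on this toy scene
```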
Cite
Text
Rao and Ballard. "Learning Saccadic Eye Movements Using Multiscale Spatial Filters." Neural Information Processing Systems, 1994.
Markdown
[Rao and Ballard. "Learning Saccadic Eye Movements Using Multiscale Spatial Filters." Neural Information Processing Systems, 1994.](https://mlanthology.org/neurips/1994/rao1994neurips-learning/)
BibTeX
@inproceedings{rao1994neurips-learning,
title = {{Learning Saccadic Eye Movements Using Multiscale Spatial Filters}},
author = {Rao, Rajesh P. N. and Ballard, Dana H.},
booktitle = {Neural Information Processing Systems},
year = {1994},
pages = {893-900},
url = {https://mlanthology.org/neurips/1994/rao1994neurips-learning/}
}