IRV: Learning to Integrate Visual Information Across Camera Movements
Abstract
…each pixel in the image. This relationship between corresponding pairs of visual points and camera movement vectors is stored in a representation we call the visual motor calibration map. The map is filled over time from natural observations during development. Such table-based techniques for perceptual-motor development have been used to learn hand-eye coordination [Mel, 1990] and dynamic arm control policies [Atkeson, 1990]. For example, Mel's MURPHY memorized the relationship between the visual position of key points on its arm and the joint angles of the arm in that position. The individual experiences that IRV uses to fill its table are the visual shifts of pixels between successive images. This correspondence is fundamentally ambiguous and cannot be determined from any single example. IRV overcomes the ambiguity by accumulating evidence from every repetition of each possible camera movement. Effectively, every apparent pixel correspondence (there…
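The abstract gives no implementation details, but the evidence-accumulation idea can be sketched roughly as follows: for each repeated camera movement, tally the apparent pixel shifts observed between successive frames, and let the shift that recurs consistently dominate the calibration map. Everything below (the function and class names, the block-matching shift estimate, the vote-count data structure) is an illustrative assumption, not the authors' code.

```python
# Illustrative sketch only (assumed, not from the paper): accumulate votes
# for the image shift produced by each discrete camera movement.
from collections import defaultdict

import numpy as np


def candidate_shifts(before, after, max_shift=5):
    """Return all integer shifts that explain the image pair almost equally well.

    A single pair is ambiguous, so every near-best shift is kept as a candidate.
    Wrap-around at the image border is ignored for simplicity.
    """
    before = np.asarray(before, dtype=float)
    after = np.asarray(after, dtype=float)
    errors = {}
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(before, (dy, dx), axis=(0, 1))
            errors[(dy, dx)] = float(np.mean((shifted - after) ** 2))
    best = min(errors.values())
    return [s for s, e in errors.items() if e <= 1.05 * best + 1e-9]


class CalibrationMap:
    """Maps each camera-movement command to its most consistently observed shift."""

    def __init__(self):
        self.votes = defaultdict(lambda: defaultdict(int))

    def observe(self, movement, before, after):
        # Each repetition of a movement votes for all its candidate shifts;
        # the true shift recurs across repetitions, spurious matches do not.
        for shift in candidate_shifts(before, after):
            self.votes[movement][shift] += 1

    def shift_for(self, movement):
        tallies = self.votes[movement]
        return max(tallies, key=tallies.get) if tallies else None
```

After many natural observations, `shift_for(movement)` returns the displacement most consistently associated with that movement, which is the kind of entry the visual motor calibration map described above would hold.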
Cite
Prokopowicz and Cooper. "IRV: Learning to Integrate Visual Information Across Camera Movements." International Joint Conference on Artificial Intelligence, 1995.
BibTeX
@inproceedings{prokopowicz1995ijcai-irv,
title = {{IRV: Learning to Integrate Visual Information Across Camera Movements}},
author = {Prokopowicz, Peter N. and Cooper, Paul R.},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {1995},
pages = {2051-2052},
url = {https://mlanthology.org/ijcai/1995/prokopowicz1995ijcai-irv/}
}