A POMDP Model of Eye-Hand Coordination
Abstract
This paper presents a generative model of eye-hand coordination. We use numerical optimization to solve for the joint behavior of an eye and two hands, deriving a predicted motion pattern from first principles, without imposing heuristics. We model the planar scene as a POMDP with 17 continuous state dimensions. Belief-space optimization is facilitated by using a nominal-belief heuristic, whereby we assume (during planning) that the maximum-likelihood observation is always obtained. Since a globally-optimal solution for such a high-dimensional domain is computationally intractable, we employ local optimization in the belief domain. By solving for a locally-optimal plan through belief space, we generate a motion pattern of mutual coordination between hands and eye: the eye's saccades disambiguate the scene in a task-relevant manner, and the hands' motions anticipate the eye's saccades. Finally, the model is validated through a behavioral experiment, in which human subjects perform the same eye-hand coordination task. We show that the simulated behavior is congruent with the experimental results.
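To make the nominal-belief (maximum-likelihood observation) assumption concrete, the following Python sketch propagates a Gaussian belief with an EKF-style update in which each future observation is assumed to equal its predicted value. This is an illustrative reconstruction, not the paper's code: the function names, the dynamics f and its Jacobian F_jac, the observation model h and its Jacobian H_jac, and the noise covariances W and V are all placeholders, and the paper's actual model has 17 state dimensions with specific eye/hand dynamics not reproduced here.

import numpy as np

def nominal_belief_rollout(mean, cov, controls, f, F_jac, H_jac, W, V):
    """Propagate a Gaussian belief (mean, cov) through a control sequence
    under the maximum-likelihood-observation assumption: each future
    observation is taken to equal its predicted value, so the EKF
    innovation is zero, the mean follows the noiseless dynamics, and
    only the covariance is updated by measurements."""
    beliefs = [(mean.copy(), cov.copy())]
    for u in controls:
        # Prediction step: linearize the dynamics at the current mean,
        # then push the mean and covariance through them.
        F = F_jac(mean, u)
        mean = f(mean, u)
        cov = F @ cov @ F.T + W          # W: process-noise covariance
        # Measurement update: with the observation assumed equal to its
        # prediction, the innovation vanishes and the mean is unchanged,
        # but the covariance still contracts. This contraction is what
        # lets the planner value information-gathering actions such as
        # saccades that disambiguate task-relevant parts of the scene.
        H = H_jac(mean)
        S = H @ cov @ H.T + V            # V: observation-noise covariance
        K = cov @ H.T @ np.linalg.inv(S)
        cov = (np.eye(len(mean)) - K @ H) @ cov
        beliefs.append((mean.copy(), cov.copy()))
    return beliefs

Because the assumed observation makes the belief rollout deterministic, standard trajectory-optimization machinery can be applied directly to a cost defined over this belief trajectory, which is what enables local optimization in such a high-dimensional belief space.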
Cite
Erez et al. "A POMDP Model of Eye-Hand Coordination." AAAI Conference on Artificial Intelligence, 2011, pp. 952-957. doi:10.1609/AAAI.V25I1.8007
BibTeX
@inproceedings{erez2011aaai-pomdp,
title = {{A POMDP Model of Eye-Hand Coordination}},
author = {Erez, Tom and Tramper, Julian J. and Smart, William D. and Gielen, Stan C. A. M.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2011},
pages = {952--957},
doi = {10.1609/AAAI.V25I1.8007},
url = {https://mlanthology.org/aaai/2011/erez2011aaai-pomdp/}
}