Visual Task Inference Using Hidden Markov Models

Abstract

It has long been known that visual tasks, such as reading, counting, and searching, greatly influence eye movement patterns. Perhaps the best-known demonstration of this is the celebrated study of Yarbus showing that different eye movement trajectories emerge depending on the visual task that the viewers are given. The objective of this paper is to develop an inverse Yarbus process whereby we can infer the visual task by observing a viewer’s eye movements while the task is being executed. The method we propose is to use Hidden Markov Models (HMMs) to create a probabilistic framework for inferring the viewer’s task from eye movements.
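The inverse-Yarbus idea can be sketched as a maximum-likelihood classification: fit one HMM per candidate task, then attribute a new scanpath to whichever task's model assigns it the highest likelihood. Below is a minimal illustration, not the authors' trained models: the hand-set parameters, the two hypothetical tasks ("reading" vs. "searching"), and the discretization of fixations into three screen regions are all assumptions made for the sake of the example.

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm for numerical stability.
    obs: sequence of observation symbols (ints)
    pi:  initial state distribution, shape (n_states,)
    A:   state transition matrix, shape (n_states, n_states)
    B:   emission matrix, shape (n_states, n_symbols)
    """
    alpha = pi * B[:, obs[0]]          # forward variable at t = 0
    c = alpha.sum()
    log_p = np.log(c)
    alpha /= c                          # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate and weight by emission
        c = alpha.sum()
        log_p += np.log(c)              # accumulate log of scale factors
        alpha /= c
    return log_p

# Hypothetical per-task HMMs over 3 screen regions (left, middle, right).
# "Reading" favors orderly left-to-right sweeps; "searching" jumps around.
B = np.array([[0.8, 0.1, 0.1],          # emissions: hidden state mostly
              [0.1, 0.8, 0.1],          # produces fixations in its own region
              [0.1, 0.1, 0.8]])
pi_read = np.array([0.8, 0.1, 0.1])     # reading starts at the left
A_read = np.array([[0.2, 0.7, 0.1],     # strong forward progression
                   [0.1, 0.2, 0.7],
                   [0.7, 0.1, 0.2]])    # wrap-around models a return sweep
pi_search = np.full(3, 1 / 3)           # searching: no positional preference
A_search = np.full((3, 3), 1 / 3)       # near-random region-to-region moves

# An orderly left-to-right scanpath, discretized into region indices.
scan = [0, 1, 2, 0, 1, 2]
scores = {"reading": log_likelihood(scan, pi_read, A_read, B),
          "searching": log_likelihood(scan, pi_search, A_search, B)}
inferred_task = max(scores, key=scores.get)
```

Under these assumed parameters, the sweeping scanpath scores higher under the reading HMM, so `inferred_task` comes out as `"reading"`; a jumpier sequence would tilt toward the search model.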

Cite

Text

Abolhassani and Clark. "Visual Task Inference Using Hidden Markov Models." International Joint Conference on Artificial Intelligence, 2011. doi:10.5591/978-1-57735-516-8/IJCAI11-282

Markdown

[Abolhassani and Clark. "Visual Task Inference Using Hidden Markov Models." International Joint Conference on Artificial Intelligence, 2011.](https://mlanthology.org/ijcai/2011/abolhassani2011ijcai-visual/) doi:10.5591/978-1-57735-516-8/IJCAI11-282

BibTeX

@inproceedings{abolhassani2011ijcai-visual,
  title     = {{Visual Task Inference Using Hidden Markov Models}},
  author    = {Abolhassani, Amin Haji and Clark, James J.},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2011},
  pages     = {1678--1683},
  doi       = {10.5591/978-1-57735-516-8/IJCAI11-282},
  url       = {https://mlanthology.org/ijcai/2011/abolhassani2011ijcai-visual/}
}