Prediction of Search Targets from Fixations in Open-World Settings
Abstract
Previous work on predicting the target of visual search from human fixations only considered closed-world settings in which training labels are available and predictions are performed for a known set of potential targets. In this work we go beyond the state of the art by studying search target prediction in an open-world setting in which we no longer assume that we have fixation data to train for the search targets. We present a dataset containing fixation data of 18 users searching for natural images from three image categories within synthesised image collages of about 80 images. In a closed-world baseline experiment we show that we can predict the correct target image out of a candidate set of five images. We then present a new problem formulation for search target prediction in the open-world setting that is based on learning compatibilities between fixations and potential targets.
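To make the idea of learning compatibilities between fixations and potential targets more concrete, below is a minimal sketch, not the authors' implementation: it assumes simple vector features for fixations and candidate targets, a bilinear score fᵀWt, and a margin-based ranking update. The feature dimensions, the bilinear form, and the update rule are all illustrative assumptions rather than details taken from the paper.

```python
# Hedged sketch (not the paper's method): learn a bilinear compatibility
# score(f, t) = f^T W t between a fixation feature f and a candidate target
# feature t. Dimensions, features, and the margin update are assumptions.
import numpy as np

rng = np.random.default_rng(0)
D_FIX, D_TGT = 32, 32          # assumed feature dimensions
W = np.zeros((D_FIX, D_TGT))   # compatibility matrix to be learned

def score(fix_feat, tgt_feat, W):
    """Bilinear compatibility between one fixation feature and one target."""
    return fix_feat @ W @ tgt_feat

def train_step(fix_feat, pos_tgt, neg_tgt, W, lr=0.1, margin=1.0):
    """Margin ranking update: push the true target above a negative one."""
    if score(fix_feat, pos_tgt, W) - score(fix_feat, neg_tgt, W) < margin:
        # Gradient of the violated hinge constraint w.r.t. W.
        W += lr * (np.outer(fix_feat, pos_tgt) - np.outer(fix_feat, neg_tgt))
    return W

# Toy usage: synthetic features standing in for real fixation/target data.
true_tgt = rng.normal(size=D_TGT)
neg_tgts = rng.normal(size=(4, D_TGT))
for _ in range(200):
    # Assume fixation features are noisy views of the searched-for target.
    fix = true_tgt[:D_FIX] + 0.3 * rng.normal(size=D_FIX)
    W = train_step(fix, true_tgt, neg_tgts[rng.integers(4)], W)

# Ranking at test time: score each candidate target and pick the best match.
candidates = np.vstack([true_tgt, neg_tgts])
fix = true_tgt[:D_FIX] + 0.3 * rng.normal(size=D_FIX)
print("predicted candidate index:",
      int(np.argmax([score(fix, c, W) for c in candidates])))
```

Because the learned W scores arbitrary candidate features, the same ranking step can in principle be applied to targets that were never seen during training, which is the gist of the open-world setting described in the abstract.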
Cite
Text
Sattar et al. "Prediction of Search Targets from Fixations in Open-World Settings." Conference on Computer Vision and Pattern Recognition, 2015. doi:10.1109/CVPR.2015.7298700Markdown
[Sattar et al. "Prediction of Search Targets from Fixations in Open-World Settings." Conference on Computer Vision and Pattern Recognition, 2015.](https://mlanthology.org/cvpr/2015/sattar2015cvpr-prediction/) doi:10.1109/CVPR.2015.7298700BibTeX
@inproceedings{sattar2015cvpr-prediction,
title = {{Prediction of Search Targets from Fixations in Open-World Settings}},
author = {Sattar, Hosnieh and M{\"u}ller, Sabine and Fritz, Mario and Bulling, Andreas},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2015},
doi = {10.1109/CVPR.2015.7298700},
url = {https://mlanthology.org/cvpr/2015/sattar2015cvpr-prediction/}
}