Learning to Rank Images from Eye Movements
Abstract
Combining multiple information sources can improve the accuracy of search in information retrieval. This paper presents a new image search strategy that combines image features with implicit feedback from users' eye movements, using them to rank images. In order to better handle larger data sets, we present a perceptron formulation of the Ranking Support Vector Machine algorithm. We present initial results on inferring the rank of images presented on a page from simple image features and implicit user feedback. The results show that the perceptron formulation improves performance, and that fusing eye movements and image histograms yields better rankings than either feature alone.
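The perceptron formulation the abstract refers to can be illustrated with a generic pairwise ranking perceptron: for every pair of items where one should rank above the other, if the current weight vector scores them in the wrong order, the weights are updated with the feature difference. This is only a minimal sketch of the general technique, not the paper's exact algorithm (which operates on fused eye-movement and histogram features and a RankSVM-style objective); the function name and toy data below are hypothetical.

```python
import numpy as np

def train_ranking_perceptron(X, ranks, epochs=50):
    """Pairwise ranking perceptron sketch.

    X     : (n, d) feature matrix, one row per image (e.g. a simple
            concatenation of histogram and eye-movement features --
            an assumed fusion scheme, not the paper's).
    ranks : sequence of target ranks; a lower value means a better rank.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        mistakes = 0
        for i in range(n):
            for j in range(n):
                if ranks[i] < ranks[j]:          # image i should outscore j
                    if w @ (X[i] - X[j]) <= 0:   # wrong pairwise order
                        w += X[i] - X[j]         # perceptron update
                        mistakes += 1
        if mistakes == 0:                        # all pairs correctly ordered
            break
    return w

# Toy usage: three "images" whose true rank follows the first feature.
X = np.array([[1.0, 0.2],
              [0.5, 0.1],
              [0.1, 0.9]])
w = train_ranking_perceptron(X, ranks=[1, 2, 3])
scores = X @ w  # higher score = better predicted rank
```

At test time, images are simply sorted by `X @ w`, which is the same linear scoring rule a RankSVM would produce, minus the margin and regularization.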
Cite
Text
Pasupa et al. "Learning to Rank Images from Eye Movements." IEEE/CVF International Conference on Computer Vision Workshops, 2009. doi:10.1109/ICCVW.2009.5457528
Markdown
[Pasupa et al. "Learning to Rank Images from Eye Movements." IEEE/CVF International Conference on Computer Vision Workshops, 2009.](https://mlanthology.org/iccvw/2009/pasupa2009iccvw-learning/) doi:10.1109/ICCVW.2009.5457528
BibTeX
@inproceedings{pasupa2009iccvw-learning,
title = {{Learning to Rank Images from Eye Movements}},
author = {Pasupa, Kitsuchart and Saunders, Craig and Szedmák, Sándor and Klami, Arto and Kaski, Samuel and Gunn, Steve R.},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2009},
pages = {2009--2016},
doi = {10.1109/ICCVW.2009.5457528},
url = {https://mlanthology.org/iccvw/2009/pasupa2009iccvw-learning/}
}