Saliency Prediction for Mobile User Interfaces

Abstract

We introduce models for saliency prediction on mobile user interfaces. A mobile interface may contain elements such as buttons and text, in addition to natural images, that together enable a variety of tasks. Saliency in natural images is a well-studied topic. However, given the differences in what constitutes a mobile interface and the usage context of these devices, we postulate that saliency prediction for mobile interface images requires a fresh approach. Mobile interface design involves operating on elements, the building blocks of the interface. We first collected eye-gaze data from mobile devices for a free-viewing task. Using this data, we developed a novel autoencoder-based multi-scale deep learning model that predicts saliency at the level of mobile interface elements. We show that our approach performs significantly better than saliency prediction approaches developed for natural images on a range of established metrics.

Cite

Text

Gupta et al. "Saliency Prediction for Mobile User Interfaces." IEEE/CVF Winter Conference on Applications of Computer Vision, 2018. doi:10.1109/WACV.2018.00171

Markdown

[Gupta et al. "Saliency Prediction for Mobile User Interfaces." IEEE/CVF Winter Conference on Applications of Computer Vision, 2018.](https://mlanthology.org/wacv/2018/gupta2018wacv-saliency/) doi:10.1109/WACV.2018.00171

BibTeX

@inproceedings{gupta2018wacv-saliency,
  title     = {{Saliency Prediction for Mobile User Interfaces}},
  author    = {Gupta, Prakhar and Gupta, Shubh and Jayagopal, Ajaykrishnan and Pal, Sourav and Sinha, Ritwik},
  booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
  year      = {2018},
  pages     = {1529--1538},
  doi       = {10.1109/WACV.2018.00171},
  url       = {https://mlanthology.org/wacv/2018/gupta2018wacv-saliency/}
}