Further Results on Predicting Cognitive Abilities for Adaptive Visualizations
Abstract
Previous work has shown that some user cognitive abilities relevant for processing information visualizations can be predicted from eye-tracking data. Performing this type of user modeling is important for devising user-adaptive visualizations that can adapt to a user's abilities as needed during the interaction. In this paper, we contribute to previous work by extending both the types of visualizations considered and the set of cognitive abilities that can be predicted from gaze data, thus providing evidence on the generality of these findings. We also evaluate how the quality of gaze data impacts prediction.
Cite
Text
Conati et al. "Further Results on Predicting Cognitive Abilities for Adaptive Visualizations." International Joint Conference on Artificial Intelligence, 2017. doi:10.24963/IJCAI.2017/217

Markdown
[Conati et al. "Further Results on Predicting Cognitive Abilities for Adaptive Visualizations." International Joint Conference on Artificial Intelligence, 2017.](https://mlanthology.org/ijcai/2017/conati2017ijcai-further/) doi:10.24963/IJCAI.2017/217

BibTeX
@inproceedings{conati2017ijcai-further,
title = {{Further Results on Predicting Cognitive Abilities for Adaptive Visualizations}},
author = {Conati, Cristina and Lallé, Sébastien and Rahman, Md. Abed and Toker, Dereck},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2017},
  pages = {1568--1574},
doi = {10.24963/IJCAI.2017/217},
url = {https://mlanthology.org/ijcai/2017/conati2017ijcai-further/}
}