Multimedia Data for the Visually Impaired
Abstract
The Web contains a large amount of information in the form of videos that remains inaccessible to visually impaired people. We identify a class of videos whose information content can be approximately encoded as audio, thereby increasing the amount of accessible video content. We propose a model to automatically identify such videos. Our model jointly relies on the textual metadata and the visual content of the video. We use this model to re-rank YouTube video search results based on the accessibility of each video. We present preliminary results from a user study with visually impaired people that measures the effectiveness of our system.
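The abstract describes re-ranking search results by an accessibility score derived jointly from textual metadata and visual content. Below is a minimal, hypothetical sketch of that idea: it assumes the model already produces a text-based score and a visual-based score per video and combines them with a simple weighted sum before sorting. All names, scores, and the weighting scheme are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    text_score: float    # assumed accessibility score from textual metadata, in [0, 1]
    visual_score: float  # assumed accessibility score from visual content, in [0, 1]

def accessibility_score(v: Video, alpha: float = 0.5) -> float:
    """Combine the two signals with a simple convex weight (an assumption)."""
    return alpha * v.text_score + (1.0 - alpha) * v.visual_score

def rerank(results: list[Video], alpha: float = 0.5) -> list[Video]:
    """Re-order search results so more accessible videos appear first."""
    return sorted(results, key=lambda v: accessibility_score(v, alpha), reverse=True)

if __name__ == "__main__":
    # Toy search results: a speech-heavy lecture should outrank a purely visual clip.
    results = [
        Video("lecture_talk", text_score=0.9, visual_score=0.8),
        Video("silent_dance", text_score=0.2, visual_score=0.1),
        Video("cooking_demo", text_score=0.6, visual_score=0.4),
    ]
    for v in rerank(results):
        print(v.video_id, round(accessibility_score(v), 2))
```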
Cite
Text
Tandon et al. "Multimedia Data for the Visually Impaired." AAAI Conference on Artificial Intelligence, 2015. doi:10.1609/AAAI.V29I1.9727
Markdown
[Tandon et al. "Multimedia Data for the Visually Impaired." AAAI Conference on Artificial Intelligence, 2015.](https://mlanthology.org/aaai/2015/tandon2015aaai-multimedia/) doi:10.1609/AAAI.V29I1.9727
BibTeX
@inproceedings{tandon2015aaai-multimedia,
title = {{Multimedia Data for the Visually Impaired}},
author = {Tandon, Niket and Sharma, Shekhar and Makkad, Tanima},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2015},
  pages = {4210--4211},
doi = {10.1609/AAAI.V29I1.9727},
url = {https://mlanthology.org/aaai/2015/tandon2015aaai-multimedia/}
}