Object Referring in Visual Scene with Spoken Language

Abstract

Object referring has important applications, especially for human-machine interaction. While the task has received great attention, it has mainly been attacked with written language (text) as input rather than spoken language (speech), which is more natural. This paper investigates Object Referring with Spoken Language (ORSpoken) by presenting two datasets and one novel approach. Objects are annotated with their locations in images, text descriptions, and speech descriptions, making the datasets ideal for multi-modality learning. The approach is developed by carefully breaking down the ORSpoken problem into three sub-problems and introducing task-specific vision-language interactions at the corresponding levels. Experiments show that our method consistently and significantly outperforms competing methods. The approach is also evaluated in the presence of audio noise, demonstrating the efficacy of the proposed vision-language interactions in counteracting background noise.

Cite

Text

Vasudevan and Dai. "Object Referring in Visual Scene with Spoken Language." IEEE/CVF Winter Conference on Applications of Computer Vision, 2018. doi:10.1109/WACV.2018.00206

Markdown

[Vasudevan and Dai. "Object Referring in Visual Scene with Spoken Language." IEEE/CVF Winter Conference on Applications of Computer Vision, 2018.](https://mlanthology.org/wacv/2018/vasudevan2018wacv-object/) doi:10.1109/WACV.2018.00206

BibTeX

@inproceedings{vasudevan2018wacv-object,
  title     = {{Object Referring in Visual Scene with Spoken Language}},
  author    = {Vasudevan, Arun Balajee and Dai, Dengxin},
  booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
  year      = {2018},
  pages     = {1861--1870},
  doi       = {10.1109/WACV.2018.00206},
  url       = {https://mlanthology.org/wacv/2018/vasudevan2018wacv-object/}
}