Towards Task Understanding in Visual Settings

Abstract

We consider the problem of understanding real-world tasks depicted in visual images. While most existing image captioning methods excel at producing natural-language descriptions of visual scenes involving human tasks, many applications require an understanding of the exact task being undertaken rather than a literal description of the scene. We leverage insights from real-world task understanding systems and propose a framework composed of convolutional neural networks and an external hierarchical task ontology to produce task descriptions from input images. Detailed experiments highlight the efficacy of the extracted descriptions, which could find use in many applications, including image alt-text generation.

Cite

Text

Santy et al. "Towards Task Understanding in Visual Settings." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.330110027

Markdown

[Santy et al. "Towards Task Understanding in Visual Settings." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/santy2019aaai-task/) doi:10.1609/AAAI.V33I01.330110027

BibTeX

@inproceedings{santy2019aaai-task,
  title     = {{Towards Task Understanding in Visual Settings}},
  author    = {Santy, Sebastin and Zulfikar, Wazeer and Mehrotra, Rishabh and Yilmaz, Emine},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {10027--10028},
  doi       = {10.1609/AAAI.V33I01.330110027},
  url       = {https://mlanthology.org/aaai/2019/santy2019aaai-task/}
}