Visual Explanation by High-Level Abduction: On Answer-Set Programming Driven Reasoning About Moving Objects
Abstract
We propose a hybrid architecture for systematically computing robust visual explanation(s) encompassing hypothesis formation, belief revision, and default reasoning with video data. The architecture consists of two tightly integrated synergistic components: (1) (functional) answer set programming-based abductive reasoning with space-time tracklets as native entities; and (2) a visual processing pipeline for detection-based object tracking and motion analysis. We present the formal framework, its general implementation as a (declarative) method in answer set programming, and an example application and evaluation based on two diverse video datasets: the MOTChallenge benchmark developed by the vision community, and a recently developed Movie Dataset.
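To make the abductive reading of component (1) concrete, the following is a minimal, hypothetical clingo-style sketch of abductive tracklet linking in answer set programming. It is not the authors' actual encoding: the predicates tracklet/3, candidate/2, occluded/2, continued/1, and left_scene/1, as well as the sample facts, are illustrative assumptions only.

% Hypothetical input: tracklet(Id, StartFrame, EndFrame).
tracklet(t1, 0, 40).
tracklet(t2, 55, 120).
tracklet(t3, 0, 120).

% T2 is a candidate continuation of T1 if it starts after T1 ends.
candidate(T1, T2) :- tracklet(T1, _, E1), tracklet(T2, S2, _), T1 != T2, E1 < S2.

% Abduce at most one continuation per tracklet (the object was occluded
% and reappears as T2); otherwise assume it left the scene.
{ occluded(T1, T2) : candidate(T1, T2) } 1 :- tracklet(T1, _, _).
continued(T1) :- occluded(T1, _).
left_scene(T1) :- tracklet(T1, _, _), not continued(T1).

% A reappearing tracklet may continue at most one earlier tracklet.
:- occluded(T1, T), occluded(T2, T), T1 != T2.

% Prefer explanations that hypothesise as few scene exits as possible.
#minimize { 1, T : left_scene(T) }.

#show occluded/2.
#show left_scene/1.

Solving such a program with an ASP system like clingo yields (optimal) answer sets in which every terminated tracklet is explained either by an occlusion-based continuation or by leaving the scene, a simplified analogue of the hypothesis formation and preference-based explanation selection described in the abstract.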
Cite
Text
Suchan et al. "Visual Explanation by High-Level Abduction: On Answer-Set Programming Driven Reasoning About Moving Objects." AAAI Conference on Artificial Intelligence, 2018. doi:10.1609/AAAI.V32I1.11569
Markdown
[Suchan et al. "Visual Explanation by High-Level Abduction: On Answer-Set Programming Driven Reasoning About Moving Objects." AAAI Conference on Artificial Intelligence, 2018.](https://mlanthology.org/aaai/2018/suchan2018aaai-visual/) doi:10.1609/AAAI.V32I1.11569
BibTeX
@inproceedings{suchan2018aaai-visual,
title = {{Visual Explanation by High-Level Abduction: On Answer-Set Programming Driven Reasoning About Moving Objects}},
author = {Suchan, Jakob and Bhatt, Mehul and Walega, Przemyslaw Andrzej and Schultz, Carl},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2018},
pages = {1965-1972},
doi = {10.1609/AAAI.V32I1.11569},
url = {https://mlanthology.org/aaai/2018/suchan2018aaai-visual/}
}