Detecting Semantic Anomalies

Abstract

We critically appraise the recent interest in out-of-distribution (OOD) detection and question the practical relevance of existing benchmarks. While the currently prevalent trend is to treat different datasets as OOD, we argue that the out-distributions of practical interest are those where the distinction is semantic in nature for a specified context, and that evaluative tasks should reflect this more closely. Assuming a context of object recognition, we recommend a set of benchmarks motivated by practical applications. We make progress on these benchmarks by exploring a multi-task learning-based approach, showing that auxiliary objectives that encourage semantic awareness also improve semantic anomaly detection, with accompanying generalization benefits.
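The multi-task approach mentioned in the abstract combines a primary objective with an auxiliary one. As a minimal sketch (the specific auxiliary task, its labels, and the `aux_weight` hyperparameter here are illustrative assumptions, not the paper's exact configuration), the training signal can be expressed as a weighted sum of a primary classification loss and an auxiliary loss:

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    # Numerically stable cross-entropy for a single example.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def multi_task_loss(cls_logits, cls_label, aux_logits, aux_label, aux_weight=0.5):
    """Primary classification loss plus a weighted auxiliary loss.

    aux_weight is a hypothetical hyperparameter balancing the two
    objectives; in practice it would be tuned on validation data.
    """
    primary = softmax_cross_entropy(cls_logits, cls_label)
    auxiliary = softmax_cross_entropy(aux_logits, aux_label)
    return primary + aux_weight * auxiliary

# Uniform logits give loss log(num_classes) for each task.
total = multi_task_loss(np.zeros(10), 0, np.zeros(4), 1)
```

The same network would typically share a feature extractor between the two heads, so gradients from the auxiliary head shape the representation used by the primary classifier.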

Cite

Text

Ahmed and Courville. "Detecting Semantic Anomalies." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/aaai.v34i04.5712

Markdown

[Ahmed and Courville. "Detecting Semantic Anomalies." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/ahmed2020aaai-detecting/) doi:10.1609/aaai.v34i04.5712

BibTeX

@inproceedings{ahmed2020aaai-detecting,
  title     = {{Detecting Semantic Anomalies}},
  author    = {Ahmed, Faruk and Courville, Aaron C.},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2020},
  pages     = {3154--3162},
  doi       = {10.1609/aaai.v34i04.5712},
  url       = {https://mlanthology.org/aaai/2020/ahmed2020aaai-detecting/}
}