Mid-Vision Feedback

Abstract

Feedback plays a prominent role in biological vision, where perception is modulated based on agents' evolving expectations and world models. We introduce Mid-Vision Feedback (MVF), a novel mechanism that modulates perception based on high-level categorical expectations. MVF associates high-level contexts with linear transformations. When a context is "expected", its associated linear transformation is applied to feature vectors at a mid level of a network. As a result, mid-level network representations are biased toward conformance with high-level expectations, improving overall accuracy and contextual consistency. Additionally, during training, mid-level feature vectors are biased through the introduction of a loss term that increases the distance between feature vectors associated with different contexts. MVF is agnostic to the source of contextual expectations and can serve as a mechanism for top-down integration of symbolic systems with deep vision architectures. We show that MVF outperforms post-hoc filtering for the incorporation of contextual knowledge, and that configurations using predicted context (when no context is known a priori) outperform configurations with no context awareness.
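
The abstract describes two components: a per-context linear transformation applied to mid-level features when that context is expected, and a training loss that pushes apart features belonging to different contexts. Below is a minimal PyTorch sketch of these two ideas under stated assumptions; the names (`ContextFeedback`, `separation_loss`, `n_contexts`, `feat_dim`) and the exact loss formulation are illustrative, not taken from the paper.

```python
# Hedged sketch of context-conditioned mid-level feedback (not the paper's code).
import torch
import torch.nn as nn


class ContextFeedback(nn.Module):
    """Associates each high-level context with a linear transformation that is
    applied to mid-level feature vectors when that context is expected."""

    def __init__(self, n_contexts: int, feat_dim: int):
        super().__init__()
        # One linear map per context, identity-initialized so that an
        # untrained feedback module leaves features unchanged.
        self.transforms = nn.Parameter(
            torch.eye(feat_dim).unsqueeze(0).repeat(n_contexts, 1, 1))

    def forward(self, feats: torch.Tensor, context_id: int) -> torch.Tensor:
        # feats: (batch, feat_dim) mid-level feature vectors.
        # Bias them toward the expected context via its linear map.
        return feats @ self.transforms[context_id].T


def separation_loss(feats: torch.Tensor, context_ids: torch.Tensor) -> torch.Tensor:
    """Training-time term that increases the distance between mid-level
    features associated with different contexts (a simple pairwise variant)."""
    dists = torch.cdist(feats, feats)                        # pairwise distances
    diff = context_ids.unsqueeze(0) != context_ids.unsqueeze(1)
    if diff.any():
        return -dists[diff].mean()                           # maximize cross-context distance
    return feats.new_zeros(())
```

In use, the feedback module would sit between a backbone's mid-level layer and the remaining layers, with the context id supplied either a priori or by a context predictor, matching the two evaluation settings mentioned in the abstract.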

Cite

Text

Maynord et al. "Mid-Vision Feedback." International Conference on Learning Representations, 2023.

Markdown

[Maynord et al. "Mid-Vision Feedback." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/maynord2023iclr-midvision/)

BibTeX

@inproceedings{maynord2023iclr-midvision,
  title     = {{Mid-Vision Feedback}},
  author    = {Maynord, Michael and Dessalene, Eadom T and Fermuller, Cornelia and Aloimonos, Yiannis},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/maynord2023iclr-midvision/}
}