A Visual Attention Algorithm Designed for Coupled Oscillator Acceleration

Abstract

We present a new top-down and bottom-up saliency algorithm designed to exploit the capabilities of coupled oscillators: an ultra-low-power, high-performance, non-Boolean computer architecture designed to serve as a special-purpose embedded accelerator for vision applications. To do this, we extend a widely used neuromorphic bottom-up saliency pipeline by introducing a top-down channel that looks for objects of a particular type. The proposed channel relies on a segmentation of the input image to identify exemplar object segments resembling those encountered in training. The channel leverages pre-computed bottom-up feature maps to produce a novel scale-invariant descriptor for each segment with little computational overhead. We also introduce a new technique to automatically determine exemplar segments during training, without the need for per-segment annotations. We evaluate our method on both NeoVision2 DARPA challenge datasets, illustrating significant gains in performance compared to all baseline approaches.
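The bottom-up pipeline the abstract refers to is built around multi-scale center-surround feature maps. As a rough illustration of that idea only (not the authors' implementation, and not the coupled-oscillator hardware mapping), the sketch below combines center-surround differences across an image pyramid into a single conspicuity map for one feature channel; all function names here are illustrative.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 average pooling (cropping odd dimensions)."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def upsample_to(img, shape):
    """Nearest-neighbour upsampling back to a reference resolution."""
    while img.shape[0] < shape[0] or img.shape[1] < shape[1]:
        img = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return img[:shape[0], :shape[1]]

def bottom_up_saliency(feature, levels=3):
    """Combine center-surround differences across pyramid scales
    into one conspicuity map for a single feature channel."""
    pyramid = [np.asarray(feature, dtype=float)]
    for _ in range(levels):
        pyramid.append(downsample(pyramid[-1]))
    sal = np.zeros_like(pyramid[0])
    for coarse in pyramid[1:]:
        # center (fine scale) minus surround (coarser scale)
        diff = np.abs(pyramid[0] - upsample_to(coarse, pyramid[0].shape))
        if diff.max() > 0:  # peak-normalize each map before summing
            diff /= diff.max()
        sal += diff
    return sal / levels
```

In a full pipeline, one such map is computed per feature channel (intensity, color opponency, orientation) and the channel maps are normalized and summed; it is these pre-computed feature maps that the paper's top-down channel reuses to build its per-segment descriptors cheaply.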

Cite

Text

Thomas et al. "A Visual Attention Algorithm Designed for Coupled Oscillator Acceleration." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2016. doi:10.1109/CVPRW.2016.108

Markdown

[Thomas et al. "A Visual Attention Algorithm Designed for Coupled Oscillator Acceleration." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2016.](https://mlanthology.org/cvprw/2016/thomas2016cvprw-visual/) doi:10.1109/CVPRW.2016.108

BibTeX

@inproceedings{thomas2016cvprw-visual,
  title     = {{A Visual Attention Algorithm Designed for Coupled Oscillator Acceleration}},
  author    = {Thomas, Christopher and Kovashka, Adriana and Chiarulli, Donald M. and Levitan, Steven P.},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2016},
  pages     = {828--836},
  doi       = {10.1109/CVPRW.2016.108},
  url       = {https://mlanthology.org/cvprw/2016/thomas2016cvprw-visual/}
}