Enriched Deep Recurrent Visual Attention Model for Multiple Object Recognition

Abstract

We design an Enriched Deep Recurrent Visual Attention Model (EDRAM), an improved attention-based architecture for multiple object recognition. The proposed model is a fully differentiable unit that can be optimized end-to-end with Stochastic Gradient Descent (SGD). A Spatial Transformer (ST) is employed as the visual attention mechanism, which allows the model to learn the geometric transformations of objects within images. By combining the Spatial Transformer with a powerful recurrent architecture, the proposed EDRAM can localize and recognize objects simultaneously. EDRAM has been evaluated on two publicly available datasets: MNIST Cluttered (with 70K cluttered digits) and SVHN (with up to 250K real-world images of house numbers). Experiments show that it obtains superior performance compared with state-of-the-art models.
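To make the attention mechanism concrete, the following is a minimal NumPy sketch of the Spatial Transformer's sampling step, which the abstract names as EDRAM's attention component: an affine matrix predicted by the network maps a regular output grid to source coordinates, and the glimpse is read off with bilinear interpolation. The function names, the toy image, and the chosen transformation parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def affine_grid(theta, out_h, out_w):
    """Build a regular grid in normalized coords [-1, 1] and map it
    through the 2x3 affine matrix theta (Spatial Transformer style)."""
    ys, xs = np.meshgrid(
        np.linspace(-1.0, 1.0, out_h),
        np.linspace(-1.0, 1.0, out_w),
        indexing="ij",
    )
    ones = np.ones_like(xs)
    grid = np.stack([xs, ys, ones], axis=-1)  # (H, W, 3) homogeneous coords
    return grid @ theta.T                     # (H, W, 2) source coords

def bilinear_sample(img, grid):
    """Sample a single-channel img at the normalized coords in grid."""
    h, w = img.shape
    # map [-1, 1] back to pixel indices
    x = (grid[..., 0] + 1.0) * (w - 1) / 2.0
    y = (grid[..., 1] + 1.0) * (h - 1) / 2.0
    x0 = np.clip(np.floor(x).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, h - 2)
    dx, dy = x - x0, y - y0
    top = img[y0, x0] * (1 - dx) + img[y0, x0 + 1] * dx
    bot = img[y0 + 1, x0] * (1 - dx) + img[y0 + 1, x0 + 1] * dx
    return top * (1 - dy) + bot * dy

# Toy example: zoom into the centre of a 6x6 image (scale 0.5, no shift).
image = np.arange(36, dtype=float).reshape(6, 6)
theta = np.array([[0.5, 0.0, 0.0],   # [sx, 0, tx]
                  [0.0, 0.5, 0.0]])  # [0, sy, ty]
glimpse = bilinear_sample(image, affine_grid(theta, 3, 3))
```

Because every operation above is differentiable in `theta`, gradients flow from the recognition loss back into the localization parameters, which is what makes end-to-end SGD training of the whole model possible.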

Cite

Text

Ablavatski et al. "Enriched Deep Recurrent Visual Attention Model for Multiple Object Recognition." IEEE/CVF Winter Conference on Applications of Computer Vision, 2017. doi:10.1109/WACV.2017.113

Markdown

[Ablavatski et al. "Enriched Deep Recurrent Visual Attention Model for Multiple Object Recognition." IEEE/CVF Winter Conference on Applications of Computer Vision, 2017.](https://mlanthology.org/wacv/2017/ablavatski2017wacv-enriched/) doi:10.1109/WACV.2017.113

BibTeX

@inproceedings{ablavatski2017wacv-enriched,
  title     = {{Enriched Deep Recurrent Visual Attention Model for Multiple Object Recognition}},
  author    = {Ablavatski, Artsiom and Lu, Shijian and Cai, Jianfei},
  booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
  year      = {2017},
  pages     = {971--978},
  doi       = {10.1109/WACV.2017.113},
  url       = {https://mlanthology.org/wacv/2017/ablavatski2017wacv-enriched/}
}