CREST: Convolutional Residual Learning for Visual Tracking
Abstract
Discriminative correlation filters (DCFs) have been shown to perform superiorly in visual tracking. They only need a small set of training samples from the initial frame to generate an appearance model. However, existing DCFs learn the filters separately from feature extraction, and update these filters using a moving average operation with an empirical weight. These DCF trackers hardly benefit from end-to-end training. In this paper, we propose the CREST algorithm to reformulate DCFs as a one-layer convolutional neural network. Our method integrates feature extraction, response map generation as well as model update into the neural networks for an end-to-end training. To reduce model degradation during online update, we apply residual learning to take appearance changes into account. Extensive experiments on the benchmark datasets demonstrate that our CREST tracker performs favorably against state-of-the-art trackers.
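The core idea in the abstract — a correlation filter acting as a single convolutional layer, with a residual branch correcting the base response — can be illustrated with a minimal numpy sketch. This is not the authors' implementation (CREST learns the filters end-to-end on deep features); the filter values and feature map below are hypothetical placeholders used only to show how a base response and a residual response combine additively:

```python
import numpy as np

def conv2d(feat, kernel):
    """Valid-mode 2D cross-correlation of a single-channel feature map
    with a filter, i.e. the operation a one-layer conv net performs."""
    H, W = feat.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(feat[i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical single-channel feature map (stand-in for deep features).
feat = np.arange(25.0).reshape(5, 5)

# Base DCF reformulated as one conv layer (here: a simple averaging filter).
base_filter = np.ones((3, 3)) / 9.0

# Residual branch: a second small filter whose output captures what the
# base layer misses under appearance changes (values are illustrative).
residual_filter = np.array([[0.0, 0.1, 0.0],
                            [0.1, 0.0, 0.1],
                            [0.0, 0.1, 0.0]])

base_response = conv2d(feat, base_filter)
residual_response = conv2d(feat, residual_filter)

# Final response map is the elementwise sum of base and residual branches;
# the tracked target is located at the peak of this map.
response = base_response + residual_response
peak = np.unravel_index(np.argmax(response), response.shape)
```

The additive combination is the key structural point: the residual branch only has to model the difference between the base response and the ideal (soft-label) response, which is what keeps the base model from degrading during online updates.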
Cite
Text
Song et al. "CREST: Convolutional Residual Learning for Visual Tracking." International Conference on Computer Vision, 2017. doi:10.1109/ICCV.2017.279
Markdown
[Song et al. "CREST: Convolutional Residual Learning for Visual Tracking." International Conference on Computer Vision, 2017.](https://mlanthology.org/iccv/2017/song2017iccv-crest/) doi:10.1109/ICCV.2017.279
BibTeX
@inproceedings{song2017iccv-crest,
title = {{CREST: Convolutional Residual Learning for Visual Tracking}},
author = {Song, Yibing and Ma, Chao and Gong, Lijun and Zhang, Jiawei and Lau, Rynson W. H. and Yang, Ming-Hsuan},
booktitle = {International Conference on Computer Vision},
year = {2017},
doi = {10.1109/ICCV.2017.279},
url = {https://mlanthology.org/iccv/2017/song2017iccv-crest/}
}