Kernel-Based Template Alignment
Abstract
This paper introduces a novel kernel-based method for template tracking in video sequences. The method is derived for a general warping transformation, and its application to affine motion tracking is explored in detail. Our approach is based on maximizing the multi-kernel Bhattacharyya coefficient with respect to the warp parameters. We explicitly compute the gradient of the similarity functional and use a quasi-Newton procedure for optimization. Additionally, we consider a simple extension of the method that employs an illumination model correction to allow tracking under varying lighting conditions. The resulting tracking procedure is evaluated on a number of examples, including large templates tracking non-rigidly moving textured areas.
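The core similarity measure named in the abstract, the Bhattacharyya coefficient between kernel-weighted histograms, can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's method: it uses an Epanechnikov kernel profile, a single grayscale histogram rather than the paper's multi-kernel formulation, and brute-force search over integer translations instead of quasi-Newton optimization over affine warp parameters. All function names (`kernel_histogram`, `bhattacharyya`, `track_translation`) are our own.

```python
import numpy as np

def kernel_histogram(patch, n_bins=16):
    """Epanechnikov-kernel-weighted intensity histogram of a patch in [0, 1)."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Normalized squared distance of each pixel from the patch center.
    r2 = ((ys - (h - 1) / 2) / (h / 2)) ** 2 + ((xs - (w - 1) / 2) / (w / 2)) ** 2
    weights = np.maximum(1.0 - r2, 0.0)  # Epanechnikov profile: center counts most
    bins = np.clip((patch * n_bins).astype(int), 0, n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=weights.ravel(), minlength=n_bins)
    return hist / hist.sum()

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two discrete distributions (1.0 = identical)."""
    return float(np.sum(np.sqrt(p * q)))

def track_translation(template, frame):
    """Exhaustive search for the integer translation maximizing the coefficient.

    Stand-in for the paper's gradient-based quasi-Newton optimization over
    general warp parameters.
    """
    th, tw = template.shape
    p = kernel_histogram(template)
    best_pos, best_rho = None, -1.0
    for y in range(frame.shape[0] - th + 1):
        for x in range(frame.shape[1] - tw + 1):
            q = kernel_histogram(frame[y:y + th, x:x + tw])
            rho = bhattacharyya(p, q)
            if rho > best_rho:
                best_pos, best_rho = (y, x), rho
    return best_pos, best_rho
```

On synthetic data where the template is cut directly from the frame, the search recovers the true position with a coefficient of essentially 1.0; in a real tracker the coefficient drops below 1 as the target deforms or lighting changes, which is what the paper's illumination correction addresses.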
Cite
Text
Guskov. "Kernel-Based Template Alignment." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2006. doi:10.1109/CVPR.2006.162
Markdown
[Guskov. "Kernel-Based Template Alignment." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2006.](https://mlanthology.org/cvpr/2006/guskov2006cvpr-kernel/) doi:10.1109/CVPR.2006.162
BibTeX
@inproceedings{guskov2006cvpr-kernel,
title = {{Kernel-Based Template Alignment}},
author = {Guskov, Igor},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2006},
pages = {610--617},
doi = {10.1109/CVPR.2006.162},
url = {https://mlanthology.org/cvpr/2006/guskov2006cvpr-kernel/}
}