No Matter Where You Are: Flexible Graph-Guided Multi-Task Learning for Multi-View Head Pose Classification Under Target Motion
Abstract
We propose a novel Multi-Task Learning framework (FEGA-MTL) for classifying the head pose of a person who moves freely in an environment monitored by multiple, large field-of-view surveillance cameras. As the target (person) moves, distortions in facial appearance owing to camera perspective and scale severely impede the performance of traditional head pose classification methods. FEGA-MTL operates on a dense uniform spatial grid and learns appearance relationships across partitions, as well as partition-specific appearance variations for a given head pose, to build region-specific classifiers. Guided by two graphs which a priori model appearance similarity among (i) grid partitions based on camera geometry and (ii) head pose classes, the learner efficiently clusters appearance-wise related grid partitions to derive the optimal partitioning. For pose classification, upon determining the target's position using a person tracker, the appropriate region-specific classifier is invoked. Experiments confirm that FEGA-MTL achieves state-of-the-art classification with little training data.
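The inference pipeline described above (tracker position → grid region → region-specific classifier) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the room dimensions, grid resolution, pose classes, and the stand-in per-region "classifiers" are all assumptions made for the example.

```python
# Hypothetical sketch of the FEGA-MTL inference step: the floor is split
# into a uniform spatial grid, each region has its own head-pose
# classifier, and the tracked (x, y) position selects which one to invoke.

ROOM_W, ROOM_H = 8.0, 6.0        # room size in metres (assumed)
GRID_COLS, GRID_ROWS = 4, 3      # uniform grid resolution (assumed)
POSES = ["front", "left", "back", "right"]  # example pose classes

def cell_index(x, y):
    """Map a tracked floor position to its grid-region index."""
    col = min(int(x / ROOM_W * GRID_COLS), GRID_COLS - 1)
    row = min(int(y / ROOM_H * GRID_ROWS), GRID_ROWS - 1)
    return row * GRID_COLS + col

def make_classifier(region):
    """Stand-in region-specific classifier; in FEGA-MTL these are
    learned jointly under the two appearance-similarity graphs."""
    def classify(features):
        # toy rule so the sketch runs; a real model scores each pose class
        return POSES[(sum(features) + region) % len(POSES)]
    return classify

region_classifiers = {r: make_classifier(r)
                      for r in range(GRID_COLS * GRID_ROWS)}

def predict_head_pose(x, y, features):
    """Tracker position -> grid region -> region classifier -> pose."""
    region = cell_index(x, y)
    return region_classifiers[region](features)
```

In the actual framework, appearance-wise related grid partitions are clustered during learning, so several cells may share one classifier; the lookup above would then map a cell to its cluster before invoking the classifier.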
Cite
Text
Yan et al. "No Matter Where You Are: Flexible Graph-Guided Multi-Task Learning for Multi-View Head Pose Classification Under Target Motion." International Conference on Computer Vision, 2013. doi:10.1109/ICCV.2013.150
Markdown
[Yan et al. "No Matter Where You Are: Flexible Graph-Guided Multi-Task Learning for Multi-View Head Pose Classification Under Target Motion." International Conference on Computer Vision, 2013.](https://mlanthology.org/iccv/2013/yan2013iccv-matter/) doi:10.1109/ICCV.2013.150
BibTeX
@inproceedings{yan2013iccv-matter,
title = {{No Matter Where You Are: Flexible Graph-Guided Multi-Task Learning for Multi-View Head Pose Classification Under Target Motion}},
author = {Yan, Yan and Ricci, Elisa and Subramanian, Ramanathan and Lanz, Oswald and Sebe, Nicu},
booktitle = {International Conference on Computer Vision},
year = {2013},
doi = {10.1109/ICCV.2013.150},
url = {https://mlanthology.org/iccv/2013/yan2013iccv-matter/}
}