Jointly Learning Heterogeneous Features for RGB-D Activity Recognition
Abstract
In this paper, we focus on heterogeneous feature learning for RGB-D activity recognition. Considering that features from different channels could share some similar hidden structures, we propose a joint learning model that simultaneously explores the shared and feature-specific components as an instance of heterogeneous multi-task learning. The proposed model, within a unified framework, is capable of: 1) jointly mining a set of subspaces with the same dimensionality to enable multi-task classifier learning, and 2) simultaneously quantifying the shared and feature-specific components of features in those subspaces. To efficiently train the joint model, a three-step iterative optimization algorithm is proposed, along with two inference models. Extensive results on three activity datasets demonstrate the efficacy of the proposed method. In addition, a novel RGB-D activity dataset focusing on human-object interaction was collected for evaluating the proposed method, and it will be made available to the community for RGB-D activity benchmarking and analysis.
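The following is a minimal, illustrative sketch of the core idea stated in the abstract: project each heterogeneous feature channel into a subspace of the same dimensionality, split each projected representation into a component shared across channels and a channel-specific residual, and train one joint classifier. It is not the paper's actual optimization; the channel names, dimensions, ridge regularizer, and alternating updates below are assumptions for illustration only.

```python
# Illustrative sketch (assumed, not the authors' algorithm): shared vs.
# channel-specific decomposition of heterogeneous features in a common subspace.
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 32                                           # samples, subspace dim
channels = {"rgb": 512, "depth": 256, "skeleton": 60}    # assumed feature dims

X = {c: rng.standard_normal((n, d)) for c, d in channels.items()}  # toy features
y = rng.integers(0, 2, size=n) * 2 - 1                   # toy labels in {-1, +1}

lam = 1e-2                                               # ridge regularizer
Z = rng.standard_normal((n, k))                          # shared subspace structure
P = {}                                                   # per-channel projections

for _ in range(10):                                      # simple alternating updates
    # Fit each channel's projection so X_c @ P_c approximates the shared target Z.
    for c, Xc in X.items():
        P[c] = np.linalg.solve(Xc.T @ Xc + lam * np.eye(Xc.shape[1]), Xc.T @ Z)
    # Re-estimate the shared component as the mean of the projected channels.
    Z = np.mean([Xc @ P[c] for c, Xc in X.items()], axis=0)

# Shared part vs. channel-specific residuals of each projected representation.
shared = Z
specific = {c: Xc @ P[c] - Z for c, Xc in X.items()}

# One joint classifier over shared + specific parts (ridge-regression stand-in).
F = np.hstack([shared] + [specific[c] for c in channels])
w = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ y)
print("toy training accuracy:", np.mean(np.sign(F @ w) == y))
```

The alternating structure above (update projections, then the shared component) only mirrors, in spirit, the iterative optimization the abstract refers to; the paper's three-step procedure and inference models are not reproduced here.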
Cite
Text
Hu et al. "Jointly Learning Heterogeneous Features for RGB-D Activity Recognition." Conference on Computer Vision and Pattern Recognition, 2015. doi:10.1109/CVPR.2015.7299172
Markdown
[Hu et al. "Jointly Learning Heterogeneous Features for RGB-D Activity Recognition." Conference on Computer Vision and Pattern Recognition, 2015.](https://mlanthology.org/cvpr/2015/hu2015cvpr-jointly/) doi:10.1109/CVPR.2015.7299172
BibTeX
@inproceedings{hu2015cvpr-jointly,
title = {{Jointly Learning Heterogeneous Features for RGB-D Activity Recognition}},
author = {Hu, Jian-Fang and Zheng, Wei-Shi and Lai, Jianhuang and Zhang, Jianguo},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2015},
doi = {10.1109/CVPR.2015.7299172},
url = {https://mlanthology.org/cvpr/2015/hu2015cvpr-jointly/}
}