Feature Regression for Multimodal Image Analysis
Abstract
In this paper, we analyze the relationship between corresponding descriptors computed from multimodal images, with a focus on visual and infrared images. First, the descriptors are regressed by means of linear regression as well as Gaussian process regression, applying different covariance functions and inference methods for the Gaussian process. Then the descriptors detected in visual images are mapped to infrared images using the regression results. Predictions are assessed in two ways: the statistics of the absolute error between predicted and actual values, and the precision score of matching the predicted descriptors to the original infrared descriptors. Experimental results show that the regression methods establish a reliable relationship between corresponding descriptors from multiple modalities.
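As an illustrative sketch only (not the authors' code or data), the regress-then-evaluate pipeline described in the abstract can be mimicked with scikit-learn: fit a linear model and a Gaussian process with an RBF covariance function on corresponding descriptor pairs, then report the mean absolute error of the mapped descriptors. The synthetic descriptors below are assumptions standing in for real visual/infrared feature pairs.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Synthetic stand-ins for corresponding descriptor pairs
# (in the paper these would come from matched visual/infrared keypoints).
X_vis = rng.normal(size=(200, 16))                     # visual-image descriptors
W = rng.normal(size=(16, 16))                          # hypothetical linear relation
y_ir = X_vis @ W + 0.05 * rng.normal(size=(200, 16))   # noisy infrared descriptors

X_train, X_test = X_vis[:150], X_vis[150:]
y_train, y_test = y_ir[:150], y_ir[150:]

# Linear regression of infrared descriptors on visual descriptors
lin = LinearRegression().fit(X_train, y_train)
lin_mae = np.abs(lin.predict(X_test) - y_test).mean()

# Gaussian process regression with an RBF covariance function;
# the length scale is optimized during fitting.
gp = GaussianProcessRegressor(kernel=RBF(), alpha=1e-2).fit(X_train, y_train)
gp_mae = np.abs(gp.predict(X_test) - y_test).mean()

print(f"linear MAE: {lin_mae:.4f}")
print(f"GP (RBF) MAE: {gp_mae:.4f}")
```

The second evaluation in the abstract, matching predicted descriptors against the original infrared descriptors, would correspond to a nearest-neighbor search over `gp.predict(X_test)` followed by a precision computation.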
Cite
Text
Yang et al. "Feature Regression for Multimodal Image Analysis." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2014. doi:10.1109/CVPRW.2014.118
Markdown
[Yang et al. "Feature Regression for Multimodal Image Analysis." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2014.](https://mlanthology.org/cvprw/2014/yang2014cvprw-feature/) doi:10.1109/CVPRW.2014.118
BibTeX
@inproceedings{yang2014cvprw-feature,
title = {{Feature Regression for Multimodal Image Analysis}},
author = {Yang, Michael Ying and Yong, Xuanzi and Rosenhahn, Bodo},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2014},
pages = {770--777},
doi = {10.1109/CVPRW.2014.118},
url = {https://mlanthology.org/cvprw/2014/yang2014cvprw-feature/}
}