Scalable Multitask Representation Learning for Scene Classification
Abstract
The underlying idea of multitask learning is that learning tasks jointly is better than learning each task individually. In particular, if only a few training examples are available for each task, sharing a jointly trained representation improves classification performance. In this paper, we propose a novel multitask learning method that learns a low-dimensional representation jointly with the corresponding classifiers, which are then able to profit from the latent inter-class correlations. Our method scales with respect to the original feature dimension and can be used with high-dimensional image descriptors such as the Fisher Vector. Furthermore, it consistently outperforms the current state of the art on the SUN397 scene classification benchmark with varying amounts of training data.
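The idea of jointly learning a low-dimensional representation and per-task classifiers can be sketched as a factorized weight matrix W = U V, where the projection U is shared across tasks and each column of V is one task's classifier. The sketch below is not the paper's algorithm; it is a minimal alternating least-squares illustration on hypothetical toy data (all dimensions, names, and the squared-loss surrogate are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: T tasks share a k-dimensional latent subspace
# of a d-dimensional feature space.
d, k, T, n = 50, 5, 8, 40
U_true = rng.standard_normal((d, k))
tasks = []
for t in range(T):
    v = rng.standard_normal(k)
    X = rng.standard_normal((n, d))
    y = np.sign(X @ U_true @ v)          # binary labels from a shared subspace
    tasks.append((X, y))

# Jointly fit a shared projection U (d x k) and per-task weights V (k x T)
# by alternating ridge regressions on a squared-loss surrogate.
U = rng.standard_normal((d, k)) * 0.01
V = rng.standard_normal((k, T)) * 0.01
lam = 1e-3
for _ in range(20):
    # Fix U: each task's classifier is a ridge regression in the shared subspace.
    for t, (X, y) in enumerate(tasks):
        Z = X @ U                        # project features into the subspace
        V[:, t] = np.linalg.solve(Z.T @ Z + lam * np.eye(k), Z.T @ y)
    # Fix V: solve the normal equations for U, vectorized column-major
    # via vec(A U B) = (B^T kron A) vec(U).
    A = np.zeros((d * k, d * k))
    b = np.zeros(d * k)
    for t, (X, y) in enumerate(tasks):
        A += np.kron(np.outer(V[:, t], V[:, t]), X.T @ X)
        b += np.kron(V[:, t], X.T @ y)
    U = np.linalg.solve(A + lam * np.eye(d * k), b).reshape(d, k, order="F")

# Mean training sign-accuracy across tasks; should be well above chance
# because all tasks share the same latent subspace.
acc = np.mean([np.mean(np.sign(X @ U @ V[:, t]) == y)
               for t, (X, y) in enumerate(tasks)])
```

Because every task's classifier lives in the same k-dimensional subspace, tasks with few examples borrow statistical strength from the others, which is the motivation for sharing the representation in the first place.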
Cite
Text
Lapin et al. "Scalable Multitask Representation Learning for Scene Classification." Conference on Computer Vision and Pattern Recognition, 2014. doi:10.1109/CVPR.2014.186
Markdown
[Lapin et al. "Scalable Multitask Representation Learning for Scene Classification." Conference on Computer Vision and Pattern Recognition, 2014.](https://mlanthology.org/cvpr/2014/lapin2014cvpr-scalable/) doi:10.1109/CVPR.2014.186
BibTeX
@inproceedings{lapin2014cvpr-scalable,
title = {{Scalable Multitask Representation Learning for Scene Classification}},
author = {Lapin, Maksim and Schiele, Bernt and Hein, Matthias},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2014},
doi = {10.1109/CVPR.2014.186},
url = {https://mlanthology.org/cvpr/2014/lapin2014cvpr-scalable/}
}