Support Discrimination Dictionary Learning for Image Classification
Abstract
Dictionary learning has been successfully applied to image classification. However, many dictionary learning methods encode only a single image at a time during training, ignoring correlations and other useful information contained within the entire training set. In this paper, we propose a new principle that uses the support of the coefficients to measure the similarity between pairs of coefficients, instead of using the Euclidean distance directly. More specifically, we propose a support discrimination dictionary learning method, which finds a dictionary under which the coefficients of images from the same class have a common sparse structure, while the size of the overlapping signal support between different classes is minimised. In addition, by adopting a shared dictionary in a multi-task learning setting, this method can automatically find the number and positions of the dictionary atoms associated with each class by using structured sparsity on a group of images. The proposed model is extensively evaluated on various image datasets, and it shows superior performance to many state-of-the-art dictionary learning methods.
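As a toy illustration of the support-based similarity the abstract describes (not the authors' actual optimisation), one can compare two sparse coefficient vectors by counting shared nonzero indices rather than by Euclidean distance; the function name and the example vectors below are hypothetical:

```python
import numpy as np

def support_overlap(a, b, tol=1e-6):
    """Size of the shared support (set of nonzero indices) of two
    sparse coefficient vectors."""
    sa = np.abs(a) > tol
    sb = np.abs(b) > tol
    return int(np.sum(sa & sb))

# Two coefficients from the same class tend to share a sparse structure:
x1 = np.array([0.9, 0.0, 0.5, 0.0, 0.0])
x2 = np.array([1.1, 0.0, 0.3, 0.0, 0.0])
# A coefficient from a different class uses different dictionary atoms:
y = np.array([0.0, 0.7, 0.0, 0.4, 0.0])

print(support_overlap(x1, x2))  # -> 2 (atoms 0 and 2 shared within the class)
print(support_overlap(x1, y))   # -> 0 (disjoint supports across classes)
```

The method in the paper encourages exactly this pattern during dictionary learning: large support overlap within a class, minimal overlap between classes.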
Cite
Text
Liu et al. "Support Discrimination Dictionary Learning for Image Classification." European Conference on Computer Vision, 2016. doi:10.1007/978-3-319-46475-6_24
Markdown
[Liu et al. "Support Discrimination Dictionary Learning for Image Classification." European Conference on Computer Vision, 2016.](https://mlanthology.org/eccv/2016/liu2016eccv-support/) doi:10.1007/978-3-319-46475-6_24
BibTeX
@inproceedings{liu2016eccv-support,
title = {{Support Discrimination Dictionary Learning for Image Classification}},
author = {Liu, Yang and Chen, Wei and Chen, Qingchao and Wassell, Ian J.},
booktitle = {European Conference on Computer Vision},
year = {2016},
pages = {375-390},
doi = {10.1007/978-3-319-46475-6_24},
url = {https://mlanthology.org/eccv/2016/liu2016eccv-support/}
}