Low-Rank Sparse Coding for Image Classification

Abstract

In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of codewords. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate LRSC by comparing its performance on a set of challenging benchmarks with that of other popular coding and state-of-the-art methods. Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding [36].
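The joint coding idea in the abstract can be illustrated as minimizing an objective of the form ½‖X − DA‖²_F + λ‖A‖₁ + γ‖A‖_*, where the columns of X are the SIFT descriptors of one spatial neighborhood, D is the codebook, and the code matrix A is encouraged to be both sparse (ℓ₁ norm) and low-rank (nuclear norm). The sketch below is illustrative only, not the authors' algorithm: it approximates the solution by alternating the two proximal operators inside a proximal-gradient loop, and all function names and parameter values are assumptions.

```python
import numpy as np

def soft_threshold(M, t):
    # Elementwise proximal operator of t * ||.||_1 (promotes sparsity).
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

def svt(M, t):
    # Singular value thresholding: proximal operator of t * ||.||_*
    # (promotes low rank by shrinking singular values).
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def lrsc_encode(X, D, lam=0.1, gam=0.1, n_iter=200):
    """Approximately solve min_A 0.5||X - DA||_F^2 + lam||A||_1 + gam||A||_*.

    X: (d, n) descriptors of one spatial neighborhood (columns).
    D: (d, k) codebook. Returns the joint code matrix A of shape (k, n).
    Note: alternating the two proxes is only an approximation of the
    exact joint proximal operator of the combined regularizer.
    """
    A = np.zeros((D.shape[1], X.shape[1]))
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the data-fit gradient
    for _ in range(n_iter):
        G = D.T @ (D @ A - X)              # gradient of 0.5||X - DA||_F^2
        A = soft_threshold(A - G / L, lam / L)
        A = svt(A, gam / L)
    return A
```

Encoding a whole neighborhood at once, rather than each descriptor independently, is what lets the low-rank term enforce spatial consistency among nearby codes.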

Cite

Text

Zhang et al. "Low-Rank Sparse Coding for Image Classification." International Conference on Computer Vision, 2013. doi:10.1109/ICCV.2013.42

Markdown

[Zhang et al. "Low-Rank Sparse Coding for Image Classification." International Conference on Computer Vision, 2013.](https://mlanthology.org/iccv/2013/zhang2013iccv-lowrank/) doi:10.1109/ICCV.2013.42

BibTeX

@inproceedings{zhang2013iccv-lowrank,
  title     = {{Low-Rank Sparse Coding for Image Classification}},
  author    = {Zhang, Tianzhu and Ghanem, Bernard and Liu, Si and Xu, Changsheng and Ahuja, Narendra},
  booktitle = {International Conference on Computer Vision},
  year      = {2013},
  doi       = {10.1109/ICCV.2013.42},
  url       = {https://mlanthology.org/iccv/2013/zhang2013iccv-lowrank/}
}