Sparse Deep Stacking Network for Image Classification
Abstract
Sparse coding can learn representations that are robust to noise and can capture higher-order structure for image classification. However, the inference algorithm is computationally expensive even when supervised signals are used to learn compact and discriminative dictionaries. Fortunately, a simplified neural network module (SNNM) has been proposed that directly learns discriminative dictionaries, avoiding the expensive inference. But the SNNM module ignores sparse representations. Therefore, we propose a sparse SNNM module by adding a mixed-norm regularization (l1/l2 norm). The sparse SNNM modules are further stacked to build a sparse deep stacking network (S-DSN). In the experiments, we evaluate S-DSN on four databases: Extended YaleB, AR, 15 scene and Caltech101. Experimental results show that our model outperforms related classification methods using only a linear classifier. Notably, we reach 98.8% recognition accuracy on the 15 scene dataset.
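The l1/l2 mixed norm mentioned in the abstract can be illustrated with a minimal sketch, assuming the standard group-sparsity definition (the sum of row-wise l2 norms of a weight matrix); the function name and values below are illustrative and not taken from the paper.

```python
import numpy as np

def mixed_norm_l1_l2(W):
    """l1/l2 mixed norm: sum of the l2 norms of the rows of W.
    As a regularizer, it drives entire rows (e.g. hidden units)
    to zero, encouraging group-sparse representations."""
    return np.sum(np.sqrt(np.sum(W ** 2, axis=1)))

# Toy matrix (illustrative values only):
W = np.array([[3.0, 4.0],   # row l2 norm = 5
              [0.0, 0.0],   # row l2 norm = 0 (inactive unit)
              [1.0, 0.0]])  # row l2 norm = 1
print(mixed_norm_l1_l2(W))  # 6.0
```

Penalizing this quantity alongside the reconstruction or classification loss is what distinguishes the sparse SNNM from the plain SNNM.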
Cite
Text
Li et al. "Sparse Deep Stacking Network for Image Classification." AAAI Conference on Artificial Intelligence, 2015. doi:10.1609/AAAI.V29I1.9786
Markdown
[Li et al. "Sparse Deep Stacking Network for Image Classification." AAAI Conference on Artificial Intelligence, 2015.](https://mlanthology.org/aaai/2015/li2015aaai-sparse/) doi:10.1609/AAAI.V29I1.9786
BibTeX
@inproceedings{li2015aaai-sparse,
title = {{Sparse Deep Stacking Network for Image Classification}},
author = {Li, Jun and Chang, Heyou and Yang, Jian},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2015},
pages = {3804--3810},
doi = {10.1609/AAAI.V29I1.9786},
url = {https://mlanthology.org/aaai/2015/li2015aaai-sparse/}
}