Convex Sparse Coding, Subspace Learning, and Semi-Supervised Extensions
Abstract
Automated feature discovery is a fundamental problem in machine learning. Although classical feature discovery methods do not guarantee optimal solutions in general, it has recently been observed that certain subspace learning and sparse coding problems can be solved efficiently, provided the number of features is not restricted a priori. We provide an extended characterization of this optimality result and describe the nature of the solutions under an expanded set of practical contexts. In particular, we apply the framework to a semi-supervised learning problem, and demonstrate that feature discovery can co-occur with input reconstruction and supervised training while still admitting globally optimal solutions. A comparison to existing semi-supervised feature discovery methods shows improved generalization and efficiency.
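As a concrete illustration of the kind of result the abstract refers to (a sketch of the general idea, not the paper's exact formulation): when the number of features is unrestricted, subspace learning with a reconstruction loss can be rewritten as a convex problem in the reconstruction matrix Z, regularized by the nuclear (trace) norm, and that convex problem even has a closed-form global solution via singular-value soft-thresholding. The function name and the regularization weight `lam` below are illustrative choices, not from the paper.

```python
import numpy as np

def convex_subspace_learning(X, lam):
    """Globally solve  min_Z  0.5*||X - Z||_F^2 + lam*||Z||_*
    in closed form: the proximal operator of the nuclear norm
    soft-thresholds the singular values of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_thresholded = np.maximum(s - lam, 0.0)  # shrink, then clip at zero
    return (U * s_thresholded) @ Vt

# Synthetic low-rank data plus a little noise.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
X += 0.01 * rng.standard_normal((50, 40))

Z = convex_subspace_learning(X, lam=1.0)
# Small (noise) singular values are zeroed, so Z is low rank:
print(np.linalg.matrix_rank(Z, tol=1e-8))
```

The key point, mirroring the abstract's claim, is that no dictionary size is fixed in advance: the low dimensionality of Z emerges from the convex regularizer rather than from an a priori feature count.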
Cite
Text
Zhang et al. "Convex Sparse Coding, Subspace Learning, and Semi-Supervised Extensions." AAAI Conference on Artificial Intelligence, 2011. doi:10.1609/AAAI.V25I1.7935
Markdown
[Zhang et al. "Convex Sparse Coding, Subspace Learning, and Semi-Supervised Extensions." AAAI Conference on Artificial Intelligence, 2011.](https://mlanthology.org/aaai/2011/zhang2011aaai-convex/) doi:10.1609/AAAI.V25I1.7935
BibTeX
@inproceedings{zhang2011aaai-convex,
title = {{Convex Sparse Coding, Subspace Learning, and Semi-Supervised Extensions}},
author = {Zhang, Xinhua and Yu, Yaoliang and White, Martha and Huang, Ruitong and Schuurmans, Dale},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2011},
  pages = {567--573},
doi = {10.1609/AAAI.V25I1.7935},
url = {https://mlanthology.org/aaai/2011/zhang2011aaai-convex/}
}