Learning Deep ℓ0 Encoders

Abstract

Despite its nonconvex nature, ℓ0 sparse approximation is desirable in many theoretical and practical settings. We study the ℓ0 sparse approximation problem with deep learning by proposing Deep ℓ0 Encoders. Two typical formulations, the ℓ0 regularized problem and the M-sparse problem, are investigated. Building on well-established iterative algorithms, we model them as feed-forward neural networks by introducing novel neurons and pooling functions. Enforcing such structural priors acts as an effective network regularization. The deep encoders also enjoy faster inference, larger learning capacity, and better scalability than conventional sparse coding solutions. Furthermore, under task-driven losses, the models can be conveniently optimized end to end. Numerical results demonstrate the impressive performance of the proposed encoders.
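
To make the unfolding idea concrete, below is a minimal NumPy sketch of how a truncated iterative hard-thresholding (IHT) solver for the ℓ0 regularized problem min_z ||x − Dz||² + λ||z||₀ can be viewed as a fixed-depth feed-forward encoder, with a hard-thresholding nonlinearity for the ℓ0 regularized form and a keep-top-M pooling step for the M-sparse form. The dictionary, layer count, unit step size, and threshold parameterization here are illustrative assumptions, not the paper's exact neurons or pooling functions; in a learned encoder the matrices and thresholds would become trainable parameters.

```python
import numpy as np

def hard_threshold(u, theta):
    """Zero out entries with magnitude below theta (an l0-style nonlinearity)."""
    return u * (np.abs(u) > theta)

def keep_top_m(u, m):
    """M-sparse variant: keep the m largest-magnitude entries, zero the rest."""
    z = np.zeros_like(u)
    idx = np.argsort(np.abs(u))[-m:]
    z[idx] = u[idx]
    return z

def unfolded_l0_encoder(x, D, theta, num_layers=3):
    """Truncated IHT viewed as a feed-forward encoder (illustrative sketch).

    x     : (n,) input signal
    D     : (n, m) dictionary, assumed to have unit-norm columns
    theta : scalar threshold (would be learned in a trainable encoder)
    """
    W = D.T                            # input-to-code layer
    S = np.eye(D.shape[1]) - D.T @ D   # layer connecting successive code estimates
    z = hard_threshold(W @ x, theta)   # first layer
    for _ in range(num_layers - 1):    # each unrolled iteration = one more layer
        z = hard_threshold(W @ x + S @ z, theta)
    return z

# Toy usage: encode a random signal with a random unit-norm dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
x = rng.standard_normal(64)
z = unfolded_l0_encoder(x, D, theta=0.5)
print("nonzeros in code:", np.count_nonzero(z))
print("nonzeros after top-M pooling:", np.count_nonzero(keep_top_m(z, m=10)))
```

Because the iteration count is fixed and small, inference cost is a few matrix-vector products rather than running an iterative solver to convergence, which is the source of the speed and end-to-end trainability advantages described in the abstract.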

Cite

Text

Wang et al. "Learning Deep ℓ0 Encoders." AAAI Conference on Artificial Intelligence, 2016. doi:10.1609/AAAI.V30I1.10198

Markdown

[Wang et al. "Learning Deep ℓ0 Encoders." AAAI Conference on Artificial Intelligence, 2016.](https://mlanthology.org/aaai/2016/wang2016aaai-learning-a/) doi:10.1609/AAAI.V30I1.10198

BibTeX

@inproceedings{wang2016aaai-learning-a,
  title     = {{Learning Deep ℓ0 Encoders}},
  author    = {Wang, Zhangyang and Ling, Qing and Huang, Thomas S.},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2016},
  pages     = {2194--2200},
  doi       = {10.1609/AAAI.V30I1.10198},
  url       = {https://mlanthology.org/aaai/2016/wang2016aaai-learning-a/}
}