Understanding L4-Based Dictionary Learning: Interpretation, Stability, and Robustness

Abstract

Recently, the $\ell^4$-norm maximization has been proposed to solve the sparse dictionary learning (SDL) problem. The simple MSP (matching, stretching, and projection) algorithm proposed by Zhai et al. (2019) has proved surprisingly efficient and effective. This paper aims to better understand this algorithm from its strong geometric and statistical connections with the classic PCA and ICA, as well as their associated fixed-point style algorithms. Such connections provide a unified way of viewing problems that pursue {\em principal}, {\em independent}, or {\em sparse} components of high-dimensional data. Our studies reveal additional good properties of $\ell^4$-maximization: not only is the MSP algorithm for sparse coding insensitive to small noise, but it is also robust to outliers and resilient to sparse corruptions. We provide statistical justification for such inherently nice properties. To corroborate the theoretical analysis, we also provide extensive and compelling experimental evidence with both synthetic data and real images.
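As a rough illustration of the fixed-point style the abstract describes, the sketch below implements one plausible MSP-type update under the common formulation of $\ell^4$-maximization: maximize $f(A) = \|AY\|_4^4$ over orthogonal dictionaries $A$, alternating a gradient ("matching and stretching") step with a projection onto the orthogonal group via the polar decomposition. The function name `msp_step` and the exact projection choice are our assumptions for illustration, not code from the paper.

```python
import numpy as np

def msp_step(A, Y):
    """One hypothetical MSP-style fixed-point update for l4-maximization.

    A: (n, n) current orthogonal dictionary estimate.
    Y: (n, p) data matrix (columns are sparse codes under the true dictionary).
    Returns the next orthogonal iterate.
    """
    # "Matching and stretching": gradient of f(A) = ||A Y||_4^4,
    # which is 4 * (A Y)^{o3} Y^T (elementwise cube).
    G = 4 * (A @ Y) ** 3 @ Y.T
    # "Projection": map the gradient back onto the orthogonal group
    # via its polar factor, computed from the SVD G = U S V^T -> U V^T.
    U, _, Vt = np.linalg.svd(G)
    return U @ Vt
```

Iterating this update from a random orthogonal initialization is the fixed-point scheme the paper relates to PCA- and ICA-style power iterations; each iterate stays exactly orthogonal by construction.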

Cite

Text

Zhai et al. "Understanding L4-Based Dictionary Learning: Interpretation, Stability, and Robustness." International Conference on Learning Representations, 2020.

Markdown

[Zhai et al. "Understanding L4-Based Dictionary Learning: Interpretation, Stability, and Robustness." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/zhai2020iclr-understanding/)

BibTeX

@inproceedings{zhai2020iclr-understanding,
  title     = {{Understanding L4-Based Dictionary Learning: Interpretation, Stability, and Robustness}},
  author    = {Zhai, Yuexiang and Mehta, Hermish and Zhou, Zhengyuan and Ma, Yi},
  booktitle = {International Conference on Learning Representations},
  year      = {2020},
  url       = {https://mlanthology.org/iclr/2020/zhai2020iclr-understanding/}
}