When Is a Convolutional Filter Easy to Learn?

Abstract

We analyze the convergence of the (stochastic) gradient descent algorithm for learning a convolutional filter with the Rectified Linear Unit (ReLU) activation function. Our analysis does not rely on any specific form of the input distribution, and our proofs only use the definition of ReLU, in contrast with previous works that are restricted to standard Gaussian input. We show that (stochastic) gradient descent with random initialization can learn the convolutional filter in polynomial time, with a convergence rate that depends on the smoothness of the input distribution and the closeness of the patches. To the best of our knowledge, this is the first recovery guarantee for gradient-based algorithms learning a convolutional filter on non-Gaussian input distributions. Our theory also justifies the two-stage learning rate strategy used in deep neural networks. While our focus is theoretical, we also present experiments that corroborate our theoretical findings.
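To make the setting concrete, below is a minimal NumPy sketch of the learning problem the abstract describes: a student filter trained by SGD, from random initialization and with a two-stage learning rate, to match a teacher filter whose ReLU responses are average-pooled over patches. The patch layout, Gaussian sampling used for the demo, and step sizes are illustrative assumptions, not the paper's exact construction or guarantees.

# Minimal sketch (illustrative, not the paper's exact setup):
# student-teacher SGD for one convolutional filter, ReLU activation,
# average pooling over k overlapping 1-D patches.
import numpy as np

rng = np.random.default_rng(0)
d, k, stride = 8, 5, 4            # filter size, patch count, stride (assumed)
n_input = stride * (k - 1) + d    # length of each 1-D input

def patches(x):
    # Extract the k overlapping patches the filter is applied to.
    return np.stack([x[i * stride : i * stride + d] for i in range(k)])

def predict(w, x):
    # Average-pooled ReLU responses of the filter over all patches.
    return np.mean(np.maximum(patches(x) @ w, 0.0))

w_true = rng.normal(size=d)       # unknown teacher filter
w = 0.1 * rng.normal(size=d)      # random initialization

# Two-stage learning rate: a larger step size first, then a smaller one.
schedule = [(1e-1, 3000), (1e-2, 3000)]
for lr, steps in schedule:
    for _ in range(steps):
        x = rng.normal(size=n_input)       # fresh sample (Gaussian only for this demo)
        y = predict(w_true, x)             # teacher label
        P = patches(x)
        act = P @ w
        # Gradient of 0.5 * (f(w, x) - y)^2 w.r.t. w.
        grad = (predict(w, x) - y) * np.mean((act > 0)[:, None] * P, axis=0)
        w -= lr * grad

print("relative error:", np.linalg.norm(w - w_true) / np.linalg.norm(w_true))

Running the sketch typically drives the relative error toward zero, mirroring the recovery behavior the abstract states; the exact accuracy depends on the step sizes and number of iterations chosen above.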

Cite

Text

Du et al. "When Is a Convolutional Filter Easy to Learn?" International Conference on Learning Representations, 2018.

Markdown

[Du et al. "When Is a Convolutional Filter Easy to Learn?" International Conference on Learning Representations, 2018.](https://mlanthology.org/iclr/2018/du2018iclr-convolutional/)

BibTeX

@inproceedings{du2018iclr-convolutional,
  title     = {{When Is a Convolutional Filter Easy to Learn?}},
  author    = {Du, Simon S. and Lee, Jason D. and Tian, Yuandong},
  booktitle = {International Conference on Learning Representations},
  year      = {2018},
  url       = {https://mlanthology.org/iclr/2018/du2018iclr-convolutional/}
}