Provable Methods for Training Neural Networks with Sparse Connectivity
Abstract
We provide novel guaranteed approaches for training feedforward neural networks with sparse connectivity. We leverage techniques developed previously for learning linear networks and show that they can also be effectively adapted to learning non-linear networks. We operate on the moments involving the label and the score function of the input, and show that their factorization provably yields the weight matrix of the first layer of a deep network under mild conditions. In practice, the output of our method can be used as an effective initializer for gradient descent.
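As a concrete illustration of the moment-based idea, here is a minimal NumPy sketch, not the paper's algorithm verbatim: for standard Gaussian input the second-order score function is S_2(x) = xx^T − I, and by Stein's identity the cross-moment E[y · S_2(x)] is a weighted sum of outer products of the first-layer weight rows, so for orthonormal rows an eigendecomposition recovers them. The one-hidden-layer ReLU network, the Gaussian input assumption, and all variable names below are assumptions made for the demo.

```python
# Hypothetical sketch (not the authors' released code): recover the span
# of the first-layer weights of a one-hidden-layer ReLU network from the
# cross-moment E[y * S_2(x)], where S_2(x) = x x^T - I is the second-order
# score function of a standard Gaussian input.
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 20, 3, 100_000            # input dim, hidden units, samples

# Ground-truth network y = a2^T relu(A1 x) + noise, with orthonormal
# first-layer rows so the moment's eigenvectors identify them directly.
A1 = np.linalg.qr(rng.standard_normal((d, k)))[0].T   # k x d
a2 = rng.standard_normal(k)

x = rng.standard_normal((n, d))                       # Gaussian input
y = np.maximum(x @ A1.T, 0.0) @ a2 + 0.01 * rng.standard_normal(n)

# Empirical M2 = E[y (x x^T - I)]. By Stein's identity this equals the
# expected Hessian of the network output, a weighted sum of a_i a_i^T,
# so its top-k eigenvectors span the rows of A1.
M2 = (x.T * y) @ x / n - y.mean() * np.eye(d)

eigvals, eigvecs = np.linalg.eigh(M2)
top = np.argsort(-np.abs(eigvals))[:k]
A1_init = eigvecs[:, top].T                           # k x d initializer

# Principal angles between recovered and true subspaces; singular values
# near 1 indicate successful recovery.
overlap = np.linalg.svd(A1 @ A1_init.T, compute_uv=False)
print("subspace alignment:", np.round(overlap, 3))
```

In the paper's setting the input need not be Gaussian: the score function of the actual input distribution plays the same role, and higher-order score functions lead to tensor factorizations when the matrix moment is insufficient. The recovered rows then serve as the initializer for gradient descent, as the abstract describes.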
Cite
Text
Sedghi and Anandkumar. "Provable Methods for Training Neural Networks with Sparse Connectivity." International Conference on Learning Representations, 2015.

Markdown
[Sedghi and Anandkumar. "Provable Methods for Training Neural Networks with Sparse Connectivity." International Conference on Learning Representations, 2015.](https://mlanthology.org/iclr/2015/sedghi2015iclr-provable/)

BibTeX
@inproceedings{sedghi2015iclr-provable,
  title = {{Provable Methods for Training Neural Networks with Sparse Connectivity}},
  author = {Sedghi, Hanie and Anandkumar, Anima},
  booktitle = {International Conference on Learning Representations},
  year = {2015},
  url = {https://mlanthology.org/iclr/2015/sedghi2015iclr-provable/}
}