Porcupine Neural Networks: Approximating Neural Network Landscapes

Abstract

Neural networks have been used prominently in several machine learning and statistics applications. In general, the underlying optimization of neural networks is non-convex, which makes analyzing their performance challenging. In this paper, we take a different approach to this problem by constraining the network so that the corresponding optimization landscape has good theoretical properties without significantly compromising performance. In particular, for two-layer neural networks we introduce Porcupine Neural Networks (PNNs), whose weight vectors are constrained to lie on a finite set of lines. We show that most local optima of the PNN optimization are global, and we characterize the regions where bad local optima may exist. Moreover, our theoretical and empirical results suggest that an unconstrained neural network can be approximated using a polynomially large PNN.
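
As a rough illustration of the constraint described in the abstract (a minimal sketch, not the authors' implementation), the code below parameterizes each hidden neuron's weight vector as a signed scalar along a fixed unit direction, so every weight vector lies on one of a finite set of lines. The direction matrix U, the one-line-per-neuron setup, and the unit-weight output layer are simplifying assumptions made here for illustration.

import numpy as np

rng = np.random.default_rng(0)

d, k = 10, 5  # input dimension, number of hidden neurons

# Fixed set of lines: one unit direction per hidden neuron.
# (Assumption: one line per neuron; neurons could also share lines.)
U = rng.standard_normal((k, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)

# Trainable parameters: one signed scalar per neuron, so w_i = alpha_i * u_i.
# Training alpha instead of the full weight matrix enforces the PNN constraint.
alpha = rng.standard_normal(k)

def pnn_forward(x, alpha, U):
    """Two-layer ReLU network whose weight vectors lie on fixed lines:
    f(x) = sum_i relu(<alpha_i * u_i, x>), with unit output weights (assumed)."""
    W = alpha[:, None] * U          # each row of W stays on its line
    return np.maximum(W @ x, 0.0).sum()

x = rng.standard_normal(d)
print(pnn_forward(x, alpha, U))

Because only the scalars alpha are trained, the optimization is over a much smaller, more structured parameter space than the unconstrained network, which is what gives the PNN landscape its favorable properties.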

Cite

Text

Feizi et al. "Porcupine Neural Networks: Approximating Neural Network Landscapes." Neural Information Processing Systems, 2018.

Markdown

[Feizi et al. "Porcupine Neural Networks: Approximating Neural Network Landscapes." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/feizi2018neurips-porcupine/)

BibTeX

@inproceedings{feizi2018neurips-porcupine,
  title     = {{Porcupine Neural Networks: Approximating Neural Network Landscapes}},
  author    = {Feizi, Soheil and Javadi, Hamid and Zhang, Jesse and Tse, David},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {4831--4841},
  url       = {https://mlanthology.org/neurips/2018/feizi2018neurips-porcupine/}
}