Provable Guarantees for Neural Networks via Gradient Feature Learning

Abstract

Neural networks have achieved remarkable empirical performance, yet current theoretical analysis is inadequate for understanding their success: the Neural Tangent Kernel approach fails to capture their key feature learning ability, and recent analyses of feature learning are typically problem-specific. This work proposes a unified analysis framework for two-layer networks trained by gradient descent. The framework is centered around the principle of feature learning from gradients, and its effectiveness is demonstrated by applications to several prototypical problems, such as mixtures of Gaussians and parity functions. The framework also sheds light on interesting network learning phenomena such as feature learning beyond kernels and the lottery ticket hypothesis.
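
To make the setting concrete, here is a minimal sketch (not the paper's framework or proofs) of the kind of problem the abstract refers to: a two-layer ReLU network trained by full-batch gradient descent on a sparse-parity task. All choices below (dimension `d`, sparsity `k`, width, step size, number of steps) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n, width, lr, steps = 30, 2, 2000, 128, 0.5, 300

# Data: x uniform over {-1, +1}^d; the label is the parity of the first k coordinates.
X = rng.choice([-1.0, 1.0], size=(n, d))
y = np.prod(X[:, :k], axis=1)                       # targets in {-1, +1}

# Two-layer network f(x) = a^T relu(W x); both layers are trained by gradient descent.
W = rng.normal(scale=1.0 / np.sqrt(d), size=(width, d))
a = rng.normal(scale=1.0 / np.sqrt(width), size=width)

for t in range(steps):
    pre = X @ W.T                                   # (n, width) pre-activations
    h = np.maximum(pre, 0.0)                        # ReLU features
    err = h @ a - y                                 # squared-loss residuals
    # Gradients of the mean squared loss with respect to a and W.
    grad_a = h.T @ err / n
    grad_W = (err[:, None] * (pre > 0.0) * a).T @ X / n
    a -= lr * grad_a
    W -= lr * grad_W

f_final = np.maximum(X @ W.T, 0.0) @ a
print(f"training accuracy after {steps} steps: {np.mean(np.sign(f_final) == y):.3f}")
```

The point of the sketch is only to show the training dynamics being analyzed, i.e., hidden-layer weights updated by gradients of the loss; the paper's contribution is a framework for proving when such updates produce useful features.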

Cite

Text

Shi et al. "Provable Guarantees for Neural Networks via Gradient Feature Learning." Neural Information Processing Systems, 2023.

Markdown

[Shi et al. "Provable Guarantees for Neural Networks via Gradient Feature Learning." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/shi2023neurips-provable/)

BibTeX

@inproceedings{shi2023neurips-provable,
  title     = {{Provable Guarantees for Neural Networks via Gradient Feature Learning}},
  author    = {Shi, Zhenmei and Wei, Junyi and Liang, Yingyu},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/shi2023neurips-provable/}
}