Analyzing Generalization of Neural Networks Through Loss Path Kernels
Abstract
Deep neural networks have been increasingly used in real-world applications, making it critical to ensure their ability to generalize to new, unseen data. In this paper, we study the generalization capability of neural networks trained with (stochastic) gradient flow. We establish a new connection between the loss dynamics of gradient flow and general kernel machines by proposing a new kernel, called the loss path kernel. This kernel measures the similarity between two data points by evaluating the agreement between their loss gradients along the path determined by the gradient flow. Based on this connection, we derive a new generalization upper bound that applies to general neural network architectures. This new bound is tight and strongly correlated with the true generalization error. We apply our results to guide the design of neural architecture search (NAS) and demonstrate favorable performance compared with state-of-the-art NAS algorithms through numerical experiments.
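A minimal sketch of the kernel idea described above: the loss path kernel accumulates inner products of per-example loss gradients along the optimization trajectory. The discretization below (plain gradient descent on a squared-error linear model as a stand-in for gradient flow) and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def per_example_loss_grad(w, x, y):
    # Gradient of the squared-error loss l(w; x, y) = 0.5 * (w @ x - y)**2
    # with respect to w; the linear model here is only for illustration.
    return (w @ x - y) * x

def loss_path_kernel(X, Y, w0, lr=0.01, steps=100):
    """Approximate K(z, z') = integral over the training path of
    <grad_w l(w(t); z), grad_w l(w(t); z')> dt by a Riemann sum over
    discrete gradient-descent iterates (a stand-in for gradient flow)."""
    n = len(X)
    K = np.zeros((n, n))
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(steps):
        # Stack per-example gradients at the current point on the path.
        G = np.stack([per_example_loss_grad(w, X[i], Y[i]) for i in range(n)])
        K += lr * (G @ G.T)          # accumulate gradient agreements
        w -= lr * G.mean(axis=0)     # one descent step on the average loss
    return K
```

Since each accumulated term `lr * (G @ G.T)` is a Gram matrix, the resulting kernel matrix is symmetric positive semidefinite, as a valid kernel requires.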
Cite
Text
Chen et al. "Analyzing Generalization of Neural Networks Through Loss Path Kernels." Neural Information Processing Systems, 2023.

Markdown

[Chen et al. "Analyzing Generalization of Neural Networks Through Loss Path Kernels." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/chen2023neurips-analyzing/)

BibTeX
@inproceedings{chen2023neurips-analyzing,
title = {{Analyzing Generalization of Neural Networks Through Loss Path Kernels}},
author = {Chen, Yilan and Huang, Wei and Wang, Hao and Loh, Charlotte and Srivastava, Akash and Nguyen, Lam and Weng, Lily},
booktitle = {Neural Information Processing Systems},
year = {2023},
url = {https://mlanthology.org/neurips/2023/chen2023neurips-analyzing/}
}