Tight Conditions for When the NTK Approximation Is Valid

Abstract

We study when the neural tangent kernel (NTK) approximation is valid for training a model with the square loss. In the lazy training setting of Chizat et al. (2019), we show that rescaling the model by a factor of $\alpha = O(T)$ suffices for the NTK approximation to be valid until training time $T$. Our bound is tight and improves on the previous bound of Chizat et al. (2019), which required a larger rescaling factor of $\alpha = O(T^2)$.
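For readers unfamiliar with the setting, the following is a minimal sketch of the lazy-training objective and its NTK linearization, using standard notation from Chizat et al. (2019) under the usual assumptions of that setting (gradient flow, square loss, unbiased initialization $f(\theta_0) = 0$); it is an illustration of the setup, not an excerpt from the paper. The rescaled model $\alpha f(\theta)$ is trained by gradient flow on the normalized square loss,
\[
\min_{\theta}\ \frac{1}{2\alpha^{2}}\,\bigl\|\alpha f(\theta) - y\bigr\|^{2},
\qquad
\dot{\theta}(t) \;=\; -\,\nabla_{\theta}\,\frac{1}{2\alpha^{2}}\,\bigl\|\alpha f(\theta(t)) - y\bigr\|^{2},
\]
while the NTK approximation replaces $f$ by its first-order expansion at initialization,
\[
f^{\mathrm{lin}}(\theta) \;=\; f(\theta_{0}) + \nabla f(\theta_{0})^{\top}(\theta - \theta_{0}).
\]
In these terms, the paper's result says that a rescaling factor $\alpha$ growing like $T$ (rather than $T^{2}$, as previously required) suffices for the trajectories of $\alpha f(\theta(t))$ and $\alpha f^{\mathrm{lin}}(\theta(t))$ to remain close for all $t \le T$.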

Cite

Text

Boix-Adserà and Littwin. "Tight Conditions for When the NTK Approximation Is Valid." Transactions on Machine Learning Research, 2023.

Markdown

[Boix-Adserà and Littwin. "Tight Conditions for When the NTK Approximation Is Valid." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/boixadsera2023tmlr-tight/)

BibTeX

@article{boixadsera2023tmlr-tight,
  title     = {{Tight Conditions for When the NTK Approximation Is Valid}},
  author    = {Boix-Adserà, Enric and Littwin, Etai},
  journal   = {Transactions on Machine Learning Research},
  year      = {2023},
  url       = {https://mlanthology.org/tmlr/2023/boixadsera2023tmlr-tight/}
}