Tuning Frequency Bias in Neural Network Training with Nonuniform Data

Abstract

Small generalization errors of over-parameterized neural networks (NNs) can be partially explained by the frequency biasing phenomenon, where gradient-based algorithms minimize the low-frequency misfit before reducing the high-frequency residuals. Using the Neural Tangent Kernel (NTK), one can give a rigorous theoretical analysis of training when the data are drawn from constant or piecewise-constant probability densities. Since most training data sets are not drawn from such distributions, we use the NTK model and a data-dependent quadrature rule to theoretically quantify the frequency biasing of NN training given fully nonuniform data. By replacing the loss function with a carefully selected Sobolev norm, we can further amplify, dampen, counterbalance, or even reverse the intrinsic frequency biasing in NN training.
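
To make the Sobolev-norm idea concrete, below is a minimal sketch (not the authors' code) of how an H^s-type loss re-weights the frequency content of the training residual relative to the usual L2 loss. It assumes a uniform grid for simplicity, whereas the paper treats nonuniform data via a data-dependent quadrature rule; the function name sobolev_loss and the smoothness exponent s are illustrative.

import numpy as np

def sobolev_loss(residual, s):
    """H^s-type loss on a uniform grid: weight the Fourier coefficients
    of the residual by (1 + |k|^2)^s before summing squared magnitudes.
    s = 0 recovers the ordinary L2 loss."""
    n = residual.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)       # integer wavenumbers 0, 1, ..., -1
    coeffs = np.fft.fft(residual) / n      # normalized Fourier coefficients
    weights = (1.0 + np.abs(k) ** 2) ** s  # Sobolev weight per frequency
    return np.sum(weights * np.abs(coeffs) ** 2)

# Example: a pure wavenumber-20 residual is penalized more heavily as s grows.
x = np.linspace(0.0, 1.0, 256, endpoint=False)
r = np.sin(2 * np.pi * 20 * x)
for s in (-1.0, 0.0, 1.0):
    print(f"s = {s:+.1f}: loss = {sobolev_loss(r, s):.4g}")

With s > 0 the weights grow with |k|, so gradient descent on this loss is pushed to reduce high-frequency residuals earlier, counteracting the intrinsic frequency bias; with s < 0 the low-frequency bias is reinforced instead.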

Cite

Text

Yu et al. "Tuning Frequency Bias in Neural Network Training with Nonuniform Data." International Conference on Learning Representations, 2023.

Markdown

[Yu et al. "Tuning Frequency Bias in Neural Network Training with Nonuniform Data." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/yu2023iclr-tuning/)

BibTeX

@inproceedings{yu2023iclr-tuning,
  title     = {{Tuning Frequency Bias in Neural Network Training with Nonuniform Data}},
  author    = {Yu, Annan and Yang, Yunan and Townsend, Alex},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/yu2023iclr-tuning/}
}