Behind the Scenes of Gradient Descent: A Trajectory Analysis via Basis Function Decomposition
Abstract
This work analyzes the solution trajectory of gradient-based algorithms via a novel basis function decomposition. We show that, although solution trajectories of gradient-based algorithms may vary depending on the learning task, they behave almost monotonically when projected onto an appropriate orthonormal function basis. Such projection gives rise to a basis function decomposition of the solution trajectory. Theoretically, we use our proposed basis function decomposition to establish the convergence of gradient descent (GD) on several representative learning tasks. In particular, we improve the convergence of GD on symmetric matrix factorization and provide a completely new convergence result for the orthogonal symmetric tensor decomposition. Empirically, we illustrate the promise of our proposed framework on realistic deep neural networks (DNNs) across different architectures, gradient-based solvers, and datasets. Our key finding is that gradient-based algorithms monotonically learn the coefficients of a particular orthonormal function basis of DNNs defined as the eigenvectors of the conjugate kernel after training.
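Below is a minimal, hypothetical sketch (not the authors' code) of the idea described in the abstract: take a gradient-descent prediction trajectory on the training set, compute the conjugate kernel from the final-layer features after training, and project each checkpoint onto the kernel's eigenvectors to track how the coefficient of each basis function evolves. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def conjugate_kernel(features):
    """Conjugate kernel K = Phi Phi^T, where Phi stacks the last-layer
    (post-activation) features of the n training inputs row-wise."""
    return features @ features.T

def trajectory_coefficients(predictions_over_time, final_features):
    """predictions_over_time: (T, n) array of network outputs on the n
    training points at each of T checkpoints.
    final_features: (n, d) last-layer features after training.
    Returns a (T, n) array: the coefficient of each eigenvector of the
    final conjugate kernel at every checkpoint."""
    K = conjugate_kernel(final_features)     # (n, n) kernel after training
    _, eigvecs = np.linalg.eigh(K)           # orthonormal eigenbasis (columns)
    return predictions_over_time @ eigvecs   # project each checkpoint onto the basis

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, n, d = 50, 20, 8                                   # checkpoints, samples, feature width
    preds = np.cumsum(rng.normal(size=(T, n)), axis=0) * 1e-2  # stand-in trajectory
    phi = rng.normal(size=(n, d))                         # stand-in final-layer features
    coeffs = trajectory_coefficients(preds, phi)
    print(coeffs.shape)                                   # (50, 20): one coefficient curve per basis function
```

Under the paper's claim, each column of `coeffs` would evolve almost monotonically over the checkpoints, even when the raw trajectory looks irregular.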
Cite

Text

Ma et al. "Behind the Scenes of Gradient Descent: A Trajectory Analysis via Basis Function Decomposition." International Conference on Learning Representations, 2023.

Markdown

[Ma et al. "Behind the Scenes of Gradient Descent: A Trajectory Analysis via Basis Function Decomposition." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/ma2023iclr-behind/)

BibTeX
@inproceedings{ma2023iclr-behind,
  title     = {{Behind the Scenes of Gradient Descent: A Trajectory Analysis via Basis Function Decomposition}},
  author    = {Ma, Jianhao and Guo, Lingjun and Fattahi, Salar},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/ma2023iclr-behind/}
}