Scaling Gaussian Processes for Learning Curve Prediction via Latent Kronecker Structure
Abstract
A key task in AutoML is to model learning curves of machine learning models jointly as a function of model hyper-parameters and training progression. While Gaussian processes (GPs) are suitable for this task, naive GPs require $\mathcal{O}(n^3m^3)$ time and $\mathcal{O}(n^2 m^2)$ space for $n$ hyper-parameter configurations and $\mathcal{O}(m)$ learning curve observations per hyper-parameter. Efficient inference via Kronecker structure is typically incompatible with early-stopping due to missing learning curve values. We impose $\textit{latent Kronecker structure}$ to leverage efficient product kernels while handling missing values. In particular, we interpret the joint covariance matrix of observed values as the projection of a latent Kronecker product. Combined with iterative linear solvers and structured matrix-vector multiplication, our method only requires $\mathcal{O}(n^3 + m^3)$ time and $\mathcal{O}(n^2 + m^2)$ space. We show that our GP model can match the performance of a Transformer on a learning curve prediction task.
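To make the core computational idea concrete, below is a minimal NumPy sketch (not the authors' implementation) of a matrix-vector product with the projected Kronecker covariance $P (K_x \otimes K_t) P^\top$, where $P$ selects the observed learning curve values. The function names `kron_mv` and `latent_kron_mv` and the `obs_mask` argument are illustrative assumptions, not names from the paper.

```python
import numpy as np

def kron_mv(K_x, K_t, v):
    """Compute (K_x ⊗ K_t) v without forming the Kronecker product.

    Uses the identity (A ⊗ B) vec(X) = vec(B X A^T) with column-major vec,
    so only the n x n and m x m factor matrices are ever stored.
    """
    n, m = K_x.shape[0], K_t.shape[0]
    V = v.reshape(m, n, order="F")         # un-vec: columns index hyper-parameter configs
    return (K_t @ V @ K_x.T).reshape(-1, order="F")

def latent_kron_mv(K_x, K_t, v_obs, obs_mask):
    """MVM with the projected covariance P (K_x ⊗ K_t) P^T.

    obs_mask is a boolean vector of length n*m (column-major over the
    (epoch, config) grid) marking which learning curve values were observed.
    Missing entries are zero-padded before the Kronecker MVM (P^T v) and
    dropped afterwards (left-multiplication by P).
    """
    v_full = np.zeros(obs_mask.size)
    v_full[obs_mask] = v_obs               # P^T v
    out_full = kron_mv(K_x, K_t, v_full)   # (K_x ⊗ K_t) P^T v
    return out_full[obs_mask]              # P (K_x ⊗ K_t) P^T v
```

Passed to an iterative solver such as conjugate gradients, an MVM routine of this form avoids ever materializing the $nm \times nm$ joint covariance, keeping storage at $\mathcal{O}(n^2 + m^2)$ as stated in the abstract.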
Cite
Text
Lin et al. "Scaling Gaussian Processes for Learning Curve Prediction via Latent Kronecker Structure." NeurIPS 2024 Workshops: BDU, 2024.
Markdown
[Lin et al. "Scaling Gaussian Processes for Learning Curve Prediction via Latent Kronecker Structure." NeurIPS 2024 Workshops: BDU, 2024.](https://mlanthology.org/neuripsw/2024/lin2024neuripsw-scaling/)
BibTeX
@inproceedings{lin2024neuripsw-scaling,
title = {{Scaling Gaussian Processes for Learning Curve Prediction via Latent Kronecker Structure}},
author = {Lin, Jihao Andreas and Ament, Sebastian and Balandat, Maximilian and Bakshy, Eytan},
booktitle = {NeurIPS 2024 Workshops: BDU},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/lin2024neuripsw-scaling/}
}