Sketchy Moment Matching: Toward Fast and Provable Data Selection for Finetuning
Abstract
We revisit data selection in a modern context of finetuning from a fundamental perspective. Extending the classical wisdom of variance minimization in low dimensions to high-dimensional finetuning, our generalization analysis unveils the importance of additionally reducing bias induced by low-rank approximation. Inspired by the variance-bias tradeoff in high dimensions from the theory, we introduce Sketchy Moment Matching (SkMM), a scalable data selection scheme with two stages. (i) First, the bias is controlled using gradient sketching that explores the finetuning parameter space for an informative low-dimensional subspace $\mathcal{S}$; (ii) then the variance is reduced over $\mathcal{S}$ via moment matching between the original and selected datasets. Theoretically, we show that gradient sketching is fast and provably accurate: selecting $n$ samples by reducing variance over $\mathcal{S}$ preserves the fast-rate generalization $O(\dim(\mathcal{S})/n)$, independent of the parameter dimension. Empirically, we concretize the variance-bias balance via synthetic experiments and demonstrate the effectiveness of SkMM for finetuning in real vision tasks.
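To make the two-stage procedure concrete, below is a minimal illustrative sketch in Python. It is not the authors' implementation: the sketching matrix, the greedy selection rule (which here matches only the first moment of the sketched gradients for brevity, whereas the paper's moment-matching objective may involve higher-order statistics), and all function and variable names are assumptions made for illustration.

```python
import numpy as np

def sketchy_moment_matching(grads, n_select, sketch_dim, seed=0):
    """Illustrative two-stage selection loosely following the abstract.

    grads: (N, p) per-sample gradient features from the finetuning model.
    n_select: number of samples to keep.
    sketch_dim: target dimension of the sketched subspace S.
    """
    assert n_select <= grads.shape[0]
    rng = np.random.default_rng(seed)
    N, p = grads.shape

    # Stage (i): gradient sketching -- project per-sample gradients onto a
    # random low-dimensional subspace (Gaussian sketch), which is meant to
    # control the bias induced by the low-rank approximation.
    sketch = rng.standard_normal((p, sketch_dim)) / np.sqrt(sketch_dim)
    sketched = grads @ sketch                      # shape (N, sketch_dim)

    # Stage (ii): moment matching -- greedily pick samples whose running
    # mean of sketched gradients stays close to the full-data mean, as a
    # stand-in for reducing variance over the subspace S.
    target_mean = sketched.mean(axis=0)
    selected, running_sum = [], np.zeros(sketch_dim)
    remaining = set(range(N))
    for t in range(1, n_select + 1):
        best_i, best_err = None, np.inf
        for i in remaining:
            err = np.linalg.norm((running_sum + sketched[i]) / t - target_mean)
            if err < best_err:
                best_i, best_err = i, err
        selected.append(best_i)
        running_sum += sketched[best_i]
        remaining.remove(best_i)
    return np.array(selected)

# Example usage with synthetic gradients (hypothetical data).
if __name__ == "__main__":
    G = np.random.default_rng(1).standard_normal((500, 1024))
    idx = sketchy_moment_matching(G, n_select=50, sketch_dim=32)
    print(idx[:10])
```

In this toy version, the sketch keeps selection cost independent of the full parameter dimension p, which mirrors the abstract's claim that the generalization rate O(dim(S)/n) does not depend on p; the greedy mean-matching loop is only one simple way to instantiate the moment-matching stage.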
Cite
Text
Dong et al. "Sketchy Moment Matching: Toward Fast and Provable Data Selection for Finetuning." Neural Information Processing Systems, 2024. doi:10.52202/079017-1374
Markdown
[Dong et al. "Sketchy Moment Matching: Toward Fast and Provable Data Selection for Finetuning." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/dong2024neurips-sketchy/) doi:10.52202/079017-1374
BibTeX
@inproceedings{dong2024neurips-sketchy,
title = {{Sketchy Moment Matching: Toward Fast and Provable Data Selection for Finetuning}},
author = {Dong, Yijun and Phan, Hoang and Pan, Xiang and Lei, Qi},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-1374},
url = {https://mlanthology.org/neurips/2024/dong2024neurips-sketchy/}
}