Efficient Sketches for Training Data Attribution and Studying the Loss Landscape

Abstract

The study of modern machine learning models often necessitates storing vast quantities of gradients or Hessian vector products (HVPs). Traditional sketching methods struggle to scale under these memory constraints. We present a novel framework for scalable gradient and HVP sketching, tailored for modern hardware. We provide theoretical guarantees and demonstrate the power of our methods in applications like training data attribution, Hessian spectrum analysis, and intrinsic dimension computation for pre-trained language models. Our work sheds new light on the behavior of pre-trained language models, challenging assumptions about their intrinsic dimensionality and Hessian properties.
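The core idea the abstract alludes to, compressing high-dimensional gradients so that many of them can be stored and compared cheaply, can be illustrated with a generic Johnson–Lindenstrauss random-projection sketch. This is a minimal sketch for intuition only, not the paper's specific construction; the dimensions, the dense Gaussian projection, and the variable names are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 10_000  # toy stand-in for the number of model parameters
k = 64      # sketch dimension: memory per stored gradient drops from d to k

# Dense Gaussian projection with entries N(0, 1/k), so that
# E[||S @ g||^2] = ||g||^2 and inner products are preserved in expectation.
# (Illustrative only; a scalable method would use a structured/implicit S.)
S = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))

g = rng.normal(size=d)  # stand-in for a per-example gradient or HVP
sketch = S @ g          # the k-dimensional summary that gets stored

# The sketch approximately preserves the gradient's norm, which is what
# downstream uses such as similarity-based attribution scores rely on.
ratio = np.linalg.norm(sketch) / np.linalg.norm(g)
print(sketch.shape, round(float(ratio), 3))
```

Storing sketches instead of full gradients reduces memory by a factor of roughly `d / k`, at the cost of a controllable distortion in norms and inner products.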

Cite

Text

Schioppa. "Efficient Sketches for Training Data Attribution and Studying the Loss Landscape." Neural Information Processing Systems, 2024. doi:10.52202/079017-1190

Markdown

[Schioppa. "Efficient Sketches for Training Data Attribution and Studying the Loss Landscape." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/schioppa2024neurips-efficient/) doi:10.52202/079017-1190

BibTeX

@inproceedings{schioppa2024neurips-efficient,
  title     = {{Efficient Sketches for Training Data Attribution and Studying the Loss Landscape}},
  author    = {Schioppa, Andrea},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-1190},
  url       = {https://mlanthology.org/neurips/2024/schioppa2024neurips-efficient/}
}