Estimating Informativeness of Samples with Smooth Unique Information

Abstract

We define a notion of the information that an individual sample provides to the training of a neural network, and we specialize it to measure both how much a sample informs the final weights and how much it informs the function computed by those weights. Though related, we show that these two quantities behave qualitatively differently. We give efficient approximations of these quantities using a linearized network and demonstrate empirically that the approximation is accurate for real-world architectures, such as pre-trained ResNets. We apply these measures to several problems, such as dataset summarization, analysis of under-sampled classes, comparison of the informativeness of different data sources, and detection of adversarial and corrupted examples. Our work generalizes existing frameworks but enjoys better computational properties for heavily over-parametrized models, which makes it applicable to real-world networks.
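
To make the linearization idea in the abstract concrete, below is a minimal PyTorch sketch of a first-order Taylor expansion of a network around its pre-trained weights, i.e., a linearized network. All names here (`f_linearized`, `params0`, the tiny model and its sizes) are hypothetical illustrations, not the paper's implementation; the paper builds its efficient information estimators on top of this kind of linearization.

```python
import torch
from torch import nn
from torch.func import functional_call, jacrev

# Hypothetical tiny regression model standing in for a pre-trained network.
model = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 1))
params0 = {k: v.detach().clone() for k, v in model.named_parameters()}

def f(params, x):
    # Evaluate the model with an explicit set of parameters.
    return functional_call(model, params, (x,))

def f_linearized(params, x):
    # First-order Taylor expansion of f around params0:
    #   f_lin(x; w) = f(x; w0) + J(x; w0) (w - w0)
    out0 = f(params0, x)
    jac = jacrev(f, argnums=0)(params0, x)  # per-parameter Jacobians at w0
    delta = {k: params[k] - params0[k] for k in params0}
    correction = sum(
        jac[k].flatten(start_dim=out0.dim()) @ delta[k].flatten()
        for k in params0
    )
    return out0 + correction

# Sanity check: for a small weight perturbation, the linearized model
# should closely track the true model.
x = torch.randn(5, 4)
params = {k: v + 1e-3 * torch.randn_like(v) for k, v in params0.items()}
print(torch.allclose(f(params, x), f_linearized(params, x), atol=1e-4))
```

The appeal of this approximation is that `f_linearized` is linear in the weights, so the effect of removing or perturbing a training sample can be analyzed cheaply, which is what makes such estimators tractable for heavily over-parametrized models.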

Cite

Text

Harutyunyan et al. "Estimating Informativeness of Samples with Smooth Unique Information." International Conference on Learning Representations, 2021.

Markdown

[Harutyunyan et al. "Estimating Informativeness of Samples with Smooth Unique Information." International Conference on Learning Representations, 2021.](https://mlanthology.org/iclr/2021/harutyunyan2021iclr-estimating/)

BibTeX

@inproceedings{harutyunyan2021iclr-estimating,
  title     = {{Estimating Informativeness of Samples with Smooth Unique Information}},
  author    = {Harutyunyan, Hrayr and Achille, Alessandro and Paolini, Giovanni and Majumder, Orchid and Ravichandran, Avinash and Bhotika, Rahul and Soatto, Stefano},
  booktitle = {International Conference on Learning Representations},
  year      = {2021},
  url       = {https://mlanthology.org/iclr/2021/harutyunyan2021iclr-estimating/}
}