ModelDiff: A Framework for Comparing Learning Algorithms
Abstract
We study the problem of (learning) algorithm comparison, where the goal is to find differences between models trained with two different learning algorithms. We begin by formalizing this goal as one of finding distinguishing feature transformations, i.e., input transformations that change the predictions of models trained with one learning algorithm but not the other. We then present ModelDiff, a method that leverages the datamodels framework (Ilyas et al., 2022) to compare learning algorithms based on how they use their training data. We demonstrate ModelDiff through three case studies, comparing models trained with/without data augmentation, with/without pre-training, and with different SGD hyperparameters.
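The abstract does not spell out the mechanics, but the core idea of comparing learning algorithms through their training-data attributions can be sketched. In the sketch below, `weights_a` and `weights_b` are hypothetical datamodel weight matrices (one attribution vector per test example, one entry per training example) for two learning algorithms; the per-row projection residual is a stand-in, not the paper's exact procedure, for isolating how one algorithm uses the training data differently from the other.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical datamodel weights: rows index test examples, columns index
# training examples. One matrix per learning algorithm being compared.
n_test, n_train = 5, 100
weights_a = rng.normal(size=(n_test, n_train))  # e.g., trained with augmentation
weights_b = rng.normal(size=(n_test, n_train))  # e.g., trained without

def residual(w_a, w_b):
    """Per-row residual of w_b after projecting onto the matching row of w_a.

    What remains is the component of algorithm B's training-data usage
    not explained by algorithm A's.
    """
    scale = (w_b * w_a).sum(axis=1, keepdims=True) / (w_a * w_a).sum(axis=1, keepdims=True)
    return w_b - scale * w_a

res = residual(weights_a, weights_b)

# Training examples with the largest average residual magnitude are
# candidates for where the two algorithms diverge.
top_diverging = np.argsort(-np.abs(res).mean(axis=0))[:10]
print(top_diverging)
```

By construction each residual row is orthogonal to the corresponding attribution row of algorithm A, so the ranking surfaces training examples that matter to B in a way that A's attributions cannot account for.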
Cite
Text
Shah et al. "ModelDiff: A Framework for Comparing Learning Algorithms." International Conference on Machine Learning, 2023.
Markdown
[Shah et al. "ModelDiff: A Framework for Comparing Learning Algorithms." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/shah2023icml-modeldiff/)
BibTeX
@inproceedings{shah2023icml-modeldiff,
title = {{ModelDiff: A Framework for Comparing Learning Algorithms}},
author = {Shah, Harshay and Park, Sung Min and Ilyas, Andrew and Madry, Aleksander},
booktitle = {International Conference on Machine Learning},
year = {2023},
pages = {30646--30688},
volume = {202},
url = {https://mlanthology.org/icml/2023/shah2023icml-modeldiff/}
}