Machine Unlearning: Linear Filtration for Logit-Based Classifiers

Abstract

Recently enacted legislation grants individuals certain rights to decide in what fashion their personal data may be used, and in particular a “right to be forgotten”. This poses a challenge to machine learning: how should one proceed when an individual retracts permission to use data that has been part of a model’s training process? From this question emerges the field of machine unlearning, which could be broadly described as the investigation of how to “delete training data from models”. Our work complements this direction of research for the specific setting of class-wide deletion requests for classification models (e.g., deep neural networks). As a first step, we propose linear filtration as an intuitive, computationally efficient sanitization method. Our experiments demonstrate benefits in an adversarial setting over naive deletion schemes.
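To make the idea concrete, here is a minimal sketch of filtering a class out of a logit-based classifier via a linear map. This illustrates only the simplest "naive" variant (a selection matrix that drops the deleted class's logit before the softmax); the paper's full method constructs a richer linear filtration, and all names and values below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def deletion_filter(num_classes, deleted_class):
    """Linear map A of shape (k-1, k) that drops the deleted
    class's logit -- the naive filtration baseline."""
    keep = [c for c in range(num_classes) if c != deleted_class]
    return np.eye(num_classes)[keep]

# Hypothetical logits from a 4-class model; class 0 is to be unlearned.
logits = np.array([2.0, 0.5, -1.0, 0.3])
A = deletion_filter(4, deleted_class=0)
filtered = softmax(A @ logits)  # distribution over the 3 remaining classes
```

Because the filter is a single matrix, it can be folded into the model's final linear layer, which is what makes this family of sanitization methods computationally cheap compared with retraining.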

Cite

Text

Baumhauer et al. "Machine Unlearning: Linear Filtration for Logit-Based Classifiers." Machine Learning, 2022. doi:10.1007/s10994-022-06178-9

Markdown

[Baumhauer et al. "Machine Unlearning: Linear Filtration for Logit-Based Classifiers." Machine Learning, 2022.](https://mlanthology.org/mlj/2022/baumhauer2022mlj-machine/) doi:10.1007/s10994-022-06178-9

BibTeX

@article{baumhauer2022mlj-machine,
  title     = {{Machine Unlearning: Linear Filtration for Logit-Based Classifiers}},
  author    = {Baumhauer, Thomas and Schöttle, Pascal and Zeppelzauer, Matthias},
  journal   = {Machine Learning},
  year      = {2022},
  pages     = {3203--3226},
  doi       = {10.1007/s10994-022-06178-9},
  volume    = {111},
  url       = {https://mlanthology.org/mlj/2022/baumhauer2022mlj-machine/}
}