Invariant Representations Through Adversarial Forgetting

Abstract

We propose a novel approach to learning invariant representations in deep neural networks by inducing amnesia toward unwanted factors of the data through a new adversarial forgetting mechanism. We show that the forgetting mechanism serves as an information bottleneck, which adversarial training manipulates to learn invariance to unwanted factors. Empirical results show that the proposed framework achieves state-of-the-art performance at learning invariance in both nuisance and bias settings on a diverse collection of datasets and tasks.
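The mechanism described in the abstract can be sketched in a few lines. The following is a minimal, hypothetical PyTorch illustration (module names and dimensions are ours, not the authors' code): an encoder produces features, a forget gate produces a soft mask that acts as the bottleneck, and a discriminator tries to recover the unwanted factor from the masked representation; a gradient-reversal layer turns that into adversarial training of the gate.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Gradient-reversal layer: identity on the forward pass, negated
    # gradient on the backward pass, so minimizing the discriminator's
    # loss pushes the gate to *remove* information about the factor s.
    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

# Toy dimensions (assumptions for illustration only).
x_dim, z_dim, n_classes, n_nuisance = 8, 4, 3, 2

encoder = nn.Linear(x_dim, z_dim)
forget_gate = nn.Sequential(nn.Linear(x_dim, z_dim), nn.Sigmoid())
predictor = nn.Linear(z_dim, n_classes)        # main task head
discriminator = nn.Linear(z_dim, n_nuisance)   # adversary for factor s

x = torch.randn(16, x_dim)                     # inputs
y = torch.randint(0, n_classes, (16,))         # task labels
s = torch.randint(0, n_nuisance, (16,))        # unwanted factor

e = encoder(x)
m = forget_gate(x)   # soft mask in (0, 1): the information bottleneck
z = e * m            # "forgotten" (masked) representation

task_loss = nn.functional.cross_entropy(predictor(z), y)
adv_loss = nn.functional.cross_entropy(
    discriminator(GradReverse.apply(z)), s)
loss = task_loss + adv_loss  # trade-off weight omitted for brevity
loss.backward()
```

Entries of the mask near zero zero out the corresponding feature dimensions, which is how the gate "forgets" the factors the adversary can otherwise exploit.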

Cite

Text

Jaiswal et al. "Invariant Representations Through Adversarial Forgetting." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/aaai.v34i04.5850

Markdown

[Jaiswal et al. "Invariant Representations Through Adversarial Forgetting." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/jaiswal2020aaai-invariant/) doi:10.1609/aaai.v34i04.5850

BibTeX

@inproceedings{jaiswal2020aaai-invariant,
  title     = {{Invariant Representations Through Adversarial Forgetting}},
  author    = {Jaiswal, Ayush and Moyer, Daniel and Ver Steeg, Greg and AbdAlmageed, Wael and Natarajan, Premkumar},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2020},
  pages     = {4272--4279},
  doi       = {10.1609/aaai.v34i04.5850},
  url       = {https://mlanthology.org/aaai/2020/jaiswal2020aaai-invariant/}
}