Adversary Is the Best Teacher: Towards Extremely Compact Neural Networks
Abstract
With neural networks rapidly becoming deeper, there emerges a need for compact models. One popular approach is to train small student networks to mimic larger and deeper teacher models, rather than to learn directly from the training data. We propose a novel technique to train student-teacher networks without directly providing label information to the student. Our main contribution, however, is learning how to learn from the teacher by a unique strategy: having the student compete with a discriminator.
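The core idea the abstract describes can be sketched as follows. This is a minimal NumPy illustration, not the authors' architecture: the teacher, student, and discriminator are all reduced to linear/logistic maps for brevity, and every name and hyperparameter here is an illustrative assumption. The discriminator tries to tell teacher outputs from student outputs; the student is updated only to fool the discriminator, so no ground-truth labels ever reach it.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_in, k = 256, 8, 4

# Frozen "teacher": stands in for a large pretrained network's logit map.
Wt = rng.normal(size=(d_in, k))
# "Student": a linear map here purely for brevity; in the paper's setting
# this would be a much smaller network mimicking the teacher.
Ws = np.zeros((d_in, k))
# Logistic-regression discriminator over logits: teacher (1) vs. student (0).
w = 0.1 * rng.normal(size=k)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    x = rng.normal(size=(n, d_in))
    t_logits = x @ Wt          # teacher outputs (real samples for D)
    s_logits = x @ Ws          # student outputs (fake samples for D)

    # Discriminator step: learn to separate teacher logits from student logits.
    z = np.concatenate([t_logits, s_logits])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    p = sigmoid(z @ w + b)
    g = (p - y) / (2 * n)      # binary cross-entropy gradient wrt pre-sigmoid
    w -= 0.5 * (z.T @ g)
    b -= 0.5 * g.sum()

    # Student step: fool the discriminator (push D(student) toward 1).
    # Note: no ground-truth labels are used anywhere in this update.
    ps = sigmoid((x @ Ws) @ w + b)
    gs = ((ps - 1.0) / n)[:, None] * w[None, :]   # chain rule through D
    Ws -= 0.5 * (x.T @ gs)
```

The design point is that the discriminator's feedback replaces the label signal: the student's output distribution is pulled toward the teacher's because matching it is the only way to fool the discriminator.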
Cite
Text
Prabhu et al. "Adversary Is the Best Teacher: Towards Extremely Compact Neural Networks." AAAI Conference on Artificial Intelligence, 2018. doi:10.1609/AAAI.V32I1.12182
Markdown
[Prabhu et al. "Adversary Is the Best Teacher: Towards Extremely Compact Neural Networks." AAAI Conference on Artificial Intelligence, 2018.](https://mlanthology.org/aaai/2018/prabhu2018aaai-adversary/) doi:10.1609/AAAI.V32I1.12182
BibTeX
@inproceedings{prabhu2018aaai-adversary,
title = {{Adversary Is the Best Teacher: Towards Extremely Compact Neural Networks}},
author = {Prabhu, Ameya and Krishna, Harish and Saha, Soham},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2018},
pages = {8137--8138},
doi = {10.1609/AAAI.V32I1.12182},
url = {https://mlanthology.org/aaai/2018/prabhu2018aaai-adversary/}
}