Logit Perturbation
Abstract
Features, logits, and labels are the three primary types of data produced as a sample passes through a deep neural network. Feature perturbation and label perturbation have received increasing attention in recent years and have proven useful in various deep learning approaches. For example, (adversarial) feature perturbation can improve the robustness and even the generalization capability of learned models. However, few studies have explicitly explored the perturbation of logit vectors. This work discusses several existing methods related to logit perturbation. Based on a unified viewpoint relating positive/negative data augmentation to the loss variations incurred by logit perturbation, a new method is proposed to explicitly learn to perturb logits. A comparative analysis is conducted between the perturbations used in our method and those in existing methods. Extensive experiments on benchmark image classification datasets and their long-tail versions indicate the competitive performance of our learning method. In addition, existing methods can be further improved by utilizing ours.
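The core idea can be illustrated with a minimal sketch (this is an assumption-laden illustration, not the paper's learned perturbation method): adding a perturbation vector to the logits before the cross-entropy loss changes the loss value, and a loss-decreasing perturbation plays a role analogous to positive data augmentation while a loss-increasing one is analogous to negative augmentation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a logit vector.
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(logits, label):
    # Standard cross-entropy loss for a single sample.
    return -np.log(softmax(logits)[label])

# Hypothetical example values (not from the paper).
logits = np.array([2.0, 0.5, -1.0])
label = 0  # true class

base_loss = cross_entropy(logits, label)

# A hand-crafted perturbation that raises the true-class logit,
# decreasing the loss: this mimics positive augmentation.
delta = np.array([0.5, 0.0, 0.0])
pos_loss = cross_entropy(logits + delta, label)

# The opposite perturbation increases the loss, mimicking
# negative augmentation.
neg_loss = cross_entropy(logits - delta, label)
```

In the paper's setting the perturbation is learned rather than hand-crafted, but the loss-variation viewpoint above is what ties logit perturbation to the two kinds of augmentation.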
Cite
Text
Li et al. "Logit Perturbation." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I2.20024
Markdown
[Li et al. "Logit Perturbation." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/li2022aaai-logit/) doi:10.1609/AAAI.V36I2.20024
BibTeX
@inproceedings{li2022aaai-logit,
title = {{Logit Perturbation}},
author = {Li, Mengyang and Su, Fengguang and Wu, Ou and Zhang, Ji},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2022},
pages = {1359-1366},
doi = {10.1609/AAAI.V36I2.20024},
url = {https://mlanthology.org/aaai/2022/li2022aaai-logit/}
}