VECA: A Method for Detecting Overfitting in Neural Networks (Student Abstract)
Abstract
Despite their widespread applications, deep neural networks often tend to overfit the training data. Here, we propose a measure called VECA (Variance of Eigenvalues of Covariance matrix of Activation matrix) and demonstrate that VECA is a good predictor of networks' generalization performance during the training process. Experiments performed on fully-connected networks and convolutional neural networks trained on benchmark image datasets show a strong correlation between test loss and VECA, which suggests that we can calculate VECA to estimate generalization performance without sacrificing training data to be used as a validation set.
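The abstract defines VECA as the variance of the eigenvalues of the covariance matrix of a layer's activation matrix. A minimal sketch of that computation, assuming the activation matrix is arranged as samples × units (the function name and array shapes here are illustrative, not from the paper):

```python
import numpy as np

def veca(activations):
    """Sketch of VECA: variance of the eigenvalues of the covariance
    matrix of an activation matrix (samples x units). Hypothetical
    helper; the paper's exact computation may differ in details."""
    # Covariance across units, estimated over the batch of samples
    cov = np.cov(activations, rowvar=False)
    # The covariance matrix is symmetric, so eigvalsh is appropriate
    eigvals = np.linalg.eigvalsh(cov)
    # VECA is the variance of these eigenvalues
    return np.var(eigvals)

# Toy usage: activations of a 32-unit layer over a 256-sample batch
rng = np.random.default_rng(0)
acts = rng.standard_normal((256, 32))
score = veca(acts)
```

Note that scaling the activations by a constant c scales the covariance eigenvalues by c², and hence VECA by c⁴, so comparisons are most meaningful across training checkpoints of the same layer.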
Cite
Text
Ge et al. "VECA: A Method for Detecting Overfitting in Neural Networks (Student Abstract)." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I10.7167
Markdown
[Ge et al. "VECA: A Method for Detecting Overfitting in Neural Networks (Student Abstract)." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/ge2020aaai-veca/) doi:10.1609/AAAI.V34I10.7167
BibTeX
@inproceedings{ge2020aaai-veca,
title = {{VECA: A Method for Detecting Overfitting in Neural Networks (Student Abstract)}},
author = {Ge, Liangzhu and Hou, Yuexian and Jiang, Yaju and Yao, Shuai and Yang, Chao},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2020},
pages = {13791-13792},
doi = {10.1609/AAAI.V34I10.7167},
url = {https://mlanthology.org/aaai/2020/ge2020aaai-veca/}
}