Robustly Train Normalizing Flows via KL Divergence Regularization
Abstract
In this paper, we find that the training of Normalizing Flows (NFs) is easily affected by outliers and by a small number (or high dimensionality) of training samples. To address this problem, we propose a Kullback–Leibler (KL) divergence regularization on the Jacobian matrix of NFs. We prove that this regularization is equivalent to adding a set of samples whose covariance matrix is the identity matrix to the training set, which simultaneously reduces the negative influence of outliers and of the small sample size on the estimation of the covariance matrix. As a result, our regularization makes the training of NFs robust. Finally, we evaluate NFs on out-of-distribution (OoD) detection tasks. The results demonstrate the effectiveness of the proposed regularization term: with it, the OoD detection score improves by up to 30% compared with training without the regularization.
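The abstract describes the method only at a high level, so the snippet below is a minimal, illustrative sketch of how such a penalty could be added to NF training. It uses a KL(N(0, Σ_z) ‖ N(0, I)) term on the batch latent covariance as a stand-in for the paper's Jacobian-based regularizer; the toy `AffineFlow`, `nll_with_kl_reg`, and the weight `lam` are hypothetical names introduced here, not taken from the paper.

```python
# Hedged sketch (assumption): a covariance-to-identity KL penalty added to the
# standard change-of-variables NLL of a toy invertible flow in PyTorch.
import torch
import torch.nn as nn

class AffineFlow(nn.Module):
    """Toy invertible map z = (x - b) * exp(-s); log|det J| = -sum(s)."""
    def __init__(self, dim):
        super().__init__()
        self.s = nn.Parameter(torch.zeros(dim))  # log-scale
        self.b = nn.Parameter(torch.zeros(dim))  # shift

    def forward(self, x):
        z = (x - self.b) * torch.exp(-self.s)
        log_det = (-self.s.sum()).expand(x.shape[0])
        return z, log_det

def nll_with_kl_reg(flow, x, lam=0.1):
    z, log_det = flow(x)
    # Negative log-likelihood under a standard normal base distribution
    # (additive constant d/2 * log(2*pi) omitted).
    nll = 0.5 * (z ** 2).sum(dim=1) - log_det
    # Empirical latent covariance of the batch (assumed regularization target).
    d = z.shape[1]
    cov = (z.T @ z) / z.shape[0]
    # KL( N(0, cov) || N(0, I) ) = 0.5 * (tr(cov) - d - logdet(cov)),
    # with a small jitter for numerical stability.
    kl = 0.5 * (torch.trace(cov) - d - torch.logdet(cov + 1e-6 * torch.eye(d)))
    return nll.mean() + lam * kl

# Usage: one optimization step on random data (dim=4, batch=64).
flow = AffineFlow(dim=4)
opt = torch.optim.Adam(flow.parameters(), lr=1e-2)
opt.zero_grad()
loss = nll_with_kl_reg(flow, torch.randn(64, 4))
loss.backward()
opt.step()
```

Pushing the latent covariance toward the identity matches the intuition stated in the abstract (behaving like extra samples with identity covariance), but the exact form and placement of the regularizer in the paper may differ.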
Cite
Song et al. "Robustly Train Normalizing Flows via KL Divergence Regularization." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I13.29426
@inproceedings{song2024aaai-robustly,
title = {{Robustly Train Normalizing Flows via KL Divergence Regularization}},
author = {Song, Kun and Solozabal, Ruben and Li, Hao and Takác, Martin and Ren, Lu and Karray, Fakhri},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
pages = {15047-15055},
doi = {10.1609/AAAI.V38I13.29426},
url = {https://mlanthology.org/aaai/2024/song2024aaai-robustly/}
}