Bridging Maximum Likelihood and Adversarial Learning via α-Divergence
Abstract
Maximum likelihood (ML) and adversarial learning are two popular approaches for training generative models, and from many perspectives these techniques are complementary. ML learning encourages the capture of all data modes, and it is typically characterized by stable training. However, ML learning tends to distribute probability mass diffusely over the data space, e.g., yielding blurry synthetic images. Adversarial learning is well known to synthesize highly realistic natural images, despite practical challenges like mode dropping and delicate training. We propose an α-Bridge to unify the advantages of ML and adversarial learning, enabling the smooth transfer from one to the other via the α-divergence. We reveal that generalizations of the α-Bridge are closely related to approaches developed recently to regularize adversarial learning, providing insights into that prior work, and further understanding of why the α-Bridge performs well in practice.
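For context, one common parameterization of the α-divergence family is the following (an editorial note, not taken from the paper itself; conventions vary across the literature, and this form follows Amari's definition):

```latex
D_\alpha(p \,\|\, q) \;=\; \frac{1}{\alpha(1-\alpha)}
\left( 1 - \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx \right)
```

In the limit $\alpha \to 1$ this recovers the forward KL divergence $\mathrm{KL}(p \,\|\, q)$, the maximum-likelihood objective, while $\alpha \to 0$ recovers the reverse KL divergence $\mathrm{KL}(q \,\|\, p)$, whose mode-seeking behavior is associated with adversarial training. Sweeping α between these endpoints is what makes a smooth transfer between the two regimes possible.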
Cite
Text

Zhao et al. "Bridging Maximum Likelihood and Adversarial Learning via α-Divergence." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I04.6172

Markdown

[Zhao et al. "Bridging Maximum Likelihood and Adversarial Learning via α-Divergence." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/zhao2020aaai-bridging/) doi:10.1609/AAAI.V34I04.6172

BibTeX
@inproceedings{zhao2020aaai-bridging,
title = {{Bridging Maximum Likelihood and Adversarial Learning via $\alpha$-Divergence}},
author = {Zhao, Miaoyun and Cong, Yulai and Dai, Shuyang and Carin, Lawrence},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2020},
pages = {6901-6908},
doi = {10.1609/AAAI.V34I04.6172},
url = {https://mlanthology.org/aaai/2020/zhao2020aaai-bridging/}
}