Multivariate-Information Adversarial Ensemble for Scalable Joint Distribution Matching
Abstract
A broad range of cross-$m$-domain generation problems boil down to matching a joint distribution with deep generative models (DGMs). Existing algorithms excel when domains come in pairs, but as $m$ increases they struggle to scale to fitting the full joint distribution. In this paper, we propose a domain-scalable DGM, MMI-ALI, for $m$-domain joint distribution matching. As an $m$-domain ensemble of ALIs (Dumoulin et al., 2016), MMI-ALI is adversarially trained to maximize the Multivariate Mutual Information (MMI) among the joint variables of each pair of domains and their shared feature. The negative MMIs are upper bounded by a series of tractable losses that provably lead to matching the $m$-domain joint distributions. MMI-ALI scales linearly as $m$ increases and thus strikes the right balance between efficacy and scalability. We evaluate MMI-ALI in diverse, challenging $m$-domain scenarios and verify its superiority.
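For orientation, the objective structure suggested by the abstract can be sketched as follows; $x_i, x_j$ denote a pair of domain variables, $z$ their shared feature, and $\mathcal{L}_{\mathrm{ALI}}^{(i)}$, $\mathcal{U}_{ij}$, $\lambda$ are placeholder names introduced here for illustration, not the paper's notation. The first line is the standard interaction-information identity under one common sign convention; the second is only a schematic of how the pairwise upper bounds could enter the adversarial training objective.

% Sketch only, under assumed notation; not the authors' exact formulation.
\[
I(x_i; x_j; z) \;=\; I(x_i; x_j) \;-\; I\bigl(x_i; x_j \mid z\bigr)
\]
\[
\min_{\text{encoders/decoders}} \;\; \max_{\text{discriminators}} \;\;
\sum_{i=1}^{m} \mathcal{L}_{\mathrm{ALI}}^{(i)}
\;+\; \lambda \sum_{i < j} \mathcal{U}_{ij},
\qquad
\mathcal{U}_{ij} \;\ge\; -\,I(x_i; x_j; z).
\]

Since each $\mathcal{U}_{ij}$ upper-bounds a negative pairwise MMI, minimizing the combined loss pushes every pairwise MMI up, which the paper argues provably leads to matching the $m$-domain joint distributions.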
Cite
Text
Chen et al. "Multivariate-Information Adversarial Ensemble for Scalable Joint Distribution Matching." International Conference on Machine Learning, 2019.Markdown
[Chen et al. "Multivariate-Information Adversarial Ensemble for Scalable Joint Distribution Matching." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/chen2019icml-multivariateinformation/)BibTeX
@inproceedings{chen2019icml-multivariateinformation,
title = {{Multivariate-Information Adversarial Ensemble for Scalable Joint Distribution Matching}},
author = {Chen, Ziliang and Yang, Zhanfu and Wang, Xiaoxi and Liang, Xiaodan and Yan, Xiaopeng and Li, Guanbin and Lin, Liang},
booktitle = {International Conference on Machine Learning},
year = {2019},
pages = {1112--1121},
volume = {97},
url = {https://mlanthology.org/icml/2019/chen2019icml-multivariateinformation/}
}