Learning the Dependency Structure of Latent Factors
Abstract
In this paper, we study latent factor models with a dependency structure in the latent space. We propose a general learning framework that induces sparsity on the undirected graphical model imposed over the vector of latent factors. A novel latent factor model, SLFA, is then formulated as a matrix factorization problem with a special regularization term that encourages collaborative reconstruction. The main novelty of the model is that it can simultaneously learn a lower-dimensional representation of the data and explicitly model the pairwise relationships between latent factors. An online learning algorithm is devised to make the model feasible for large-scale learning problems. Experimental results on two synthetic and two real-world data sets demonstrate that the pairwise relationships and latent factors learned by our model provide a more structured way of exploring high-dimensional data, and that the learned representations achieve state-of-the-art classification performance.
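As a rough illustration of the general idea, the following is a minimal sketch, assuming an objective of the form ‖X − BS‖² plus an ℓ1 penalty on the off-diagonal entries of the factors' empirical second-moment matrix to encourage a sparse pairwise structure. The variable names (B, S, lam), the exact regularizer, and the plain gradient updates are illustrative assumptions; this is not the paper's actual SLFA formulation or its online algorithm.

```python
import numpy as np

# Hypothetical sketch: factorize X ~ B @ S while penalizing the off-diagonal
# entries of the latent second-moment matrix C = S @ S.T / n, loosely mirroring
# the idea of inducing sparsity on the pairwise structure of latent factors.
rng = np.random.default_rng(0)
m, n, k = 50, 200, 10                    # features, samples, latent factors
X = rng.standard_normal((m, n))

B = rng.standard_normal((m, k)) * 0.1    # basis (dictionary)
S = rng.standard_normal((k, n)) * 0.1    # latent factor activations
lam, lr = 0.1, 1e-3                      # illustrative hyperparameters
off_diag = 1.0 - np.eye(k)               # mask selecting pairwise (i != j) terms

for step in range(500):
    R = B @ S - X                        # reconstruction residual
    C = (S @ S.T) / n                    # empirical second moment of factors
    grad_B = R @ S.T / n
    # Gradient of ||R||^2/(2n), plus a subgradient of lam * sum_{i!=j} |C_ij|:
    # d tr(M @ S @ S.T / n) / dS = 2 * M @ S / n for symmetric M = mask * sign(C).
    grad_S = B.T @ R / n + 2 * lam * (off_diag * np.sign(C)) @ S / n
    B -= lr * grad_B
    S -= lr * grad_S

print("reconstruction MSE:", np.mean((B @ S - X) ** 2))
print("off-diagonal mass :", np.abs(off_diag * (S @ S.T) / n).sum())
```

A smaller off-diagonal mass after training indicates that the penalty is pushing the factors toward a sparser pairwise dependency pattern, which is the qualitative behavior the abstract describes.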
Cite
Text
He et al. "Learning the Dependency Structure of Latent Factors." Neural Information Processing Systems, 2012.
Markdown
[He et al. "Learning the Dependency Structure of Latent Factors." Neural Information Processing Systems, 2012.](https://mlanthology.org/neurips/2012/he2012neurips-learning/)
BibTeX
@inproceedings{he2012neurips-learning,
title = {{Learning the Dependency Structure of Latent Factors}},
author = {He, Yunlong and Qi, Yanjun and Kavukcuoglu, Koray and Park, Haesun},
booktitle = {Neural Information Processing Systems},
year = {2012},
pages = {2366--2374},
url = {https://mlanthology.org/neurips/2012/he2012neurips-learning/}
}