Dimensionality Reduction with Generalized Linear Models
Abstract
In this paper, we propose a general dimensionality reduction method, called Generalized Linear Principal Component Analysis (GLPCA), for data generated from a very broad family of distributions and nonlinear functions based on the generalized linear model. Data from different domains often have very different structures, and can be modeled by different distributions and reconstruction functions. For example, real-valued data can be modeled by the Gaussian distribution with a linear reconstruction function, whereas binary-valued data may be more appropriately modeled by the Bernoulli distribution with a logit or probit function. Based on generalized linear models, we propose a unified framework for extracting features from data of different domains. A general optimization algorithm based on natural gradient ascent on the distribution manifold is proposed for obtaining the maximum likelihood solutions. We also present specific algorithms derived from this framework for particular data modeling problems, such as document modeling. Experimental results of these algorithms on several data sets are presented to validate GLPCA.
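To make the binary-data case concrete, the sketch below fits a rank-k matrix of natural parameters Theta = U Vᵀ to a binary matrix under a Bernoulli likelihood with a logit link, as the abstract describes. This is an illustrative simplification, not the authors' code: it uses plain Euclidean gradient ascent on the log-likelihood rather than the paper's natural gradient ascent on the distribution manifold, and all function names, step sizes, and the synthetic data are assumptions.

```python
import numpy as np

def sigmoid(t):
    """Logit link inverse: maps natural parameters to Bernoulli means."""
    return 1.0 / (1.0 + np.exp(-t))

def bernoulli_pca(X, k, steps=500, lr=0.01, seed=0):
    """Fit rank-k natural parameters Theta = U @ V.T to binary X by
    gradient ascent on the Bernoulli log-likelihood.

    Simplified stand-in for GLPCA's Bernoulli/logit instance; the paper
    itself optimizes with natural gradient ascent.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    U = 0.1 * rng.standard_normal((n, k))
    V = 0.1 * rng.standard_normal((d, k))
    for _ in range(steps):
        # Gradient of the log-likelihood w.r.t. Theta is X - sigmoid(Theta);
        # chain rule gives the factor gradients R @ V and R.T @ U.
        R = X - sigmoid(U @ V.T)
        U, V = U + lr * (R @ V), V + lr * (R.T @ U)
    return U, V

# Illustrative demo: binary data with planted low-rank logit structure.
rng = np.random.default_rng(1)
Theta_true = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 30))
X = (rng.random((100, 30)) < sigmoid(2.0 * Theta_true)).astype(float)

U, V = bernoulli_pca(X, k=2)
Theta = U @ V.T
ll = np.sum(X * Theta - np.logaddexp(0, Theta))  # Bernoulli log-likelihood
baseline = -X.size * np.log(2.0)                 # log-likelihood at Theta = 0
```

The fitted log-likelihood should exceed the trivial Theta = 0 baseline; for Gaussian data with an identity link, the same template reduces to ordinary PCA's squared-error objective.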
Cite
Text
Chen et al. "Dimensionality Reduction with Generalized Linear Models." International Joint Conference on Artificial Intelligence, 2013.
Markdown
[Chen et al. "Dimensionality Reduction with Generalized Linear Models." International Joint Conference on Artificial Intelligence, 2013.](https://mlanthology.org/ijcai/2013/chen2013ijcai-dimensionality/)
BibTeX
@inproceedings{chen2013ijcai-dimensionality,
title = {{Dimensionality Reduction with Generalized Linear Models}},
author = {Chen, Mo and Li, Wei and Wang, Xiaogang and Zhang, Wei},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2013},
pages = {1267--1272},
url = {https://mlanthology.org/ijcai/2013/chen2013ijcai-dimensionality/}
}