Stochastic Optimization for Multiview Representation Learning Using Partial Least Squares

Abstract

Partial Least Squares (PLS) is a ubiquitous statistical technique for bilinear factor analysis. It is used in many data analysis, machine learning, and information retrieval applications to model the covariance structure between a pair of data matrices. In this paper, we consider PLS for representation learning in a multiview setting where we have more than one view of the data at training time. Furthermore, instead of framing PLS as a problem about a fixed given data set, we argue that PLS should be studied as a stochastic optimization problem, especially in a "big data" setting, with the goal of optimizing a population objective based on a sample. This view suggests using Stochastic Approximation (SA) approaches, such as Stochastic Gradient Descent (SGD), and enables a rigorous analysis of their benefits. In this paper, we develop SA approaches to PLS and provide iteration complexity bounds for the proposed algorithms.
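
To illustrate the stochastic-approximation view described above, the following is a minimal, hypothetical sketch (not the algorithm from the paper) of a projected stochastic gradient update for the top PLS direction pair, i.e., unit vectors u, v maximizing the population objective u^T E[x y^T] v from streaming samples (x_t, y_t). The function and variable names (`stochastic_pls_top_direction`, `toy_stream`) are made up for this example.

```python
import numpy as np

def stochastic_pls_top_direction(sample_stream, dx, dy, step=0.1):
    """Sketch of stochastic approximation for the top PLS direction pair.

    Performs projected stochastic gradient ascent on u^T E[x y^T] v over
    unit vectors u, v, using one sample (x_t, y_t) per iteration.
    Illustrative only; not the paper's algorithm or its step-size schedule.
    """
    rng = np.random.default_rng(0)
    u = rng.normal(size=dx); u /= np.linalg.norm(u)
    v = rng.normal(size=dy); v /= np.linalg.norm(v)
    for t, (x, y) in enumerate(sample_stream, start=1):
        eta = step / np.sqrt(t)        # decaying step size
        u = u + eta * x * (y @ v)      # stochastic gradient w.r.t. u
        v = v + eta * y * (x @ u)      # stochastic gradient w.r.t. v
        u /= np.linalg.norm(u)         # project back onto the unit sphere
        v /= np.linalg.norm(v)
    return u, v

# Toy usage: two views generated from a shared latent signal.
def toy_stream(n=5000, dx=10, dy=8, seed=1):
    rng = np.random.default_rng(seed)
    A, B = rng.normal(size=(dx, 2)), rng.normal(size=(dy, 2))
    for _ in range(n):
        z = rng.normal(size=2)
        yield A @ z + 0.1 * rng.normal(size=dx), B @ z + 0.1 * rng.normal(size=dy)

u, v = stochastic_pls_top_direction(toy_stream(), dx=10, dy=8)
```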

Cite

Text

Arora et al. "Stochastic Optimization for Multiview Representation Learning Using Partial Least Squares." International Conference on Machine Learning, 2016.

Markdown

[Arora et al. "Stochastic Optimization for Multiview Representation Learning Using Partial Least Squares." International Conference on Machine Learning, 2016.](https://mlanthology.org/icml/2016/arora2016icml-stochastic/)

BibTeX

@inproceedings{arora2016icml-stochastic,
  title     = {{Stochastic Optimization for Multiview Representation Learning Using Partial Least Squares}},
  author    = {Arora, Raman and Mianjy, Poorya and Marinov, Teodor},
  booktitle = {International Conference on Machine Learning},
  year      = {2016},
  pages     = {1786--1794},
  volume    = {48},
  url       = {https://mlanthology.org/icml/2016/arora2016icml-stochastic/}
}