Convergence Analysis of Kernel Canonical Correlation Analysis: Theory and Practice
Abstract
Canonical Correlation Analysis is a technique for finding pairs of basis vectors that maximise the correlation of a set of paired variables; these pairs can be considered as two views of the same object. This paper provides a convergence analysis of Canonical Correlation Analysis by defining a pattern function that captures the degree to which the features from the two views are similar. We analyse the convergence using Rademacher complexity, hence deriving an error bound for new data. The analysis provides further justification for the regularisation of kernel Canonical Correlation Analysis and is corroborated by experiments on real-world data.
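The regularisation point in the abstract can be illustrated with a small sketch. This is not the authors' code: it is a toy regularised kernel-CCA computation on synthetic two-view data with linear kernels, using the standard reduction in which the top eigenvalue of (Kx + κI)⁻¹ Ky (Ky + κI)⁻¹ Kx is the squared leading canonical correlation; the function names and data are illustrative assumptions.

```python
import numpy as np

def centre(K):
    """Centre a kernel matrix in feature space: K -> HKH with H = I - 11'/n."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kcca_top_correlation(Kx, Ky, kappa):
    """Leading canonical correlation of regularised kernel CCA.

    Solves the reduced eigenproblem
        (Kx + kappa I)^{-1} Ky (Ky + kappa I)^{-1} Kx alpha = rho^2 alpha,
    where kappa > 0 is the regularisation parameter.
    """
    n = Kx.shape[0]
    I = np.eye(n)
    M = np.linalg.solve(Kx + kappa * I, Ky) @ np.linalg.solve(Ky + kappa * I, Kx)
    rho2 = np.linalg.eigvals(M).real.max()  # tiny imaginary parts are numerical noise
    return np.sqrt(max(rho2, 0.0))

# Two noisy views of the same latent signal Z.
rng = np.random.default_rng(0)
Z = rng.standard_normal((60, 2))
X = Z + 0.1 * rng.standard_normal((60, 2))
Y = Z + 0.1 * rng.standard_normal((60, 2))
Kx, Ky = centre(X @ X.T), centre(Y @ Y.T)  # linear kernels

rho = kcca_top_correlation(Kx, Ky, kappa=0.1)          # light regularisation
rho_heavy = kcca_top_correlation(Kx, Ky, kappa=100.0)  # heavy regularisation
print(rho, rho_heavy)  # heavier regularisation shrinks the estimated correlation
```

As kappa tends to zero with full-rank kernel matrices, the empirical correlation is driven to 1 regardless of any genuine association between the views; this overfitting is what the paper's bound and its argument for regularisation address.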
Cite
Text
Hardoon and Shawe-Taylor. "Convergence Analysis of Kernel Canonical Correlation Analysis: Theory and Practice." Machine Learning, 2009. doi:10.1007/S10994-008-5085-3
Markdown
[Hardoon and Shawe-Taylor. "Convergence Analysis of Kernel Canonical Correlation Analysis: Theory and Practice." Machine Learning, 2009.](https://mlanthology.org/mlj/2009/hardoon2009mlj-convergence/) doi:10.1007/S10994-008-5085-3
BibTeX
@article{hardoon2009mlj-convergence,
title = {{Convergence Analysis of Kernel Canonical Correlation Analysis: Theory and Practice}},
author = {Hardoon, David R. and Shawe-Taylor, John},
journal = {Machine Learning},
year = {2009},
pages = {23--38},
doi = {10.1007/S10994-008-5085-3},
volume = {74},
url = {https://mlanthology.org/mlj/2009/hardoon2009mlj-convergence/}
}