Matrix Exponential Gradient Updates for On-Line Learning and Bregman Projection
Abstract
We address the problem of learning a symmetric positive definite matrix. The central issue is to design parameter updates that preserve positive definiteness. Our updates are motivated by the von Neumann divergence. Rather than treating the most general case, we focus on two key applications that exemplify our methods: On-line learning with a simple square loss and finding a symmetric positive definite matrix subject to symmetric linear constraints. The updates generalize the Exponentiated Gradient (EG) update and AdaBoost, respectively: the parameter is now a symmetric positive definite matrix of trace one instead of a probability vector (which in this context is a diagonal positive definite matrix with trace one). The generalized updates use matrix logarithms and exponentials to preserve positive definiteness. Most importantly, we show how the analysis of each algorithm generalizes to the non-diagonal case. We apply both new algorithms, called the Matrix Exponentiated Gradient (MEG) update and DefiniteBoost, to learn a kernel matrix from distance measurements.
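The abstract's key mechanism is the update in the matrix-log domain: take a gradient step on log W, apply the matrix exponential to return a symmetric positive definite matrix, and renormalize to trace one. Below is a minimal Python sketch of one such step for the square loss mentioned above; the function name `meg_update`, the step size `eta`, and the use of SciPy's `expm`/`logm` are illustrative choices, not the paper's exact implementation.

```python
import numpy as np
from scipy.linalg import expm, logm

def meg_update(W, X, y, eta=0.1):
    """One Matrix Exponentiated Gradient-style step for the square loss
    L(W) = (tr(W X) - y)^2 (a sketch, not the paper's tuned procedure).
    The log/exp pair keeps W symmetric positive definite; the final
    division keeps tr(W) = 1."""
    y_hat = np.trace(W @ X)            # prediction from the current parameter matrix
    grad = 2.0 * (y_hat - y) * X       # gradient of the square loss (X assumed symmetric)
    A = logm(W) - eta * grad           # gradient step in the matrix-log domain
    A = (A + A.T) / 2.0                # symmetrize to guard against numerical drift
    W_new = expm(A)                    # matrix exponential restores positive definiteness
    return W_new / np.trace(W_new)     # renormalize to trace one

# Tiny usage example with a random symmetric instance matrix (hypothetical data).
n = 4
rng = np.random.default_rng(0)
X = rng.standard_normal((n, n))
X = (X + X.T) / 2.0
W = np.eye(n) / n                      # start from the maximum-entropy parameter
W = meg_update(W, X, y=0.5)
print(np.trace(W), np.all(np.linalg.eigvalsh(W) > 0))
```

The exponential/normalization step mirrors how EG keeps a probability vector on the simplex: here the "simplex" is the set of symmetric positive definite matrices with trace one.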
Cite
Text
Tsuda et al. "Matrix Exponential Gradient Updates for On-Line Learning and Bregman Projection." Neural Information Processing Systems, 2004.
Markdown
[Tsuda et al. "Matrix Exponential Gradient Updates for On-Line Learning and Bregman Projection." Neural Information Processing Systems, 2004.](https://mlanthology.org/neurips/2004/tsuda2004neurips-matrix/)
BibTeX
@inproceedings{tsuda2004neurips-matrix,
title = {{Matrix Exponential Gradient Updates for On-Line Learning and Bregman Projection}},
author = {Tsuda, Koji and Rätsch, Gunnar and Warmuth, Manfred K.},
booktitle = {Neural Information Processing Systems},
year = {2004},
pages = {1425--1432},
url = {https://mlanthology.org/neurips/2004/tsuda2004neurips-matrix/}
}