When Is There a Free Matrix Lunch?
Abstract
The "no-free-lunch theorems" essentially say that for any two algorithms A and B, there are "as many" targets (or priors over targets) for which A has lower expected loss than B as vice versa. This can be made precise for certain loss functions [WM97]. This note concerns itself with cases where seemingly harder matrix versions of the algorithms have the same on-line loss bounds as the corresponding vector versions. So it seems that you get a free "matrix lunch". (Our title is, however, not meant to imply that we have a technical refutation of the no-free-lunch theorems.)
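The "as many targets" symmetry in the abstract can be checked by brute force on a toy domain. The sketch below (illustrative only; the learning rules and domain size are assumptions, not from the paper) enumerates every binary target on four points and counts, for two opposite learning rules, how often each has strictly lower off-training-set error. Flipping the target's labels on the unseen points swaps the two error counts, so the win counts come out equal.

```python
from itertools import product

DOMAIN = range(4)   # four input points
TRAIN = [0, 1]      # points whose labels the learner sees
TEST = [2, 3]       # off-training-set points

def alg_a(train_labels):
    # hypothetical rule: predict the majority training label everywhere
    return 1 if sum(train_labels) > len(train_labels) / 2 else 0

def alg_b(train_labels):
    # hypothetical rule: predict the opposite of alg_a
    return 1 - alg_a(train_labels)

wins_a = wins_b = 0
for target in product([0, 1], repeat=len(DOMAIN)):  # all 16 binary targets
    train_labels = [target[i] for i in TRAIN]
    pred_a, pred_b = alg_a(train_labels), alg_b(train_labels)
    err_a = sum(target[i] != pred_a for i in TEST)
    err_b = sum(target[i] != pred_b for i in TEST)
    wins_a += err_a < err_b
    wins_b += err_b < err_a

# The counts match: A beats B on exactly as many targets as B beats A.
print(wins_a, wins_b)
```

Any other pair of deterministic rules gives the same tie, since the bit-flip on the unseen labels is a bijection on targets that exchanges the two error values.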
Cite
Warmuth, Manfred K. "When Is There a Free Matrix Lunch?" Annual Conference on Computational Learning Theory, 2007, pp. 630-632. doi:10.1007/978-3-540-72927-3_48
@inproceedings{warmuth2007colt-there,
title = {{When Is There a Free Matrix Lunch?}},
author = {Warmuth, Manfred K.},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2007},
pages = {630--632},
doi = {10.1007/978-3-540-72927-3_48},
url = {https://mlanthology.org/colt/2007/warmuth2007colt-there/}
}