Generalize Across Tasks: Efficient Algorithms for Linear Representation Learning
Abstract
We present provable algorithms for learning linear representations that are trained in a supervised fashion across a number of tasks. Whereas previous methods in multitask learning only allow for generalization to tasks that have already been observed, our representations are both efficiently learnable and accompanied by generalization guarantees for unseen tasks. Our method relies on a certain convex relaxation of a non-convex problem, making it amenable to online learning procedures. We further ensure that a low-rank representation is maintained, and we allow for various trade-offs between sample complexity and per-iteration cost, depending on the choice of algorithm.
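The abstract names the main ingredients (a convex relaxation of a non-convex low-rank problem, online-style updates, and control of the representation's rank) without spelling them out, so the following is a minimal illustrative sketch of that general technique, not the authors' algorithm: per-task linear predictors sharing low-rank structure are learned by replacing the rank constraint with the standard nuclear-norm surrogate and running proximal gradient descent (singular-value thresholding). The function names (svd_shrink, multitask_trace_norm) and all parameter values below are assumptions made for illustration.

# Illustrative sketch (assumed setup, not the paper's method): multitask linear
# regression with a nuclear-norm penalty as the convex surrogate for low rank.
import numpy as np

def svd_shrink(W, tau):
    # Proximal operator of tau * ||W||_*: soft-threshold the singular values.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def multitask_trace_norm(Xs, ys, lam=0.1, step=0.01, iters=500):
    # Squared-loss multitask regression with a nuclear-norm penalty.
    # Xs, ys: per-task data matrices (n_t x d) and targets (n_t,).
    # Returns W (d x T); its top singular subspace acts as a shared linear
    # representation across the T tasks.
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    for _ in range(iters):
        grad = np.zeros_like(W)
        for t in range(T):
            grad[:, t] = Xs[t].T @ (Xs[t] @ W[:, t] - ys[t]) / len(ys[t])
        W = svd_shrink(W - step * grad, step * lam)  # gradient step, then prox
    return W

# Toy usage: T tasks whose true predictors lie in a shared rank-2 subspace.
rng = np.random.default_rng(0)
d, T, n, r = 20, 10, 50, 2
B = rng.standard_normal((d, r))            # shared representation (unknown to the learner)
W_true = B @ rng.standard_normal((r, T))   # per-task predictors in its span
Xs = [rng.standard_normal((n, d)) for _ in range(T)]
ys = [X @ W_true[:, t] + 0.01 * rng.standard_normal(n) for t, X in enumerate(Xs)]
W_hat = multitask_trace_norm(Xs, ys)
print("singular values of learned W:", np.round(np.linalg.svd(W_hat, compute_uv=False), 2))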
Cite
Text
Bullins et al. "Generalize Across Tasks: Efficient Algorithms for Linear Representation Learning." Proceedings of the 30th International Conference on Algorithmic Learning Theory, 2019.
Markdown
[Bullins et al. "Generalize Across Tasks: Efficient Algorithms for Linear Representation Learning." Proceedings of the 30th International Conference on Algorithmic Learning Theory, 2019.](https://mlanthology.org/alt/2019/bullins2019alt-generalize/)
BibTeX
@inproceedings{bullins2019alt-generalize,
title = {{Generalize Across Tasks: Efficient Algorithms for Linear Representation Learning}},
author = {Bullins, Brian and Hazan, Elad and Kalai, Adam and Livni, Roi},
booktitle = {Proceedings of the 30th International Conference on Algorithmic Learning Theory},
year = {2019},
pages = {235-246},
volume = {98},
url = {https://mlanthology.org/alt/2019/bullins2019alt-generalize/}
}