Lifelong Optimization with Low Regret
Abstract
In this work, we study a problem arising at the intersection of two lines of work: online optimization and lifelong learning. In this problem, a sequence of tasks arrives one by one, and within each task, we must make decisions one after another and then suffer the corresponding losses. The tasks are related in that they share a common representation, but they differ in that each requires a different predictor on top of that representation. As learning a representation is usually costly in lifelong learning scenarios, the goal is to learn it continuously over time, across tasks, so that later tasks become easier to learn than earlier ones. We provide such learning algorithms with good regret bounds, which can be seen as natural generalizations of prior work on online optimization.
Cite
Text
Wu et al. "Lifelong Optimization with Low Regret." Artificial Intelligence and Statistics, 2019.
Markdown
[Wu et al. "Lifelong Optimization with Low Regret." Artificial Intelligence and Statistics, 2019.](https://mlanthology.org/aistats/2019/wu2019aistats-lifelong/)
BibTeX
@inproceedings{wu2019aistats-lifelong,
title = {{Lifelong Optimization with Low Regret}},
author = {Wu, Yi-Shan and Wang, Po-An and Lu, Chi-Jen},
booktitle = {Artificial Intelligence and Statistics},
year = {2019},
pages = {448--456},
volume = {89},
url = {https://mlanthology.org/aistats/2019/wu2019aistats-lifelong/}
}